Indicators for Monitoring Undergraduate STEM Education
Committee on Developing Indicators for Undergraduate STEM Education

Mark B. Rosenberg, Margaret L. Hilton, and Kenne A. Dibner, Editors

Board on Science Education

Division of Behavioral and Social Sciences and Education

A Consensus Study Report of the National Academies of Sciences, Engineering, and Medicine


THE NATIONAL ACADEMIES PRESS  500 Fifth Street, NW  Washington, DC 20001

This study was supported by Contract/Grant No. 1533989 between the National
Academy of Sciences and the National Science Foundation. Any opinions, findings,
conclusions, or recommendations expressed in this publication do not necessarily
reflect the views of any organization or agency that provided support for the project.

International Standard Book Number-13:  978-0-309-46788-9


International Standard Book Number-10:  0-309-46788-8
Library of Congress Control Number:  2018930772
Digital Object Identifier: https://doi.org/10.17226/24943

Additional copies of this publication are available for sale from the National Acad-
emies Press, 500 Fifth Street, NW, Keck 360, Washington, DC 20001; (800) 624-
6242 or (202) 334-3313; http://www.nap.edu.

Copyright 2018 by the National Academy of Sciences. All rights reserved.

Printed in the United States of America

Suggested citation: National Academies of Sciences, Engineering, and Medicine.


(2018). Indicators for Monitoring Undergraduate STEM Education. Washington,
DC: The National Academies Press. doi: https://doi.org/10.17226/24943.


The National Academy of Sciences was established in 1863 by an Act of


Congress, signed by President Lincoln, as a private, nongovernmental institu-
tion to advise the nation on issues related to science and technology. Members
are elected by their peers for outstanding contributions to research. Dr. Marcia
McNutt is president.

The National Academy of Engineering was established in 1964 under the char-
ter of the National Academy of Sciences to bring the practices of engineering
to advising the nation. Members are elected by their peers for extraordinary
contributions to engineering. Dr. C. D. Mote, Jr., is president.

The National Academy of Medicine (formerly the Institute of Medicine) was


established in 1970 under the charter of the National Academy of Sciences to
advise the nation on medical and health issues. Members are elected by their
peers for distinguished contributions to medicine and health. Dr. Victor J. Dzau
is president.

The three Academies work together as the National Academies of Sciences,


Engineering, and Medicine to provide independent, objective analysis and
advice to the nation and conduct other activities to solve complex problems and
inform public policy decisions. The National Academies also encourage education
and research, recognize outstanding contributions to knowledge, and increase
public understanding in matters of science, engineering, and medicine.

Learn more about the National Academies of Sciences, Engineering, and Medicine
at www.nationalacademies.org.


Consensus Study Reports published by the National Academies of Sciences,


Engineering, and Medicine document the evidence-based consensus on the
study’s statement of task by an authoring committee of experts. Reports typically
include findings, conclusions, and recommendations based on information
gathered by the committee and the committee’s deliberations. Each report
has been subjected to a rigorous and independent peer-review process and it
represents the position of the National Academies on the statement of task.

Proceedings published by the National Academies of Sciences, Engineering, and


Medicine chronicle the presentations and discussions at a workshop, symposium,
or other event convened by the National Academies. The statements and opin-
ions contained in proceedings are those of the participants and are not endorsed
by other participants, the planning committee, or the National Academies.

For information about other products and activities of the National Academies,
please visit www.nationalacademies.org/about/whatwedo.


COMMITTEE ON DEVELOPING INDICATORS FOR UNDERGRADUATE STEM EDUCATION

Mark B. Rosenberg (Chair), Florida International University


Heather Belmont, School of Science, Miami Dade College
Charles Blaich, Center of Inquiry and the Higher Education Data Sharing
Consortium, Wabash College
Mark Connolly, Wisconsin Center for Education Research, University of
Wisconsin–Madison
Stephen Director, Northeastern University (Provost Emeritus)
Kevin Eagan, Higher Education Research Institute, University of
California, Los Angeles
Susan Elrod, Academic Affairs, University of Wisconsin–Whitewater
Kaye Husbands Fealing, School of Public Policy, Georgia Institute of
Technology
Stuart Feldman, Schmidt Sciences, Schmidt Philanthropies, Palo Alto, CA
Charles Henderson, Department of Physics and Mallinson Institute for
Science Education, Western Michigan University
Lindsey Malcom-Piqueux, Center for Urban Education, University of
Southern California
Marco Molinaro, Center for Educational Effectiveness, University of
California, Davis
Rosa Rivera-Hainaj, Academic Affairs, Our Lady of the Lake University,
San Antonio
Gabriela Weaver, Faculty Development, University of Massachusetts,
Amherst
Yu Xie, Princeton Institute for International and Regional Studies,
Princeton University

Margaret Hilton, Study Director


Kenne Dibner, Deputy Study Director
Brenezza DaParre Garcia, Consultant
Leticia Garcilazo Green, Senior Program Assistant
Heidi Schweingruber, Director, Board on Science Education


BOARD ON SCIENCE EDUCATION

Adam Gamoran (Chair), William T. Grant Foundation, New York, NY


Sunita V. Cooke, MiraCosta College
Melanie Cooper, Department of Chemistry, Michigan State University
Rodolfo Dirzo, Department of Biology, Stanford University
Rush Holt, American Association for the Advancement of Science,
Washington, DC
Matthew Krehbiel, Achieve, Inc., Washington, DC
Michael Lach, Urban Education Institute, University of Chicago
Lynn S. Liben, Department of Psychology, Pennsylvania State University
Cathy Manduca, Science Education Resource Center, Carleton College
John Mather, Goddard Space Flight Center, National Aeronautics and
Space Administration
Tonya Matthews, Michigan Science Center, Detroit
Brian Reiser, School of Education and Social Policy, Northwestern
University
Marshall “Mike” Smith, Carnegie Foundation for the Advancement of
Teaching, Stanford, CA
Roberta Tanner, Thompson School District (retired), Loveland, CO
Suzanne Wilson, Neag School of Education, University of Connecticut

Heidi Schweingruber, Director


Acknowledgments

This Consensus Study Report represents the work of many individuals,
especially those who served on the committee and participated in
the committee’s open sessions. The first thanks are to the committee
members for their deep knowledge and contributions to the study.
This report was made possible by the important contributions of the
National Science Foundation (NSF). We particularly thank Susan Singer,
the former director of NSF’s Division of Undergraduate Education, who
requested the study.
The committee benefited from presentations by, and discussions with,
the many individuals who participated in our three fact-finding meetings,
in January, February, and April 2016. We thank Alicia Dowd, Pennsylvania
State University; Jeff Gold, California State University Office of the
Chancellor; Beethika Khan, National Center for Science and Engineering
Statistics; Shirley Malcom, American Association for the Advancement of
Science; Jordan Matsudaira, Cornell University; Alexei Matveev, South-
ern Association of Colleges and Schools; Emily Miller and Josh Trapani,
Association of American Universities; Chris Rasmussen, San Diego State
University; and Matthew Wilson, National Science Board.
The committee also thanks the experts who discussed the public com-
ment draft during the committee’s October 2016 public meeting: Susan
Ambrose, Northeastern University, Boston, Massachusetts; Mica Estrada,
University of California, San Francisco; Adam Gamoran, William T. Grant
Foundation; Jillian Kinzie, Indiana University; Annette Parker, South Central
College, Minnesota; Kacy Redd, Association of Public and Land-Grant Uni-



versities; Deborah Santiago, Excelencia in Education; Susan Singer, Rollins


College; Linda Slakey, University of Massachusetts, Amherst, Coalition for
Reform of Undergraduate STEM Education; and Lee Zia, NSF Division of
Undergraduate Education. In addition, the committee benefited from the
many individuals and organizations that provided written comments on
the public comment draft.
This Consensus Study Report has been reviewed in draft form by
individuals chosen for their diverse perspectives and technical expertise.
The purpose of this independent review is to provide candid and critical
comments that will assist the National Academies of Sciences, Engineer-
ing, and Medicine in making its published report as sound as possible and
to ensure that it meets institutional standards for objectivity, evidence,
and responsiveness to the study charge. The review comments and draft
manuscript remain confidential to protect the integrity of the deliberative
process. We thank the following individuals for their review of this report:
Ann Austin, Department of Educational Administration, Michigan State
University; George R. Boggs, Palomar College (president emeritus), San
Marcos, California; Linnea Fletcher, Department of Biotechnology, Austin
Community College, Austin, Texas; Adam Gamoran, president, W.T. Grant
Foundation, New York, New York; Judith Harackiewicz, Department of
Psychology, University of Wisconsin–Madison; Joan Herman, Graduate
School of Education and Information Studies, University of California, Los
Angeles; Paul R. Hernandez, Department of Learning Science and Human
Development, West Virginia University; Monika E. Kress, Department of
Physics and Astronomy, San Jose State University; Sally F. Mason, Univer-
sity of Iowa (president emerita); Andrew M. Penner, School of Social Sci-
ences, University of California, Irvine; and Carl E. Wieman, Department of
Physics, Stanford University.
Although the reviewers listed above provided many constructive com-
ments and suggestions, they were not asked to endorse the content of the
report nor did they see the final draft of the report before its release. The
review of this report was overseen by Greg J. Duncan, School of Education,
University of California, Irvine, and Paul R. Gray, Department of Electri-
cal Engineering and Computer Sciences, University of California, Berkeley
(emeritus). They were responsible for making certain that an independent
examination of this report was carried out in accordance with the standards
of the National Academies and that all review comments were carefully
considered. Responsibility for the final content of this report rests entirely
with the authoring committee and the National Academies.
Thanks are also due to the project staff: Margaret Hilton, Kenne
Dibner, Heidi Schweingruber, and Leticia Garcilazo Green, and to our con-
sultant, Brenezza DaParre Garcia.


Staff of the Division of Behavioral and Social Sciences and Education


also provided help: Eugenia Grohman substantially improved the read-
ability of the report, Kirsten Sampson-Snyder expertly guided the report
through the report review process, and Yvonne Wise masterfully managed
the production of the report.

Mark B. Rosenberg, Chair


Committee on Developing Indicators
for Undergraduate STEM Education


Contents

SUMMARY 1

1 INTRODUCTION 13
Interpreting the Study Charge, 15
Vision, 15
A Focus on the National Level, 18
Equity, Diversity, and Inclusion, 19
Goals and Objectives, 20
Measures and Indicators, 20
Undergraduate STEM Education, 22
Evidence-Based STEM Educational Practices and Programs, 22
Measuring College Quality in an Era of Accountability, 22
Employment Outcomes, 23
The STEM Workforce, 25
Learning Outcomes, 27
Goals of the Indicator System, 30
Study Approach and Organization of the Report, 31
References, 32

2 CONCEPTUAL FRAMEWORK FOR THE INDICATOR SYSTEM 37
A Systems View of Higher Education, 38
Goals for Undergraduate STEM Education, 40
Goal 1: Increase Students’ Mastery of STEM Concepts
and Skills, 42


Goal 2: Strive for Equity, Diversity, and Inclusion, 43


Goal 3: Ensure Adequate Numbers of STEM Professionals, 43
Articulating Goals as Objectives, 44
The Federal STEM Education Strategic Plan, 44
Criteria for Identifying Objectives, 45
The Objectives, 46
Proposed Indicators, 47
Conclusion, 47
References, 50

3 GOAL 1: INCREASE STUDENTS’ MASTERY OF STEM CONCEPTS AND SKILLS 55
Objective 1.1: Use of Evidence-Based Educational Practices
Both In and Outside of Classrooms, 57
Importance of the Objective, 57
Proposed Indicators, 66
Objective 1.2: Existence and Use of Supports that Help STEM
Instructors Use Evidence-Based Educational Practices, 67
Importance of the Objective, 67
Proposed Indicators, 69
Objective 1.3: An Institutional Culture that Values
Undergraduate STEM Instruction, 71
Importance of the Objective, 71
Proposed Indicators, 73
Objective 1.4: Continuous Improvement in STEM
Teaching and Learning, 75
Importance of the Objective, 75
Challenges of Measuring Continuous Improvement, 77
References, 79

4 GOAL 2: STRIVE FOR EQUITY, DIVERSITY, AND INCLUSION 87
Objective 2.1: Equity of Access to High-Quality Undergraduate
STEM Educational Programs and Experiences, 90
Importance of the Objective, 90
Proposed Indicators, 91
Objective 2.2: Representational Diversity among STEM
Credential Earners, 95
Importance of the Objective, 95
Proposed Indicators, 96
Objective 2.3: Representational Diversity among STEM
Instructors, 100


Importance of the Objective, 100


Proposed Indicators, 100
Objective 2.4: Inclusive Environments in Institutions and
STEM Departments, 102
Importance of the Objective, 102
Proposed Indicators, 103
References, 104

5 GOAL 3: ENSURE ADEQUATE NUMBERS OF STEM PROFESSIONALS 111
Objective 3.1: Adequate Foundational Preparation for STEM
for all Students, 112
Importance of the Objective, 112
Proposed Indicator, 115
Objective 3.2: Successful Navigation into and through STEM
Programs of Study, 116
Importance of the Objective, 116
Proposed Indicators, 119
Objective 3.3: STEM Credential Attainment, 121
Importance of the Objective, 121
Proposed Indicator, 121
References, 122

6 EXISTING DATA SOURCES AND MONITORING SYSTEMS 127


Overview, 127
Public Data Sources, 132
The Integrated Postsecondary Education Data System, 132
The Beginning Postsecondary Students Longitudinal Study, 135
The National Survey of Postsecondary Faculty, 136
National Student Loan Data System, 136
State Unit Record Data Systems, 138
Proprietary Data Sources, 139
National Student Clearinghouse, 139
Higher Education Research Institute Surveys, 140
National Survey of Student Engagement, 143
Community College Survey of Student Engagement, 144
Faculty Survey of Student Engagement, 144
Monitoring Systems, 144
Science and Engineering Indicators, 144
Proprietary Monitoring Systems, 146
Data for Each Indicator, 148


Indicator 1.1.1: Use of Evidence-Based STEM Educational


Practices in Course Development and Delivery, 148
Indicator 1.1.2: Use of Evidence-Based STEM Practices
Outside the Classroom, 156
Indicator 1.2.1: Extent of Instructors’ Involvement in
Professional Development, 157
Indicator 1.2.2: Availability of Support or Incentives for
Evidence-Based Course Development or Course Redesign, 157
Indicator 1.3.1: Use of Valid Measures of Teaching
Effectiveness, 158
Indicator 1.3.2: Consideration of Evidence-Based Teaching in
Personnel Decisions by Departments and Institutions, 159
Indicator 2.1.1: Institutional Structures, Policies, and Practices
That Strengthen STEM Readiness for Entering and Enrolled
College Students, 159
Indicator 2.1.2: Entrance to and Persistence in STEM
Academic Programs, 160
Indicator 2.1.3: Equitable Student Participation in
Evidence-Based STEM Educational Programs and
Experiences, 161
Indicator 2.2.1: Diversity of STEM Degree and Certificate
Earners in Comparison with Diversity of Degree and Certificate
Earners in All Fields, 162
Indicator 2.2.2: Diversity of Students Transferring from 2-Year to
4-Year STEM Programs in Comparison with Diversity of
Students in 2-Year STEM Programs, 162
Indicator 2.2.3: Time-to-Degree for Students in STEM
Academic Programs, 163
Indicator 2.3.1: Diversity of STEM Instructors in
Comparison with the Diversity of STEM Graduate Degree
Holders, 164
Indicator 2.3.2: Diversity of STEM Graduate Student
Instructors in Comparison with the Diversity of STEM
Graduate Students, 164
Indicator 2.4.1: Students Pursuing STEM Credentials Feel
Included and Supported in Their Academic Programs and
Departments, 164
Indicator 2.4.2: Instructors Teaching Courses in STEM
Disciplines Feel Included and Supported in Their
Departments, 164
Indicator 2.4.3: Institutional Practices Are Culturally
Responsive, Inclusive, and Consistent across the
Institution, 166


Indicator 3.1.1: Completion of Foundational Courses,


Including Developmental Education Courses, to Ensure STEM
Program Readiness, 166
Indicator 3.2.1: Retention in STEM Degree or Certificate
Programs, Course to Course and Year to Year, 167
Indicator 3.2.2: Transfers from 2-Year to 4-Year STEM Programs
in Comparison with Transfers to All 4-Year Programs, 169
Indicator 3.3.1: Percentage of Students Who Attain STEM
Credentials over Time, Disaggregated by Institution Type,
Transfer Status, and Demographic Characteristics, 169
Summary and Conclusions, 170
References, 172

7 IMPLEMENTING THE INDICATOR SYSTEM 177


Option 1: Create a National Student Unit Record Data System, 177
Option 2: Expand NCES Data Collections, 184
Expanding IPEDS, 187
Expanding the Beginning Postsecondary Students
Longitudinal Study, 190
Renewing and Expanding the National Study of
Postsecondary Faculty, 191
Option 3: Combine Existing Data from Nonfederal Sources, 192
Conclusions, 196
Research, Evaluation, and Updating of the Proposed
Indicator System, 197
A Note of Caution, 198
References, 199

APPENDIXES
A Public Comments on Draft Report and Committee Response 201
B Possible Formulas for Calculating Selected Indicators 209
C Agendas: Workshop and Public Comment Meeting 215
D Biographical Sketches of Committee Members and Staff 221


Summary

Science, technology, engineering, and mathematics (STEM) professionals generate a stream of scientific discoveries and technological
innovations that fuel job creation and national economic growth.
Undergraduate STEM education prepares graduates for today’s STEM
professions and those of tomorrow, while also helping all students develop
knowledge and skills they can draw on in a variety of occupations and
as citizens. However, many capable students intending to major in STEM
switch to another field or drop out of higher education altogether, partly
because of documented weaknesses in STEM teaching, learning, and stu-
dent supports. More than 5 years ago, the President’s Council of Advisors
on Science and Technology (PCAST) wrote that improving undergraduate
STEM education to address these weaknesses is a national imperative.
Many initiatives are now under way to improve the quality of under-
graduate STEM teaching and learning. Some focus on the national level,
others involve multi-institution collaborations, and others take place on
individual campuses. At present, however, policy makers and the public do
not know whether these various initiatives are accomplishing their goals
and leading to nationwide improvement in undergraduate STEM educa-
tion. Recognizing this challenge, PCAST recommended that the National
Academies of Sciences, Engineering, and Medicine develop metrics to evalu-
ate undergraduate STEM education. In response, the National Science
Foundation charged the National Academies to conduct a study of indica-
tors that could be used to monitor the status and quality of undergraduate
STEM education over time.


The committee was charged to outline a framework and a set of indica-


tors to document the status and quality of undergraduate STEM education
at the national level over multiple years (see Box 1-1 for the full study
charge). In Phase I, the committee would identify objectives for improving
undergraduate STEM education at both 2-year and 4-year institutions; re-
view existing systems for monitoring undergraduate STEM education; and
develop a conceptual framework for the indicator system. In Phase II, the
committee would develop a set of indicators linked to the objectives identi-
fied in Phase I; identify existing and additional measures to track progress
toward the objectives; discuss the feasibility of including such measures in
existing data collection programs; identify additional research needed to
fully develop the indicators; and make recommendations regarding the roles
of various federal and state institutions in supporting the needed research
and data collection.
In addressing its charge, the committee focused on national-level in-
dicators. This focus was important because current undergraduate STEM
education reform initiatives tend to gather detailed, local data that are use-
ful and appropriate for local feedback and improvement at the individual,
departmental, institutional, or system level. However, such detailed data
are not adequate for providing a broad, national picture of STEM teaching
and learning.

CONCEPTUAL FRAMEWORK FOR THE INDICATOR SYSTEM


Drawing on organizational theory and research, the committee de-
veloped a basic model of higher education: see Figure S-1. The model
represents undergraduate education as a complex system comprising four
interrelated components: inputs, incoming students; processes, students’
educational experiences inside and outside the classroom; outcomes, includ-
ing mastery of STEM concepts and skills and completion of STEM creden-
tials; and environment, the structural and cultural features of academic
departments and institutions. The environment surrounds and influences
the processes and outcomes, and the inputs, processes, and environment
all influence student outcomes.
With this model as a framework, the committee considered the current
status of undergraduate STEM education and what would be required to
improve it.

CONCLUSION 1  Improving the quality and impact of undergraduate


science, technology, engineering, and mathematics (STEM) education
will require progress toward three overarching goals:


• Goal 1: Increase students’ mastery of STEM concepts and skills by


engaging them in evidence-based STEM educational practices and
programs.
• Goal 2: Strive for equity, diversity, and inclusion of STEM students
and instructors by providing equitable opportunities for access and
success.
• Goal 3: Ensure adequate numbers of STEM professionals by in-
creasing completion of STEM credentials as needed in the different
disciplines.

As shown in Figure S-1, these goals are interconnected and mutually


supportive. They target improvement in various components of the under-
graduate education system and interactions among these components that
together will enhance students’ success in STEM education, whether they
are taking general education courses or pursuing a STEM credential. Some
policy makers, parents, and students are particularly concerned about stu-
dents’ outcomes, especially the employment outcomes included in the third
goal. However, attaining this goal is not possible without first attending
to the STEM educational processes and environment reflected in the first
and second goals. Using evidence-based practices (Goal 1) and striving
for equity, diversity, and inclusion (Goal 2) will help to ensure adequate

[Figure S-1 is a diagram showing the Environment (1.2 Supports for Evidence-Based Practices; 1.3 Institutional Culture That Values Undergraduate STEM; 2.3 Representational Diversity among STEM Instructors; 2.4 Inclusive Institutions and Departments) surrounding the Inputs (2.1 Equity of Access to High-Quality Undergraduate STEM Education), the Educational Processes (1.1 Use of Evidence-Based Practices; 1.4 Continuous Improvement; 3.1 Foundational Preparation; 3.2 Successful Navigation), and the Outcomes, Graduates with STEM Knowledge and Skills (2.2 Representational Equity in STEM Credential Attainment; 3.3 STEM Credential Attainment).]

FIGURE S-1  Conceptual framework for the proposed indicator system.
NOTE: The conceptual model focuses on objectives for the institutional environment closest to students and instructors. Elements of the larger environment (e.g., parents, employers, state governments) are not included in the model for the purpose of parsimony.


numbers of STEM professionals and prepare all graduates with core STEM
knowledge and skills (Goal 3).
The study was conducted in the context of calls for greater account-
ability in higher education and ongoing efforts to define and measure higher
education quality, both generally and at individual colleges and universi-
ties. For example, student learning is a desired educational outcome. This
outcome is reflected in the committee’s goal of increasing all students’
mastery of STEM concepts and skills, whether they are taking general edu-
cation courses or pursuing a STEM credential. Some higher education and
professional associations have developed common disciplinary (or general)
learning goals, along with assessments of students’ progress toward these
goals, to measure quality. However, establishing common learning goals is
challenging because STEM is characterized by rapid discoveries, ongoing
development of new knowledge and skills, and continual emergence of new
subdisciplines and interdisciplinary fields, which result in ongoing changes
to learning goals. In engineering, for example, most graduating students
take the Fundamentals of Engineering Exam, which is offered in seven
different subdisciplines. A common national assessment of core STEM
concepts and skills does not exist. Therefore, the committee proposes to
monitor progress in student learning through objectives and indicators of
the adoption of teaching practices that have been shown by research to
enhance student learning.

OBJECTIVES AND INDICATORS


The committee identified 11 objectives to advance the three overarch-
ing goals and 21 indicators to measure progress toward these objectives:
see Table S-1. The proposed indicators reflect the complexity of the un-
dergraduate education system, which includes a diverse array of student,
instructor,1 departmental, and institutional behaviors, practices, policies,
and perceptions. They range from use of evidence-based STEM educational
practices both in and outside of classrooms (Objective 1.1) to inclusive
environments in institutions and STEM departments (Objective 2.4) to
successful navigation into and through STEM programs (Objective 3.2).
Multiple indicators using multiple measures will be needed to fully cap-
ture national progress toward each objective. For the purpose of practical
feasibility, however, the committee proposes no more than three indica-
tors for each objective. The proposed set of 21 indicators is an important
first step for monitoring trends over time in the quality of undergraduate

1 The committee uses the term “instructors” to refer to all individuals who instruct under-

graduates, including tenured and tenure-track faculty, adjunct and part-time instructors, and
graduate student instructors.


TABLE S-1  Goals, Objectives, and Indicators to Monitor Progress in Undergraduate Science, Technology, Engineering, and Mathematics (STEM) Education

Goal 1: Increase Students’ Mastery of STEM Concepts and Skills by Engaging Them in Evidence-Based STEM Educational Practices and Programs

Objective 1.1 (Process): Use of evidence-based STEM educational practices both in and outside of classrooms
Indicator 1.1.1: Use of evidence-based STEM educational practices in course development and delivery
Indicator 1.1.2: Use of evidence-based STEM educational practices outside the classroom

Objective 1.2 (Environment): Existence and use of supports that help STEM instructors use evidence-based educational practices
Indicator 1.2.1: Extent of instructors’ involvement in professional development
Indicator 1.2.2: Availability of support or incentives for evidence-based course development or course redesign

Objective 1.3 (Environment): An institutional culture that values undergraduate STEM instruction
Indicator 1.3.1: Use of valid measures of teaching effectiveness
Indicator 1.3.2: Consideration of evidence-based teaching in personnel decisions by departments and institutions

Objective 1.4 (Process): Continuous improvement in STEM teaching and learning
No indicators: see “Challenges of Measuring Continuous Improvement” in Chapter 3

Goal 2: Strive for Equity, Diversity, and Inclusion of STEM Students and Instructors by Providing Equitable Opportunities for Access and Success

Objective 2.1 (Input): Equity of access to high-quality undergraduate STEM educational programs and experiences
Indicator 2.1.1: Institutional structures, policies, and practices that strengthen STEM readiness for entering and enrolled college students
Indicator 2.1.2: Entrance to and persistence in STEM academic programs
Indicator 2.1.3: Equitable student participation in evidence-based STEM educational practices

Objective 2.2 (Outcome): Representational diversity among STEM credential earners
Indicator 2.2.1: Diversity of STEM degree and certificate earners in comparison with diversity of degree and certificate earners in all fields
Indicator 2.2.2: Diversity of students who transfer from 2- to 4-year STEM programs in comparison with diversity of students in 2-year STEM programs
Indicator 2.2.3: Time to degree for students in STEM academic programs

Objective 2.3 (Environment): Representational diversity among STEM instructors
Indicator 2.3.1: Diversity of STEM instructors in comparison with diversity of STEM graduate degree holders
Indicator 2.3.2: Diversity of STEM graduate student instructors in comparison with diversity of STEM graduate students

Objective 2.4 (Environment): Inclusive environments in institutions and STEM departments
Indicator 2.4.1: Students pursuing STEM credentials feel included and supported in their academic programs and departments
Indicator 2.4.2: Instructors teaching courses in STEM disciplines feel supported and included in their departments
Indicator 2.4.3: Institutional practices are culturally responsive, inclusive, and consistent across the institution

Goal 3: Ensure Adequate Numbers of STEM Professionals

Objective 3.1 (Process): Foundational preparation for STEM for all students
Indicator 3.1.1: Completion of foundational courses, including developmental education courses, to ensure STEM program readiness

Objective 3.2 (Process): Successful navigation into and through STEM programs of study
Indicator 3.2.1: Retention in STEM programs, course to course and year to year
Indicator 3.2.2: Transfers from 2- to 4-year STEM programs in comparison with transfers to all 4-year programs

Objective 3.3 (Outcome): STEM credential attainment
Indicator 3.3.1: Number of students who attain STEM credentials over time, disaggregated by institution type, transfer status, and demographic characteristics

STEM education. In the future, as STEM educational practices evolve, in-


structional and assessment technology advances, and measurement systems
improve, additional indicators may be needed, and some of the proposed
21 indicators may no longer be needed.
The committee’s indicators are best viewed in concert with one another.
Although an individual indicator can provide a discrete marker of the quality
of undergraduate STEM teaching and learning, the conceptual framework
tells a more complete story, demonstrating how relationships across indica-
tors facilitate movement toward the identified objectives and goals.
In light of current pressures for institutional accountability, the com-
mittee stresses that the primary goal of the indicator system is to allow
federal agencies and other stakeholders (e.g., higher education associations)
to monitor the status and quality of undergraduate STEM education over
time, at the national level. The committee envisions that those overseeing
the system might make institutional data accessible to individual institu-
tions, state higher education systems, or groups of institutions for the
purpose of monitoring and improving their own programs. Although such
accessibility might also allow these users to compare or rate institutions
with the goal of holding them accountable, that is not the intended purpose
of the committee’s proposed indicator system.

DATA FOR THE INDICATOR SYSTEM


The committee reviewed existing systems and data sources for monitor-
ing undergraduate STEM education, considering whether they were rep-


resentative of the national universe of students and institutions and could


provide current data for the proposed indicators.

Federal Data Sources


The Integrated Postsecondary Education Data System (IPEDS) obtains
data through regular annual surveys of 2-year and 4-year, public and pri-
vate (both for-profit and nonprofit) institutions. The response rate is nearly
universal because institutions must respond if they wish to remain eligible
for student financial aid. These high-quality current data cover a range of
topics related to the committee’s goals and objectives, including detailed
annual data on completion of degrees and certificates in different fields of
study. However, the IPEDS data focus only on first-time, full-time students
who enter and graduate from a given institution. As a result, they provide
limited capacity to track students’ increasingly complex pathways into and
through STEM programs, including stopping out for a time, transferring
across institutions, and enrolling part time.
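The effect of this cohort restriction can be illustrated with a minimal sketch using a handful of hypothetical student records; the field names and values below are illustrative assumptions, not IPEDS definitions or data.

```python
# Minimal, hypothetical sketch (not an IPEDS calculation): why a completion
# rate computed only over first-time, full-time entrants can differ from one
# that follows all students, including part-time enrollees and students who
# transfer in from another institution.

students = [
    {"first_time": True,  "full_time": True,  "earned_stem_credential": True},
    {"first_time": True,  "full_time": True,  "earned_stem_credential": False},
    {"first_time": True,  "full_time": False, "earned_stem_credential": True},   # part time
    {"first_time": False, "full_time": True,  "earned_stem_credential": True},   # transferred in
    {"first_time": True,  "full_time": False, "earned_stem_credential": False},  # part time
]

def completion_rate(records):
    """Share of the given records that ended with a STEM credential."""
    return sum(r["earned_stem_credential"] for r in records) / len(records)

# Cohort restricted to first-time, full-time entrants (IPEDS-style restriction)
ftft = [r for r in students if r["first_time"] and r["full_time"]]
print(f"First-time, full-time cohort: {completion_rate(ftft):.0%}")   # 50%

# All students, regardless of entry status or enrollment intensity
print(f"All students:                 {completion_rate(students):.0%}")  # 60%
```

Even in this toy example the two rates diverge, because part-time students and transfer students fall outside the first-time, full-time cohort.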

CONCLUSION 2  To monitor the status and quality of undergraduate


science, technology, engineering, and mathematics education, federal
data systems will need additional data on full-time and part-time stu-
dents’ trajectories across, as well as within, institutions.

Although they are conducted less frequently than IPEDS, federal longi-
tudinal surveys of student cohorts, such as the 2004/09 Beginning Postsec-
ondary Students Longitudinal Study conducted by the National Center for
Education Statistics (NCES), provide useful data related to the committee’s
objectives and indicators. The survey samples are carefully designed to be
nationally representative, and multiple methods are used to obtain strong
response rates. The resulting data can be used to track students’ trajectories
across institutions and fields of study, including STEM fields. Another for-
merly useful source was NCES’s National Study of Postsecondary Faculty,
which provided data on instructors’ disciplinary backgrounds, responsibili-
ties, and attitudes, but it was discontinued in 2004.

CONCLUSION 3  To monitor the status and quality of undergraduate


science, technology, engineering, and mathematics education, recurring
longitudinal surveys of instructors and students are needed.

The committee found that IPEDS and other federal data sources gen-
erally allow data to be disaggregated by students’ race and ethnicity and
gender. However, conceptions of diversity have broadened to include ad-
ditional characteristics of students that may provide unique strengths in


undergraduate STEM education and may also create unique challenges. To


fully support the indicators, federal data systems will need to include ad-
ditional student characteristics.

CONCLUSION 4  To monitor progress toward equity, diversity, and


inclusion of science, technology, engineering, and mathematics students
and instructors, national data systems will need to include demographic
characteristics beyond gender and race and ethnicity, including at least
disability status, first-generation student status, and socioeconomic
status.

Proprietary Data Sources


Many proprietary data sources have emerged over the past two de-
cades in response to growing accountability pressures in higher education.
Although not always nationally representative of all 2- and 4-year public
and private institutions, some of these sources include large samples of
institutions and address the committee’s goals and objectives. They rely on
different types of data, including administrative data (also referred to as
student unit record data) and survey data. In addition, higher education
reform consortia have developed new measures of student progress in un-
dergraduate education that are related to the committee’s goals, objectives,
and indicators. One of particular note is a measure of students’ program of
study selection, which uses data on students’ first-year course enrollments
to identify their intended major. Another measure, transfer rate, captures
students who transfer from 2-year programs into longer duration programs
at their initial or subsequent institution(s). Some of these measures overlap
with the committee’s indicators.
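As a rough illustration of how such consortium-style measures might be computed from student unit record data, the sketch below derives an inferred STEM program of study from first-year course credits and a simple 2-year to 4-year transfer rate. The record layout, field names, and the 12-credit threshold are hypothetical assumptions, not definitions taken from the report or from any particular consortium.

```python
# Hypothetical sketch of the kinds of measures described above, computed
# from student unit record data. All fields and thresholds are illustrative.

records = [
    {"id": 1, "start_level": "2-year", "stem_credits_year1": 15, "later_4yr_enrollment": True},
    {"id": 2, "start_level": "2-year", "stem_credits_year1": 6,  "later_4yr_enrollment": False},
    {"id": 3, "start_level": "2-year", "stem_credits_year1": 14, "later_4yr_enrollment": True},
    {"id": 4, "start_level": "4-year", "stem_credits_year1": 18, "later_4yr_enrollment": None},
]

STEM_CREDIT_THRESHOLD = 12  # assumed cutoff, not a consortium definition

# Program-of-study selection: infer an intended STEM major from first-year enrollments
intended_stem = [r for r in records if r["stem_credits_year1"] >= STEM_CREDIT_THRESHOLD]

# Transfer rate: share of 2-year starters who later enroll in a 4-year program
two_year_starters = [r for r in records if r["start_level"] == "2-year"]
transfer_rate = (sum(bool(r["later_4yr_enrollment"]) for r in two_year_starters)
                 / len(two_year_starters))

print(f"Students with an inferred STEM program of study: {len(intended_stem)}")
print(f"2-year to 4-year transfer rate: {transfer_rate:.0%}")
```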

Data for the Indicators


Based on its review of existing data sources, the committee considered
research needs and the availability of data for each of the 21 indicators it
proposes. For some indicators, further research is needed to develop clear
definitions and measurement approaches, and, overall, the availability of
data for the indicators is limited. For other indicators, nationally represen-
tative datasets are available, but when those data are disaggregated, first to
focus on STEM students and then on specific groups of STEM students, the
sample sizes become too small to yield statistically reliable estimates. For still
other indicators, no data are available from either public or proprietary sources.
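A back-of-the-envelope calculation, using the standard approximation for the sampling error of a proportion, illustrates why successive disaggregation strains even large survey samples; the 40 percent rate and the sample sizes below are hypothetical.

```python
# Illustration of why disaggregation erodes precision: the standard error of
# an estimated proportion, sqrt(p * (1 - p) / n), grows as the subgroup
# sample n shrinks. The rate and sample sizes are hypothetical.

from math import sqrt

def margin_of_error(p, n, z=1.96):
    """Approximate 95 percent margin of error for a proportion p estimated from n students."""
    return z * sqrt(p * (1 - p) / n)

p = 0.40  # assumed rate (e.g., persistence in a program) being estimated

for n, label in [(4000, "all sampled students"),
                 (600,  "STEM students only"),
                 (60,   "one demographic group within STEM")]:
    print(f"{label:40s} n = {n:5d}   margin of error = +/- {margin_of_error(p, n):.1%}")
```

With only 60 students left in a subgroup, the roughly 12-percentage-point margin of error in this example would swamp most plausible differences between groups.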

CONCLUSION 5  The availability of data for the indicators is limited,


and new data collection is needed for many of them:


• No data sources are currently available for most of the indica-


tors of engaging students in evidence-based educational practices
(Goal 1).
• Various data sources are available for most of the indicators of
equity, diversity, and inclusion (Goal 2). However, these sources
would need to include more institutions and students to be
nationally representative, along with additional data elements on
students’ fields of study.
• Federal data sources are available for some of the indicators of
ensuring adequate numbers of science, technology, engineering,
and mathematics professionals (Goal 3). However, federal sur-
veys would need larger institutional and student samples to allow
finer disaggregation of the data by field of study and demographic
characteristics.

IMPLEMENTING THE INDICATOR SYSTEM


The indicator system’s potential to guide improvement in undergradu-
ate STEM education at the national level can be realized only with new
data collection by federal agencies or other organizations. The committee
identified three options for obtaining the data needed to support the full
suite of 21 indicators.

CONCLUSION 6  Three options would provide the data needed for


the proposed national indicator system:

1. Create a national student unit record data system, supplemented


with expanded surveys of students and instructors (Option 1).
2. Expand current federal institutional surveys, supplemented with
expanded surveys of students and instructors (Option 2).
3. Develop a nationally representative sample of student unit record
data, supplemented with student and instructor data from propri-
etary survey organizations (Option 3).

For Option 1, there are bills pending in Congress to create a national


student unit record data system. Such a system would support many of the
proposed indicators of students’ progress in STEM programs. However, it
would not provide data for the indicators related to instructors, who play a
central role in engaging students in evidence-based educational experiences,
nor on practices and perceptions related to equity, diversity, and inclusion.
Thus, supporting the complete set of indicators in this option (as well as in
the other two options) would also require regular surveys of students and
instructors.


Option 2 would take advantage of the well-developed system of in-


stitutional surveys currently used to obtain IPEDS data annually from
the vast majority of 2-year and 4-year institutions across the nation. In
this option, these surveys would be supplemented with new measures of
student progress developed by various higher education reform consortia.
The additional IPEDS measures would provide much of the student data
needed for the indicator system, supplemented by data from regular surveys
of students and instructors.
Option 3, which might be carried out by a federal agency or another
entity (e.g., a higher education association), would take advantage of the
rapid growth of data collection and analysis by institutions, state higher
education systems, and education reform consortia across the country.
Many institutions currently provide student unit record data, and new
measures of student progress calculated from these data, to a state data
warehouse and also to one or more education reform consortia databases.
As in Options 1 and 2, additional data from surveys would be needed to
support the indicators. In this case, the federal government or other entity
would contract with one or more of the survey providers to revise their
survey items as needed to align with the committee’s indicators and develop
nationally representative samples of public and private 2-year and 4-year
institutions and STEM students and instructors.
Many of the proposed indicators represent new conceptions of key
elements of undergraduate STEM education to be monitored over time.
Some indicators require research as the first step toward developing clear
definitions, identifying the best measurement methods, and implementing
the indicator system. Following this process, after the system has been
implemented and the indicators are in use, it would be valuable to carry
out an evaluation study to ensure that the indicators measure what they
are intended to measure.  At the same time, the structure of undergraduate
education continues to evolve in response to changing student demograph-
ics, funding sources, the growth of new providers, and potentially disrup-
tive technologies. In light of these changes, it will be important to regularly
review, and revise as necessary, the proposed STEM indicators and the data
and methods for measuring them.


Introduction

Science and technology are engines of U.S. economic growth and international competitiveness in the global 21st century economy. Leading
economists (e.g., Solow, 1957; Mankiw, 2003; Romer, 1990), policy
makers, and the public all agree that technological innovation fueled by
scientific research is the primary mechanism for sustained economic growth
(Xie and Killewald, 2012). As the nation continues to recover from the
2008 economic recession, the science, technology, engineering, and math-
ematics (STEM) fields are critical drivers for the health of the economy.
Hence, a robust, skilled STEM workforce is important for the nation
(National Academy of Sciences, National Academy of Engineering, and In-
stitute of Medicine, 2007, 2010). Because undergraduate STEM education
plays a central role in developing the STEM workforce, and also contrib-
utes to a strong general education for all students, improving the quality of
undergraduate STEM education is a national imperative.
Some recent trends raise concerns about the health of the nation’s
STEM workforce (see Xie and Killewald, 2012). First, scientists’ earnings
(adjusted for inflation) have stagnated since the 1960s and have declined
relative to those of other high-status, high-education professions, such as
law and business, which could discourage individuals from entering or stay-
ing in science careers. Second, it has become more difficult for recent science
doctorates to obtain any academic position, and the available academic
positions are weighted toward more postdoctoral appointments and fewer
faculty positions, which could discourage young people from pursuing
academic research. Third, U.S. science faces increasing foreign competition
as the share of global research conducted in other countries is increasing.



These trends could lead to gradual erosion of U.S. dominance in science and
a slowdown in the economic growth fueled by technological innovation.
To strengthen the nation’s research and technology enterprise in the
face of these trends, the President’s Council of Advisors on Science and
Technology (PCAST) (2012) recommended producing 1 million additional
college graduates with degrees in STEM over the following decade. Recog-
nizing that many students with an interest in and aptitude for STEM, espe-
cially females and underrepresented minorities, are not completing degrees
in these fields (see National Research Council, 2011; National Academies
of Sciences, Engineering, and Medicine, 2016a), the PCAST report called
for widespread implementation of strategies to engage, motivate, and retain
diverse students in STEM. Such strategies are beginning to emerge from
a growing body of relevant research, but they have not yet been widely
implemented (see National Research Council, 2012; National Academies
of Sciences, Engineering, and Medicine, 2016a).
Many initiatives to improve the quality of undergraduate STEM educa-
tion are now under way. Some focus on the national level, others involve
multi-institution collaborations, and others take place on individual cam-
puses. For example, the interagency Committee on STEM Education of
the National Science and Technology Council (2013) developed a STEM
education 5-year strategic plan that identified improving the experience of
undergraduate students as a priority goal for federal investment. Within this
broad goal, the strategic plan identified four priority areas: (1) promoting
evidence-based instructional practices; (2) improving STEM experiences in
community colleges; (3) expanding undergraduate research experiences;
and (4) advancing success in the key gateway of introductory mathematics.
Other initiatives include the undergraduate STEM initiative of the Asso-
ciation of American Universities;1 a workshop and sourcebook on under­
graduate STEM reform of the Coalition for Reform in Undergraduate
STEM Education;2 and the Partnership for Undergraduate Life Sciences
Education, or PULSE.3
At present, policy makers and the public do not know whether these
various federal, state, and local initiatives are accomplishing their stated
goals and achieving nationwide improvement in undergraduate STEM
education. This is partly due to a lack of high-quality national data on
undergraduate STEM teaching and learning. A recent study of barriers and
opportunities for 2-year and 4-year STEM degrees (National Academies
of Sciences, Engineering, and Medicine, 2016a) highlighted the mismatch
between currently available datasets and the realities of student trajecto-

1 See https://stemedhub.org/groups/aau/about [July 2017].


2 See https://www.aacu.org/pkal/sourcebook [July 2017].
3 See http://www.pulsecommunity.org [July 2017].


ries. Although students today often transfer across institutions, stop in


and out of STEM programs, and attend classes part time, existing surveys
rarely capture these trends. That study committee concluded that existing
data collection systems (national, state, and institutional) were often not
structured to gather the information needed to understand the quality of
undergraduate education.
Anticipating these challenges, the President’s Council of Advisors on
Science and Technology (2012) recommended that the National Academies
of Sciences, Engineering, and Medicine develop metrics to evaluate under-
graduate STEM education.
In response, the National Science Foundation (NSF) charged the National
Academies to conduct a consensus study to identify objectives for improving
undergraduate STEM education and to outline a framework and set of indi-
cators to document the status and quality of undergraduate STEM education
at the national level over multiple years: see Box 1-1 for the full study charge.

INTERPRETING THE STUDY CHARGE


As it began its work, the committee identified a number of important
issues in the study charge and within the broader social, political, and his-
torical context that led to this study. In constructing a shared understanding
of these issues, the committee was able to calibrate its interpretations of
key terms and phrases in the charge as well as other terms and phrases that
emerged over the course of its deliberations. Throughout this chapter, the
committee identifies the issues emerging in its interpretation of the study
charge. For each one, it shares definitions of terms and, as appropriate,
discusses the context in order to help ground the committee’s proposed
indicator system. Underlying this work is the committee’s vision for under-
graduate STEM education.

Vision
In developing a conceptual framework and indicators to monitor im-
provement in undergraduate STEM education, the committee envisioned
what such improvement would look like. In this vision, students—from all
walks of life and with all types of experiences and backgrounds—would be
well prepared to help address global, societal, economic, and technological
challenges. Students would have the STEM background to become success-
ful in the careers of today as well as those of tomorrow as U.S. society con-
tinues to become increasingly diverse, global, and interconnected. Among
these well-prepared graduates, some would become professional scientists
and engineers, conducting research and developing new technologies to
support sustained economic growth.


BOX 1-1
Study Charge

An ad hoc committee will conduct a study to identify objectives for improv-


ing undergraduate science, technology, engineering, and mathematics (STEM)
education and to outline a framework and set of indicators intended to document
the status and quality of undergraduate STEM education at the national level over
multiple years. The committee’s work will progress in two phases, with an interim
report after Phase I and a final report at the conclusion of Phase II. The commit-
tee will use the strategic objectives of the federal STEM education strategic plan
as a starting point, but will also consider whether additional objectives need to
be tracked in order to determine the status of undergraduate STEM education
over time. At the National Science Foundation’s request, the study will focus in
particular on the first 2 years of undergraduate education.
In Phase I, the committee will

• identify objectives for improving undergraduate STEM education at both


2-year and 4-year institutions building from the objectives for higher
education outlined in the federal strategic plan to coordinate federal
investments in STEM education and emphasizing the first 2 years of
undergraduate education;
• review existing systems for monitoring undergraduate STEM education;
and
• develop a conceptual framework for the indicator system.

In Phase II of the study, the committee will

• develop a set of indicators that are linked to the objectives identified in
Phase I;
• identify existing and additional measures needed for tracking progress
toward the objectives identified in Phase I;
• discuss the feasibility of including such measures in existing programs of
data collection;
• identify additional research that would be needed to fully develop the
indicators needed to track progress toward the objectives developed in
Phase I; and
• make recommendations regarding the roles of various federal and state
institutions in supporting the needed research and data collection for an
evaluation of progress.

At the same time, the committee envisions that all students, not merely
those who pursue STEM degrees and careers, would have both access and
exposure to high-quality STEM education to support the development
of STEM literacy, a relatively new concept. The committee’s adoption
of this concept was informed by a recent report (National Academies of
Sciences, Engineering, and Medicine, 2016b) that identified several aspects
of science literacy: (1) the understanding of scientific practices, such as the
formulation and testing of hypotheses; (2) content knowledge, including
concepts and vocabulary; and (3) understanding of science as a social pro-
cess, for example, the role of peer review. In keeping with these expanding
definitions of science literacy, the committee recognized that “engaging in
science—whether using knowledge or creating it—necessitates some level of
familiarity with the enterprise and practice of science” (National Academies
of Sciences, Engineering, and Medicine, 2016b, p. 11). Students’ ability to
use STEM knowledge (for example, as citizens) or create STEM knowl-
edge, then, will require an analogous familiarity with the enterprises and
practices of science and technology and engineering and mathematics. The
application of the term “literacy” to these disciplines “signifies something
like ‘knowledge, skills, and fluency’ within [these] particular domain[s]”
(National Academies of Sciences, Engineering, and Medicine, 2016b, p. 17).
In the committee’s vision, STEM literacy, along with lifelong interest in
STEM, is important for all graduates, regardless of their field of study, both
in the workplace and outside of work. In the workplace, STEM knowledge
and skills, such as those encompassed within the concept of STEM literacy,
are useful across a range of occupations beyond those of scientists and
engineers (Carnevale, Smith, and Melton, 2011; National Science Founda-
tion, 2014). Outside of work, people can draw on their science literacy for
making decisions that could involve science (e.g., decisions about personal
health or voting on environmental issues), although such knowledge is
only one of many factors contributing to their decision making (National
Academies of Sciences, Engineering, and Medicine, 2016b). Given that ac-
cess to high-quality science learning experiences can facilitate and support
the development of science knowledge bases, the committee expects that
analogous exposure to high-quality STEM learning experiences in under-
graduate education will facilitate and support the development of STEM
knowledge bases that graduates may choose to draw on when making im-
portant life decisions. However, the committee did not reach consensus on
exactly what level of exposure (e.g., completion of a certain number or type
of courses or learning experiences outside the classroom) would support
STEM literacy. For some students, STEM literacy might require exposure
to adult basic education, adult literacy, or vocational programs offered in
the community. However, measures of such programs would fall outside
of the committee’s charge to develop indicators of the status and quality of
undergraduate STEM education.
The committee envisions that exposure to STEM concepts and processes
can help individuals to make sense of the world around them, enabling
the skills and dispositions needed to participate actively in a democracy.
Because of its commitment to these values, the committee sees the goals
for undergraduate education presented later in this chapter as working in
service of ensuring STEM literacy for all undergraduates, not just those
majoring in STEM fields.
Given the increasingly diverse and global nature of U.S. society, the
committee envisions that STEM education will embrace approaches that
increase representation of diverse populations in STEM careers. All under-
graduate institutions will provide equitable STEM educational practices,
both inside and outside the classroom (curricular and co-curricular), ensur-
ing that all students have the opportunities and support they need to reach
their potential. In addition, instructors, staff, and administrators will have
the knowledge, skills, and understanding of evidence-based teaching and
learning methods to deliver a 21st century, inclusive STEM curriculum and
co-curriculum, and students will have clear pathways into and through
STEM programs of learning.

A Focus on the National Level


Given its charge, the committee’s work focused on national-level indi-
cators. As noted above, many initiatives are currently under way to monitor
and improve undergraduate STEM education. However, these initiatives
tend to gather detailed, local data that are appropriate for local feed-
back and improvement; they are not appropriate for representing STEM
education phenomena broadly and nationally across 2-year and 4-year
institutions.
For example, the PULSE vision and change rubrics are designed to eval-
uate life science departments’ progress toward specified reform principles
(American Association for the Advancement of Science, 2011). Department-
level leaders (current or former department chairs or deans) voluntarily use
the rubrics for self-study and improvement in terms of 66 different crite-
ria across five areas: curriculum alignment, assessment, faculty practice/
faculty support, infrastructure, and climate for change. Analyzing rubric
data from a sample of life sciences departments,4 Brancaccio-Taras and
colleagues (2016) concluded that the rubrics constitute a valid and reliable
instrument for evaluating departmental change across different institution
types. The authors also concluded that their analysis of rubric data from
the 26 institutions that responded to all five groups of rubrics provided
“baseline knowledge and insights about the state of the adoption of the

4 The authors invited all members of the PULSE community (which includes 2-year and 4-year
colleges, regional comprehensive universities, and research universities) to submit their rubric
data. The respondents provided varying amounts of data: 26 institutions provided rubric data
across all five areas, 57 provided data on curriculum alignment, 35 on assessment, 49 on faculty
practice/faculty support, 28 on infrastructure, and 32 on climate for change.

recommendations of the Vision and Change report” (Brancaccio-Taras et
al., 2016, p. 11). (The authors are currently working to gather data from a
larger number of departments.) Although the committee views the currently
available baseline data as a valuable snapshot of quality improvement in
some life sciences departments, these data are not nationally representative,
do not include other disciplines, and do not constitute national indicators
of improvement in undergraduate STEM.
The national-level data needed for the committee’s proposed indicators
differ from the PULSE rubrics or other fine-grained data designed to guide
local improvement efforts. In K–12 education, for example, teacher ob-
servation instruments can be designed to gather detailed data that provide
very specific feedback to teachers for improving their practices. However,
because schools and districts seek instruments that can provide a picture
of teachers’ work across multiple content areas and grade levels, they often
design more “coarse-grained” observation instruments to provide more
global data (Hill and Grossman, 2013). One result is that three-fourths of
the 15,000 teachers responding to a recent survey indicated that their most
recent evaluations failed to identify areas for improvement (Weisberg et al.,
2009, cited in Hill and Grossman, 2013). The granularity of data matters,
and the global data needed for national indicators may not be useful for
informing improvement by individual instructors, STEM departments, or
STEM programs (Wilson and Anagnostopolous, in press).

Equity, Diversity, and Inclusion


Given the national need for a robust supply of STEM professionals for
technological innovation and sustained economic growth, another impor-
tant dimension of this study is the underrepresentation of certain groups in
STEM, involving issues of equity, diversity, and inclusion. Equity refers to
the fair distribution of opportunities to participate and succeed in education
for all students. Diversity focuses on the differences among individuals, in-
cluding demographic differences such as gender, race, ethnicity, and country
of origin. Inclusion refers to the processes through which all students are
made to feel welcome and are treated as motivated learners.
The committee views equity as a central element of quality in under-
graduate STEM education and considers measurement of equity as essential
to measuring and improving quality. From an accountability perspective,
equity in undergraduate STEM education is the achievement of propor-
tional representation of all demographic groups in terms of access, reten-
tion, degree completion, and participation in enriching STEM educational
programs, experiences, and activities that prepare students to enter the
STEM workforce. The committee uses the word demographics broadly
to capture the full spectrum of diversity in the population. Such diversity
includes, but is not limited to, diversity in socioeconomic level, gender
identity, race and ethnicity, religion, first-generation in college, marital and
parental status, veteran status, disability status, and age. The National
Science Foundation’s National Center for Science and Engineering
Statistics (2017) has identified certain demographic groups as “under-
represented” because their representation in STEM education and employ-
ment is not proportional to their representation in the national population.
These underrepresented groups include persons with disabilities, women
in some STEM disciplines, and three racial and ethnic minority groups:
Hispanics, Blacks, and American Indians.
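
To make the idea of proportional representation concrete, one simple check is a parity ratio computed separately for each demographic group and each milestone. The notation below is illustrative only and is not among the committee’s proposed indicators; the choice of reference population (all enrolled undergraduates) is an assumption made for the example.

\[
\text{parity ratio for group } g \;=\; \frac{\text{share of STEM degree earners who belong to group } g}{\text{share of all enrolled undergraduates who belong to group } g}
\]

A value near 1 indicates roughly proportional representation for that milestone, while a value well below 1 signals the kind of underrepresentation described above; analogous ratios could be formed for access, retention, or participation in enriching STEM experiences.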

Goals and Objectives


The committee defines a goal as an intended outcome representing im-
provement in undergraduate STEM education. It is stated in general terms
and covers a long time frame. Because achieving long-term goals involves
the work of multiple stakeholders at different levels in the undergraduate
education system (e.g., classroom, institution, state higher education sys-
tem), the committee’s goals are stated as action verbs but do not identify
a specific actor as the subject. The committee defines objectives as more
specific and measurable steps toward achieving goals.

Measures and Indicators


The committee defines a measure as a value that is quantified against a
standard at a specified time. An indicator is a specific type of measure that
provides evidence that a certain condition exists or that specifies how well
certain results or objectives have or have not been achieved (Brizius and
Campbell, 1991). It goes beyond raw statistics or data to provide easily
understandable information that can be used to guide educational policy
and practice (National Research Council, 2014).
After considering various definitions of an educational indicator (e.g.,
Oakes, 1986), the committee adopted Planty and Carlson’s (2010) con-
ceptualization of an educational indicator and an indicator system. In this
definition, an educational indicator has three key characteristics. First, it
attempts to represent the status of a specific condition or phenomenon.
For example, it may measure student achievement, dropout rates, crime in
schools, or another aspect of education. Second, it typically is quantitative
(Planty and Carlson, 2010, p. 4):
Indicators are created from data (i.e., observations collected in numeri-
cal form) and presented in the form of numbers and statistics. Statistics
are numerical facts, but by themselves are not indicators. . . . Indicators
combine statistics with purpose, meaning, and context to provide useful
information about a condition or phenomenon of interest.

Third, an indicator has a temporal component, meaning that it might not
only indicate the status of a condition or phenomenon at a given time, but
also can represent change in the condition over time.
An educational indicator may be either a single statistic or a compos-
ite statistic. Single-statistic indicators measure a specific condition of an
education system (e.g., an institution’s student enrollment and number of
Pell grant recipients). Composite indicators combine single statistics to
depict a relationship between two or more aspects of the education system;
examples include student-faculty ratio and student readiness (Planty and
Carlson, 2010).
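
As a minimal illustration of how a composite indicator combines single statistics (the figures below are hypothetical, and the full-time-equivalent framing is an assumption made for the example rather than a definition from Planty and Carlson), the student-faculty ratio is simply a quotient of two single statistics:

\[
\text{student-faculty ratio} \;=\; \frac{\text{full-time-equivalent student enrollment}}{\text{full-time-equivalent instructional faculty}}
\]

For example, an institution enrolling 12,000 full-time-equivalent students with 600 full-time-equivalent faculty members would report a ratio of 20 to 1.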
Because a lone indicator rarely provides useful information about com-
plex conditions or phenomena, indicator systems are designed to generate
more comprehensive and, therefore, more useful information about condi-
tions. More than just a collection of indicator statistics, an educational
indicator system measures the system’s inputs, processes, and outputs.
In education, such inputs may include fiscal and material resources, in-
structor quality, and student background. Processes may include instruc-
tional quality and institutional context and structure, while outputs may
­include student achievement, participation, and attitudes and aspirations
(Shavelson, McDonnell, and Oakes, 1989; Odden, 1990, pp. 24–25). A
high-quality indicator system not only measures these individual compo-
nents, but also suggests how they work together to produce an overall
effect.
In the following chapters of this report, the committee proposes an
educational indicator system reflecting this definition. The system is based
on a conceptual framework that views undergraduate STEM education as a
system and considers how the components (inputs, processes, the environ-
ment) work together to produce an effect on the desired student outcomes.
The committee emphasizes that its proposed indicators should be viewed
in concert with one another to provide insight into the overall quality of
undergraduate STEM education (see National Research Council, 2014).
Though individual indicators can provide discrete markers of progress
toward improvement in undergraduate STEM, the committee also recog-
nizes that telling an “end-to-end story” requires the creation of a coherent
framework demonstrating how relationships across indicators facilitate
achieving the identified objectives. This report provides such a framework,
identifying overarching and mutually reinforcing goals that are aligned with
more specific objectives for improving undergraduate STEM education,
along with a set of indicators to monitor progress toward the objectives.


Undergraduate STEM Education


For the purposes of this report, the committee defines undergraduate
STEM education as undergraduate education in the natural and social sci-
ences, technology, engineering, and mathematics. This definition follows the
National Science Foundation (2016) definition of the STEM fields, which in-
cludes the social sciences. In keeping with the committee’s charge to develop
indicators for 2-year and 4-year STEM programs, the definition encom-
passes programs of study leading to bachelor’s degrees, associate’s degrees,
and certificates at all types of public, private, for-profit, and nonprofit insti-
tutions of higher education. This definition also includes workforce devel­
opment programs that prepare students for “middle-skill” jobs, defined as
those jobs that require less educational preparation than a bachelor’s degree,
but some education or training beyond a high school degree (Holzer and
Lerman, 2007; Rothwell, 2013). Finally, the committee’s definition includes
introductory STEM courses that students may take as part of general educa-
tion requirements, regardless of their major field of study.

Evidence-Based STEM Educational Practices and Programs


In the committee’s view, improving the quality of undergraduate
STEM education will require wider use of “evidence-based STEM educa-
tional practices and programs.” A growing body of research (see National
Research Council, 2012, and National Academies of Sciences, Engineering,
and Medicine, 2016a) has begun to identify effective teaching practices and
co-curricular programs that support students’ mastery of STEM concepts
and skills and their retention in STEM programs. Based on this research,
the committee defines evidence-based STEM educational practices and pro-
grams as those meeting at least one of the following criteria:

• the preponderance of published literature suggests that it will be
effective across settings or in the specific local setting, or
• the practice is built explicitly from accepted theories of teaching
and learning and is faithful to best practices of implementation, or
• locally collected, valid, and reliable evidence, based on a sound
methodological research approach, suggests that the practice is
effective.

MEASURING COLLEGE QUALITY IN AN ERA OF ACCOUNTABILITY
The committee defines improvement as progress toward the commit-
tee’s vision, goals, and objectives for undergraduate STEM education (see
Chapter 2). To define quality, the committee adapted a definition of higher
education quality from Matsudaira (2015), which, in turn, reflects earlier
work on quality improvement in health care delivery (Institute of Medi-
cine, 2001). Specifically, the committee defines quality in higher education
as the degree to which exposure to STEM educational offerings increases
the likelihood of desired educational outcomes. This definition focuses
on the causal effect that exposure to some STEM educational experience
(e.g., the physics program at college A; an innovative remedial mathematics
course at college B) has on advancing valued outcomes (Matsudaira, 2015;
see also Matchett, Dahlberg, and Rudin, 2016). In this case, the valued
outcomes of undergraduate STEM include mastery of STEM concepts and
skills and attainment of STEM credentials.
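
Expressed in notation, and only as an illustrative sketch of the causal framing rather than a formula drawn from Matsudaira (2015), this definition treats quality as the change in the likelihood of a valued outcome that is attributable to the educational experience:

\[
Q \;=\; \Pr(Y = 1 \mid \text{exposure}) \;-\; \Pr(Y = 1 \mid \text{no exposure}),
\]

where \(Y = 1\) denotes a desired outcome, such as demonstrated mastery of a set of STEM concepts or attainment of a STEM credential, and the two probabilities are understood as applying to otherwise comparable students.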
In developing this definition of quality, the committee considered the
larger social and political context. Responding to soaring college costs, high
attrition rates, and rising student debt, parents, employers, policy makers
and taxpayers are asking questions about the quality of higher education:
Are students learning? Are graduates earning? What is the value of higher
education? Each of these questions is complex, and the committee acknowl-
edges that they cannot be answered with single indicators. Nevertheless, the
goals, objectives, and indicators presented later in this report contribute to
answering these questions.

Employment Outcomes
In response to growing calls for accountability, one of the most widely
used methods for measuring the quality or “value” of a college or university
is to assemble and analyze data on graduates’ earnings. However, research
has demonstrated that both graduation rates and postgraduation earnings
vary widely, depending on the type and selectivity of the institution and the
characteristics of incoming students (National Academies of Sciences, Engi­
neering, and Medicine, 2016a; Matsudaira, 2015). Although economists
are beginning to develop methods to adjust graduates’ earnings to account
for the characteristics of incoming students, these methods are not yet fully
developed, and further research would be needed to develop uniform qual-
ity measures (Matsudaira, 2015).
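
One common way such adjustments are framed, sketched here in generic value-added terms as an illustration and not as the specific method discussed by Matsudaira (2015), is to compare graduates’ observed average earnings with the earnings predicted from incoming-student characteristics:

\[
\widehat{VA}_j \;=\; \bar{y}_j \;-\; \frac{1}{n_j}\sum_{i \in j} x_i'\hat{\beta},
\]

where \(\bar{y}_j\) is the average earnings of institution \(j\)’s graduates, \(n_j\) is the number of graduates observed, \(x_i\) collects student \(i\)’s incoming characteristics (for example, test scores and family income), and \(\hat{\beta}\) is estimated from a regression of earnings on those characteristics across many institutions; the residual difference is then attributed, cautiously, to the institution.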
In addition, graduates’ earnings are influenced by labor market demand,
and the wage premium for STEM graduates varies by time, place, and field
in ways that are characteristic of a market economy. Furthermore, many
STEM majors enter occupations that are not traditionally considered part
of the STEM workforce (National Science Foundation, 2016), but their
STEM knowledge may indeed contribute to their earnings (Carnevale,
Smith, and Melton, 2011); yet it is not practical to precisely identify these
workers and measure this contribution. Moreover, individuals who have
received STEM education may possess unobserved attributes that make
them incomparable to their peers without STEM education. For all of
these reasons, some experts and leaders in higher education agree that
postgraduation earnings alone are not a suitable measure of institutional
quality (e.g., Matchett, Dahlberg, and Rudin, 2016). Nevertheless, many
states have already implemented performance-based funding systems that
reward institutions on the basis of the average earnings of graduates.
Another, related method for measuring an institution’s “value” focuses
on the extent to which graduates find jobs related to their chosen fields of
study. This method, however, does not fully address the measurement chal-
lenges described above. U.S. colleges and universities intentionally offer a
variety of majors to meet students’ varying interests and demands. Some
STEM majors (such as engineering and nursing) are more closely tied to
future occupations than others (such as social science and biology). Thus,
across STEM fields, there is a large variation in the flow of students from
STEM majors to STEM occupations (Xie and Killewald, 2012), and STEM
education can be a good preparation for non-STEM careers: see Box 1-2.
Labor market demand, external to higher education, influences whether
STEM graduates find any type of job, a job in their specific STEM dis-
cipline, or a job in another STEM discipline. A graduate’s employment
prospects are also influenced by the geographical location and movement
of companies and academic research organizations, changes in domination
of certain industries and technologies, and shifting demand for different
skills as new technologies emerge and others become obsolete. We know
that rates of employment and unemployment vary by discipline. In 2013,
most STEM professionals (93.3%) were employed in their field of study,
with only 6.7 percent reporting that they worked in a job outside the field
of their highest degree because a job in their field was not available: See
Table 1-1 (National Science Foundation, 2016). Within this average, how-
ever, the rate of working outside one’s field of study varied by discipline,
with higher rates in the social, life, and physical sciences and lower rates in
engineering and computer and mathematical sciences.
In the committee’s view, care should be taken in interpreting these dif-
ferent job placement rates. They may reflect differences in employers’ skill
demands, the intended purpose of the students’ majors, or students’ selec-
tions of fields. Thus, one should not attribute the observed differences in
job placement rates simply to differences in the overall national quality of
the undergraduate education programs in each discipline. Even when STEM
majors enter occupations outside the STEM workforce, the knowledge and
skills they developed in their undergraduate STEM programs contribute to
the national economy (see “Vision,” above). In light of these complexities,
the committee does not propose any overarching goals, more specific objec-
tives, or indicators related to placement in STEM jobs.


The STEM Workforce


The committee defines the STEM workforce to include science and
engineering occupations (currently about 5.7 million people) and sci-
ence and engineering-related occupations (currently about 7.4 million
people; National Science Foundation, 2016). These two groups of oc-
cupations have been carefully defined and studied by the National Sci-
ence Foundation (2014, 2015, and 2016). In this definition (National
Science Foundation, 2016), the science and engineering occupations in-
clude computer and mathematical scientists; biological, agricultural, and
environmental life scientists; physical scientists (e.g., physicists, chemists,
geoscientists); social scientists (e.g., psychologists, economists, sociologists);
engineers; and postsecondary teachers in science and engineering fields. The
science- and engineering-related occupations include health care workers
(e.g., physicians, audiologists, nurses); science and engineering managers
(e.g., engineering managers, natural and social science managers); science
and ­engineering precollege teachers (e.g., K–12 science teachers); tech-
nologists and technicians in science and engineering; and other science and
engineering-related occupations (e.g., actuaries, architects).
The committee notes that, although the concept of a “STEM work-
force” is widely used and has been referenced in law, there is no consensus
on how it is defined. Various reports use different definitions, leading to
divergent and sometimes conflicting conclusions about the size and other
characteristics of the STEM workforce: see Box 1-2. Furthermore, the
STEM workforce is heterogeneous; it is composed of many different “sub-
workforces” that can be characterized by field of degree, occupational field,
the education level required, or some combination of these elements.
Several approaches have been proposed in recent years to define and
measure the STEM workforce or the science and engineering workforce.
One approach simply counts any job held by an individual with at least a
bachelor’s degree in science or engineering as part of the science and engi-
neering workforce; with this approach, the workforce totals 19.5 million
people (National Science Foundation, 2014). In another approach, based
on surveys of college graduates about their job requirements, 16.5 million
people indicated that their position required a bachelor’s degree level of
science and engineering knowledge (National Science Foundation, 2014).
In yet another approach, Rothwell (2013) analyzed data on skill and
knowledge requirements from the Occupational Information Network
(O*NET) national database and found that 26 million jobs required
significant STEM expertise. There is little agreement across these vari-
ous approaches. Given this lack of consensus, the committee followed
the National Science Foundation’s approach, the first one above, defin-
ing the STEM workforce to include science and engineering occupations
(currently about 5.7 million people) and science- and engineering-related
occupations (currently about 7.4 million people; National Science Foun-
dation, 2016).

BOX 1-2
Debates about Supply and Demand for Science, Engineering,
Technology, and Mathematics (STEM) Professionals

Since the 1950s, scientists, engineers, employers, and policy makers have
periodically raised alarms about the possibility of impending shortfalls in the sup-
ply of STEM professionals. One of the most visible reports, Rising Above the
Gathering Storm (National Academy of Sciences, National Academy of Engineer-
ing, and Institute of Medicine, 2007), conveyed deep concern about the future
supply of U.S. scientists and engineers at a time when other nations were rapidly
advancing in science and innovation. Lowell and Salzman (2007) disputed these
arguments, finding that the U.S. supply of graduates in science and engineering
was not only large, but also considered among the best in the world. Based on
their analysis of available data, they also argued that the U.S. education system
was producing far more science and engineering graduates than needed for the
available job openings. Studies by economists have also found little evidence of
a market shortage of scientists (see, e.g., Butz et al., 2003).
More recently, Xie and Killewald (2012) conducted a detailed analysis of
multiple datasets, finding little evidence of either an oversupply or a shortage of
U.S. scientists and engineers. They found that over the past four decades the
number of graduates in the biological and physical sciences, mathematics, and
engineering had grown, although at a slow rate. Contrary to the claims of an
oversupply, most of these graduates, especially bachelor’s and master’s degree
recipients, found jobs related to their training in these fields.1 However, the real
earnings of basic scientists generally declined over the same period, challeng-
ing the claims of a market shortage of basic scientists (a shortage should have
resulted in rising wages).
For more than two decades, the unemployment rate in STEM occupa-
tions has been considerably lower than the general unemployment rate (Na-
tional Science Foundation, 2016), a reflection of sustained demand for STEM
professionals. In 2013, a survey of STEM professionals with at least a 4-year de-
gree in a STEM field found that most (96.2%) were currently employed (National
Science Foundation, 2016). Furthermore, studies have found that individuals who
earned long-term certificates2 in STEM-intensive fields, such as health, nursing,
and transportation, gained positive economic returns on their educational invest-
ments in the form of higher wages and lower unemployment (Dadgar and Weiss,
2012; Stevens, Kurlaender, and Grosz, 2015). These recent data support Xie and
Killewald’s (2012) view that the supply of and demand for STEM professionals
overall is roughly in balance, with neither an oversupply nor a shortage.

However, demand for STEM graduates is not monolithic, varying in response
to market forces and business cycles. For example, unemployment among STEM
professionals with at least a 4-year degree spiked in the recessions of the early
and late 2000s; and in 2014, the unemployment rate remained higher, at 3.4 per-
cent, than the pre-recession rate of 2.2 percent in 2006 (National Science Founda-
tion, 2016). In addition, the rates of both unemployment and employment within
one’s field of study vary across the different STEM disciplines: see Table 1-1. In
a 2013 survey, STEM professionals with at least a 4-year degree were asked if
they were working outside the field of their highest degree because a job in that
field was not available. On average across all disciplines, 6.7 percent of gradu-
ates reported working outside their field of study, but the rate varied in different
disciplines (National Science Foundation, 2016, Tbl. 1B-1).

TABLE 1-1  Rates of Unemployment and Working Outside One’s Field of Study, 2013

                                          Percentage      Percentage Working
Discipline                                Unemployed      Outside Field of Study
Life Sciences                                 3.3                  9.4
Computer and Mathematical Sciences            3.1                  4.1
Physical Sciences                             4.5                  8.3
Social Sciences                               3.3                 11.8
Engineering                                   2.5                  4.6
All STEM Disciplines                          3.9                  6.7

SOURCE: National Science Foundation (2016, App. Tbls. 3-8 and 3-12).

1 In their analysis, Xie and Killewald (2012) did not include graduates in the social sciences.
They note that social science graduates were less likely to be employed in jobs related to
their training than were their peers in the biological and physical sciences, mathematics, and
engineering.
2 Long-term certificates are generally defined as those earned in educational programs
lasting at least 1 year.

Learning Outcomes
In Chapter 2, the committee identifies increasing students’ mastery of
STEM concepts and skills as one of three overarching goals for improving
the quality of undergraduate STEM education. However, the committee
does not propose any indicators that would directly measure student learn-
ing because of the complexities discussed here.
There is no simple way to address questions about whether students
are acquiring the STEM concepts, skills, and abilities that will serve them
for their lives after college, for several reasons. First, expectations for the
holder of an associate’s degree or certificate are different than those for
the holder of a baccalaureate degree. Second, faculty, employers, profes-
sional societies, accreditation agencies, testing companies, and curriculum
committees all have different answers about the ideal and acceptable levels
of proficiency, and, more fundamentally, about what concepts and skills
should be measured for proficiency. These groups have launched a variety
of efforts to define proficiency, some of which focus on core knowledge
and skills for all 2-year and 4-year graduates, across all fields of study (e.g.,
Asso­ciation of American Colleges & Universities, 2007; Lumina Founda-
tion, 2015), while others focus on specific disciplines (e.g., Arum, Roksa,
and Cook, 2016). Leaders in life sciences education, for example, have iden-
tified core concepts, competencies, and disciplinary practices for “biological
literacy” in undergraduate biology (Brewer and Smith, 2011).
Third, the STEM disciplines are characterized by rapid discoveries and
the ongoing development of new knowledge and skills. Within and across
these disciplines, new subdisciplines and interdisciplinary fields are continu-
ally being created, bringing differing views about the core knowledge and
skills that define successful learning. Fourth, in U.S. higher education, there
have never been national tests, graduation standards, or uniform STEM
curricula. These would be incompatible with the tradition of state and
system-level autonomy in public higher education and with the diversity
of public, private nonprofit, and private for-profit institutions that provide
undergraduate STEM education. And fifth, some learning outcomes are
ways of seeing problems, analyzing them, and solving them, with appropri-
ate tools and with collaborations among diverse groups (see, e.g., Associa-
tion of American Colleges & Universities, 2007). These outcomes are not
knowledge about specific content areas, nor can they be easily translated
to a national-level measure.
Because of these complications and practical difficulties, the committee
does not propose any indicators that would directly measure student learn-
ing. However, the committee does target increased acquisition of STEM
concepts and skills as an overarching goal for improving undergraduate
STEM education (see Chapters 2 and 3).
In the future, with the growth of online instruction and assessment,
more detailed, automated, proficiency exams and fine-grained records of
accomplishment may be available: see Box 1-3. At that time, it will be im-
portant to revisit the conceptual framework and indicators proposed in this
report. The committee envisions that its recommended indicator system will
undergo continuous improvement and updating (see Chapter 7).

BOX 1-3
The Potential of Online Education

The committee’s study was conducted at a time of rapid growth in distance
and online higher education, fueled by many of the same communications tech-
nologies that enable the globalization of STEM. Some experts envision that
inexpensive, widely accessible online courses and programs will lower the costs
and improve the quality of undergraduate education. For example, according to
Bonvillian and Singer (2013, p. 23):

The idea has been growing that universities will change dramatically, and perhaps
largely fade away, under the spread of online education increasingly enabled by
improvements in broadband Internet access and new mobile devices. Recent years
have also seen advances in the science of learning that are enabling society and
researchers to look at new education approaches. The accumulating evidence chal-
lenges the model that has long dominated higher education: the sage on the stage;
that is, the lecture.

To date, however, there is little evidence that purely online education is as effec-
tive for supporting student learning of STEM concepts and processes as face-to-
face education.
Distance education or distance learning is defined as the education of stu-
dents independent of physical presence in a traditional classroom or campus set-
ting (Maeroff, 2003). Over the past two decades, distance education (i.e., online
courses and online degree programs) has increased in representation within
the undergraduate education landscape (Radford, 2011; Ginder and Stearns,
2014). This growth has been significantly influenced not only by the exponential
rise in the development and use of digital communication tools, but also by four
more specific aims: (1) meeting students’ demands for flexibility, (2) widening
access for disadvantaged students, (3) increasing course availability, and (4)
increasing student enrollment (Parsad and Lewis, 2008). Another attractive aspect
of distance education is students’ ability to take courses across state lines without
paying out-of-state tuition, which is generally higher than in-state tuition: in 2012,
41 percent of undergraduates and 55 percent of graduate students participating in
distance education were enrolled in institutions outside of their state of residence
(Ginder and Stearns, 2014).
Undergraduate enrollment in distance education courses varies by field
of study. In 2008, students studying computer science made up the highest
share (27%) of all participants in distance education classes (Radford, 2011).
By contrast, engineering students (16%) and natural science, mathematics, and
agriculture students combined (14%) constituted the smallest shares among all
participants (Radford, 2011). The most recent data from 2012 showed that tradi-
tional classroom instruction was dominant for all STEM disciplines except com-
puter science (Snyder, de Brey, and Dillow, 2016). Thus, although the frequency
of technology-based course instruction is becoming comparable to face-to-face
instruction in some undergraduate STEM fields, the extent to which it will be ad-
opted across all disciplines is unknown.

Entire educational programs are now available through distance education.
As of 2007, 32 percent of all 2-year and 4-year institutions were offering college-
level degree or certificate programs designed to be completed entirely through
distance education (Parsad and Lewis, 2008); these programs relied most heavily
on asynchronous (not simultaneous or in real-time) Internet-based communica-
tions technology. However, research to date suggests that asynchronous online
communication, by itself, has limited effectiveness for supporting STEM learning.
A review of 13 studies that assessed learning outcomes in undergraduate
biology suggests that combining asynchronous online and synchronous face-to-
face communication is a key factor for effective learning (Biel and Brame, 2016).
Across the 13 studies, 9 found no significant difference in learning between the
students in face-to-face and online instruction, 2 found that students receiving
face-to-face instruction outperformed those receiving online instruction, and 2
found that online students performed better than those in face-to-face classrooms.
Biel and Brame (2016) concluded that well-designed online biology courses can
be at least as effective for student learning as face-to-face courses and called for
further research to illuminate the specific course elements and structures that can
maximize students’ learning of biology skills and concepts. Separately, Bowen and
colleagues (2012) compared the effectiveness of hybrid (including both face-to-
face and online interactions) and traditional formats for a statistics course offered
at six public university campuses. Using a randomized controlled study design,
they found no statistically significant differences in learning outcomes between
students in the traditional and hybrid-format sections.
One type of purely online education, massive open online courses (MOOCs),
has grown rapidly since 2012, but students frequently drop out of these courses,
possibly due to the lack of synchronous communication. Completion rates in
MOOCs have generally been below 10 percent, and only five MOOCs have been
recognized as credit-worthy by the American Council on Education (Kolowich and
Newman, 2013; Ho et al., 2014).


Goals of the Indicator System


In light of the pressures for accountability and the complexity of mea-
suring quality described above, the committee stresses that the primary goal
of the indicator system is to allow federal agencies to monitor the status
and quality of undergraduate STEM education over time (based on data
aggregated from individual institutions). For such monitoring, the proposed
indicator system can use data from nationally representative samples of
institutions and students (see Chapters 6 and 7); in contrast, a system
designed for accountability or ranking would require data from the entire
universe of 2-year and 4-year institutions.
The committee envisions that institutional data collected by federal
agencies to inform the national indicators could also be accessible to, and
used by, individual institutions, state higher education systems, or consortia
of institutions to monitor their own programs over time for the purpose
of improvement. Although such accessibility might also allow institutions,
consortia, states, or individuals to compare and rate institutions with the
goal of holding them accountable, that is not the intended purpose of the
indicator system. Rather, the committee expects that the indicator system
will be used by the National Science Foundation and other federal agencies
to monitor nationwide progress toward improving the quality of STEM un-
dergraduate education. The committee also anticipates that the interagency
committee on STEM education will use the indicator system as it works to
advance the objectives for undergraduate STEM education identified in the
federal STEM education 5-year strategic plan of the National Science and
Technology Council.

STUDY APPROACH AND ORGANIZATION OF THE REPORT


To address its charge, the committee met seven times over the course
of the study. The meetings were organized to allow the committee to con-
sider the testimony of expert presenters and to deliberate privately on
the weight of existing evidence. In Phase I, the committee gathered and
reviewed a wide catalog of literature on discipline-based education research
and change strategies for improving the quality of undergraduate STEM. It
also obtained information on existing systems for monitoring the quality of
undergraduate education, both generally and in STEM specifically by invit-
ing outside experts to speak to the committee and convening a workshop
in February 2016 (see Appendix C). Drawing on these sources of evidence
and its own expert judgment, the committee developed a draft report with
a conceptual framework of goals and objectives for undergraduate STEM,
releasing it for public comment in August 2016. In addition to soliciting
comments and feedback online, the committee convened a public workshop
in October 2016 to obtain further input (see Appendix A for a summary of
comments received and committee responses). In Phase II of the study, the
committee met four times in closed session to deliberate on the public com-
ments, revise the framework of goals and objectives, and develop indicators
of progress toward the objectives.
Based on the vision presented above, and its deliberations about the
preliminary conceptual framework created in Phase I, the committee iden-
tified three overarching goals for STEM education: (1) increase students’

mastery of STEM concepts and skills; (2) strive for equity, diversity, and
inclusion; and (3) ensure adequate numbers of STEM professionals. These
three goals are discussed in greater detail in Chapter 2. In Phase II, the com-
mittee also reviewed additional literature as it deliberated on the proposed
indicators and developed conclusions and recommendations for research
and data collection to develop the indicator system. Throughout the study
process, committee members drafted sections of text, which were shared,
reviewed, edited, and revised across members of the entire committee.
The report is organized around the major tasks outlined in the com-
mittee’s charge. Chapter 2 presents the conceptual framework for the indi-
cator system; Chapters 3, 4, and 5 discuss the committee’s three goals for
improvement in undergraduate STEM, along with objectives and indicators
to measure progress toward those goals. Chapter 6 reviews existing moni-
toring systems and data sources related to undergraduate STEM educa-
tion, and Chapter 7 discusses alternative approaches to implementing the
indicator system.

REFERENCES
American Association for the Advancement of Science. (2011). Vision and Change in Un-
dergraduate Biology Education: A Call to Action. Washington, DC: Author. Available:
http://visionandchange.org/finalreport [July 2017].
Arum, R., Roksa, J., and Cook, A. (2016). Improving Quality in American Higher Education:
Learning Outcomes and Assessments for the 21st Century. Hoboken, NJ: John Wiley
& Sons.
Association of American Colleges & Universities. (2007). College Learning for the New
Global Century. Washington, DC: Author. Available: https://www.aacu.org/sites/default/
files/files/LEAP/GlobalCentury_final.pdf [July 2017].
Biel, R., and Brame, C. (2016). Traditional versus online biology courses: Connecting course
design and student learning in an online setting. Journal of Microbiology & Biology
Education, 17, 417–422.
Bonvillian, W.B., and Singer, S.R. (2013). The online challenge to higher education. Issues in
Science and Technology, 29(4). Available: http://issues.org/29-4/the-online-challenge-to-
higher-education [October 2017].
Bowen, W.G., Chingos, M.M., Lack, K.A. and Nygren, T.I. (2012). Interactive Learning
Online at Public Universities: Evidence from Randomized Trials. Available: http://www.
sr.ithaka.org/wp-content/uploads/2015/08/sr-ithaka-interactive-learning-online-at-public-
universities.pdf [October 2017].
Brancaccio-Taras, L., Pape-Lindstrom, P., Peteroy-Kelly, M., Aguirre, K., Awong-Taylor, J.,
Balser, R., Cahill, M.J., Frey, R.G., Jack, R., Kelrick, M., Marley, K., Miller, K.G.,
Osgood, M., Romano, S., Uzman, J.A., and Zhao, J. (2016). The PULSE vision and
change rubrics, version 1.0: A valid and equitable tool to measure transformation of life
sciences departments at all institution types. CBE-Life Sciences Education, 15(4), art. 60.
Available: http://www.lifescied.org/content/15/4/ar60.full [March 2017].

Brewer, C.A., and Smith, D. (2011). Vision and Change in Undergraduate Biology Education:
A Call to Action. Final Report of a National Conference organized by the American As-
sociation for the Advancement of Science. Washington, DC: American Association for
the Advancement of Science. Available: http://visionandchange.org/files/2011/03/Revised-
Vision-and-Change-Final-Report.pdf [May 2017].
Brizius, J.A., and Campbell, M.D. (1991). Getting Results: A Guide for Government Account-
ability. Washington, DC: Council of Governors’ Policy Advisors.
Butz, W.P., Bloom, G.A., Gross, M.E., Kelly, T.K., Kofner, A., and Rippen, H.E. (2003). Is
There a Shortage of Scientists and Engineers? How Would We Know? RAND Science
and Technology Issue Paper. Available: https://www.rand.org/content/dam/rand/pubs/
issue_papers/2005/IP241.pdf [August 2017].
Carnevale, A.P., Smith, N., and Melton, M. (2011). STEM: Science, Technology, Engineer-
ing, and Mathematics. Washington, DC: Georgetown University Center on Education
and the Workforce. Available: https://cew.georgetown.edu/wp-content/uploads/2014/11/
stem-complete.pdf [July 2017].
Dadgar, M., and Weiss, M.J. (2012). Labor Market Returns to Sub-Baccalaureate Credentials:
How Much Does a Community College Degree or Certificate Pay? CCRC Working
Paper No. 45. New York: Columbia University, Teachers College, Community College
Research Center.
Ginder, S., and Stearns, C. (2014). Web Tables: Enrollment in Distance Education Courses by
State: Fall 2012. NCES 2014-023. Washington, DC: U.S. Department of Education, Na-
tional Center for Education Statistics. Available: https://nces.ed.gov/pubsearch/pubsinfo.
asp?pubid=2014023 [August 2017].
Hill, H., and Grossman, P. (2013). Learning from teacher evaluation: Challenges and oppor-
tunities. Harvard Educational Review, 82(1), 123–141.
Ho, A.D., Reich, J., Nesterko, S., Seaton, D.T., Mullaney, T., Waldo, J., and Chuang, I. (2014).
HarvardX and MITx: The First Year of Open Online Courses. HarvardX and MITx
Working Paper No. 1. Available: https://harvardx.harvard.edu/multiple-course-report
[August 2017].
Holzer, H.J., and Lerman, R.I. (2007). America’s Forgotten Middle Skill Jobs: Education and
Training Requirements in the Next Decade and Beyond. Available: http://www.urban.
org/sites/default/files/publication/31566/411633-America-s-Forgotten-Middle-Skill-Jobs.
PDF [March 2017].
Institute of Medicine. (2001). Crossing the Quality Chasm: A New Health System for the 21st
Century. Washington, DC: National Academy Press.
Kolowich, S., and Newman, J. (2013). The professors behind the MOOC hype. The Chronicle
of Higher Education, March 18. Available: http://www.chronicle.com/article/The-
Professors-Behind-the-MOOC/137905 [August 2017].
Lowell, B.L., and Salzman, H. (2007). Into the Eye of the Storm: Assessing the Evidence
on Science and Engineering Education, Quality, and Workforce Demand. Washing-
ton, DC: The Urban Institute. Available: http://www.urban.org/sites/default/files/
publication/46796/411562-Into-the-Eye-of-the-Storm.PDF [August 2017].
Lumina Foundation. (2015). The Degree Qualifications Profile: A Learning-Centered Frame-
work for What College Graduates Should Know and Be Able to Do to Earn the Associ-
ate’s, Bachelor’s or Master’s Degree. Indianapolis, IN: Lumina Foundation. Available:
https://www.luminafoundation.org/files/resources/dqp.pdf [November 2015].
Maeroff, G.I. (2003). A Classroom of One: How Online Learning Is Changing Our Schools
and Colleges. New York: Palgrave Macmillan.

Matchett, K., Dahlberg, M., and Rudin, T. (2016). Quality in the Undergraduate Experience:
What Is It? How Is It Measured? Who Decides? Summary of a Workshop. Washing-
ton, DC: The National Academies Press. Available: http://www.nap.edu/catalog/23514/
quality-in-the-undergraduate-experience-what-is-it-how-is [July 2016].
Matsudaira, J. (2015). Defining and Measuring Quality in Higher Education. Paper com-
missioned for the Board on Higher Education and the Workforce Meeting on Quality
in Higher Education, December 14-15. Available: http://sites.nationalacademies.org/cs/
groups/pgasite/documents/webpage/pga_170937.pdf [July 2017].
Mankiw, N.G. (2003). Principles of Microeconomics (third ed.). Boston, MA: South-Western
College.
National Academies of Sciences, Engineering, and Medicine. (2016a). Barriers and Oppor-
tunities for 2-Year and 4-Year STEM Degrees: Systemic Change to Support Students’
Diverse Pathways. Washington, DC: The National Academies Press. Available: http://
www.nap.edu/catalog/21739/barriers-and-opportunities-for-2-year-and-4-year-stem-
degrees [March 2016].
National Academies of Sciences, Engineering, and Medicine (2016b). Science Literacy: Con-
cepts, Contexts, and Consequences. Washington, DC: The National Academies Press.
Available: https://www.nap.edu/catalog/23595/science-literacy-concepts-contexts-and-
consequences [August 2017].
National Academy of Sciences, National Academy of Engineering, and Institute of Medicine.
(2007). Rising Above the Gathering Storm: Energizing and Employing America for a
Brighter Future. Washington, DC: The National Academies Press. Available: http://www.
nap.edu/catalog/11463/rising-above-the-gathering-storm-energizing-and-employing-
america-for [March 2016].
National Academy of Sciences, National Academy of Engineering, and Institute of Medi-
cine. (2010). Rising Above the Gathering Storm, Revisited: Rapidly Approaching Cat-
egory 5. Washington, DC: The National Academies Press. Available: https://www.nap.
edu/catalog/11463/rising-above-the-gathering-storm-energizing-and-employing-america-
for [August 2017].
National Research Council. (2011). Expanding Underrepresented Minority Participation:
America’s Science and Technology Talent at the Crossroads. Washington, DC: The
National Academies Press. Available: https://www.nap.edu/catalog/12984/expanding-
underrepresented-minority-participation-americas-science-and-technology-talent-at [Au-
gust 2017].
National Research Council. (2012). Discipline-Based Education Research: Understanding
and Improving Learning in Undergraduate Science and Engineering. Washington, DC:
The National Academies Press. Available: http://www.nap.edu/catalog/13362/discipline-
based-education-research-understanding-and-improving-learning-in-undergraduate
[March 2016].
National Research Council. (2014). Capturing Change in Science, Technology, and Innovation:
Improving Indicators to Inform Policy. Washington, DC: The National Academies Press.
Available: https://www.nap.edu/catalog/18606/capturing-change-in-science-technology-
and-innovation-improving-indicators-to [August 2017].
National Science and Technology Council. (2013). Federal STEM Education 5-Year Strate-
gic Plan. Available: https://www.whitehouse.gov/sites/default/files/microsites/ostp/stem_
stratplan_2013.pdf [March 2016].
National Science Foundation. (2014). Science and Engineering Indicators 2014. Arlington, VA:
Author. Available: https://www.nsf.gov/statistics/seind14 [February 2018].
National Science Foundation. (2015). Revisiting the STEM Workforce: A Companion to Sci-
ence and Engineering Indicators 2014. Arlington, VA: Author. Available: http://www.nsf.
gov/pubs/2015/nsb201510/nsb201510.pdf [March 2016].

National Science Foundation. (2016). Science and Engineering Indicators 2016. Arlington, VA:
Author. Available: https://www.nsf.gov/statistics/2016/nsb20161/# [July 2017].
National Science Foundation, National Center for Science and Engineering Statistics. (2017).
Women, Minorities, and Persons with Disabilities in Science and Engineering: 2017.
Special Report NSF 17-310. Arlington, VA: Author. Available: https://www.nsf.gov/
statistics/2017/nsf17310 [August 2017].
Oakes, J. (1986). Educational Indicators: A Guide for Policymakers. Santa Monica, CA:
Center for Policy Research in Education.
Odden, A. (1990). Educational indicators in the United States: The need for analysis. Educa-
tional Researcher, 19(5), 24–29.
Parsad, B., and Lewis, L. (2008). Distance Education at Degree-Granting Postsecondary
Institutions: 2006–07 (NCES 2009–044). National Center for Education Statistics, Insti-
tute of Education Sciences, U.S. Department of Education. Washington, DC. Available:
https://nces.ed.gov/pubs2009/2009044.pdf [August 2017].
Planty, M., and Carlson, D. (2010). Understanding Education Indicators: A Practical Primer
for Research and Policy. New York: Teachers College Press.
President’s Council of Advisors on Science and Technology. (2012). Engage to Excel: Produc-
ing One Million Additional College Graduates with Degrees in Science, Technology,
Engineering, and Mathematics. Available: https://www.whitehouse.gov/sites/default/files/
microsites/ostp/pcast-engage-to-excel-final_feb.pdf [March 2016].
Radford, A.W. (2011). Stats in Brief: Learning at a Distance: Undergraduate Enrollment in
Distance Education Courses and Degree Programs. (NCES 2012-154). Washington,
DC: National Center for Education Statistics, U.S. Department of Education. Available:
https://nces.ed.gov/pubs2012/2012154.pdf [August 2017].
Romer, P.M. (1990). Endogenous technological change. Journal of Political Economy, 98(5,
Part 2), S71–S102.
Rothwell, J. (2013). The Hidden STEM Economy. Metropolitan Policy Program at
Brookings Institution. Available: http://www.brookings.edu/~/media/research/files/
reports/2013/06/10-stem-economy-rothwell/thehiddenstemeconomy610.pdf [July 2017].
Shavelson, R.J., McDonnell, L.M., and Oakes, J. (1989). Indicators for Monitoring Mathematics
and Science Education. Santa Monica, CA: RAND. Available: http://www.rand.org/pubs/
reports/R3742.html [July 2017].
Snyder, T.D., de Brey, C., and Dillow, S.A. (2016). Digest of Education Statistics 2014.
(NCES 2016-006). Washington, DC: National Center for Education Statistics, Institute
of Education Sciences, U.S. Department of Education. Available: https://nces.ed.gov/
pubs2016/2016006.pdf [August 2017].
Solow, R.M. (1957). Technical change and the aggregate production function. The Review of
Economics and Statistics, 39, 312–320.
Stevens, A.H., Kurlaender, M., and Grosz, M. (2015). Career Technical Education and Labor
Market Outcomes: Evidence from California Community Colleges. (NBER Working
Paper No. 21137). Cambridge, MA: National Bureau of Economic Research. Available:
http://www.nber.org/papers/w21137 [August 2017].
Weisberg, D., Sexton, S., Mulhern, J., and Keeling, D. (2009). The Widget Effect: Our National
Failure to Acknowledge and Act on Differences in Teacher Effectiveness. Brooklyn, NY:
The New Teacher Project.
Wilson, S.M., and Anagnostopoulos, D. (in press). The seen and the foreseen: Will unintended
consequences thwart efforts to (re)build trust in teacher preparation? Journal of Teacher
Education.
Xie, Y., and Killewald, A.A. (2012). Is American Science in Decline? Cambridge, MA: Harvard
University Press.

Conceptual Framework for the Indicator System

Chapter 1 introduced the study charge and briefly described trends
in the larger social, economic, educational, and scientific and tech-
nological context that may influence the quality of undergraduate
STEM education. In this chapter the committee focuses more narrowly on
dimensions of undergraduate STEM education that are closely related to
student learning and success, presenting a simplified conceptual framework
to guide its development of indicators.
As background for discussing the framework, the committee notes its
conceptual process of arriving at the indicators proposed in this report.
First, the committee adopted a systems perspective on higher education.
Then, it identified three overarching goals for improving undergraduate
STEM education, asking: What are the key targets that represent the best
leverage toward the committee’s vision for undergraduate STEM educa-
tion? After identifying these goals, the committee then operationalized
each one by identifying specific objectives, or elements of the goal, that
need to be addressed in order to meet the goal in its entirety. Identifying
these discrete objectives, described below, allowed the committee to move
forward with developing specific indicators, designed to measure progress
toward meeting the specified objectives, and ultimately, to monitor the
status and quality of undergraduate STEM education. The committee’s
conceptual framework represents the process of students moving through
higher education as institutions seek to produce graduates capable of
meeting the grand challenges of society (as mentioned in Chapter 1): see
Figure 2-1. This overall framework will enable readers to envision the
conceptual basis for the proposed indicator system, indicating how each

goal (and supporting objectives) maps onto the higher education system in
all its complexity: see Figure 2-2.

FIGURE 2-1  Basic conceptual framework. The figure depicts four components: Inputs (incoming students), Educational Processes (students experience evidence-based STEM education), the Educational Environment (aspects of departments, programs, and institutions that affect the quality of undergraduate STEM educational processes), and Outcomes (graduates with STEM knowledge and skills), all set within a societal context of global competition, increasingly diverse students, accountability pressures, and changing technology.

A SYSTEMS VIEW OF HIGHER EDUCATION


As a first step in developing the conceptual framework, the commit-
tee adopted an organizational systems perspective (Katz and Kahn, 1966,
1978). Viewing U.S. undergraduate education as a complex, open system
facilitates understanding of how to improve it. For example, Austin (2011)
used a systems approach to identify and understand factors influencing in-
dividual STEM faculty members’ decisions about adopting evidence-based
teaching strategies. Adopting a similar systems perspective, the committee’s
conceptual framework begins with a generic process model of the higher
education system: see Figure 2-1. The model has four components: inputs
(students entering higher education); processes (educational experiences
of the students); environment (which shapes the process); and outcomes
(students leaving higher education with skills and knowledge).
The committee recognizes that the process of undergraduate STEM
education is not always linear, as depicted in Figure 2-1. Students follow
increasingly complex trajectories through undergraduate education, trans-
ferring across institutions, dropping out for periods, and switching into and
out of STEM majors at different times (National Academies of Sciences,
Engineering, and Medicine, 2016a). STEM instructors—including faculty
members, adjunct faculty, graduate teaching assistants, and all o ­ thers who
teach undergraduates—work within multiple, interacting contexts and feed-
back loops that influence decisions about teaching. Austin (2011) viewed

these contexts—including departments, colleges, institutions, and such
external groups as accrediting associations, parents and employers, and
state and federal governments—as "levels" of the system that influence
instructors' work. She also observed that various elements in organizations
can serve either as useful levers for or as barriers to change. Key levers in
colleges and universities that may encourage or discourage adoption of
evidence-based teaching strategies include evaluation and reward systems,
workload allocation, professional development opportunities, and the stra-
tegic use of leadership practices. Thus, linear approaches to change that
address only one factor or intervention are unlikely to lead to sustained
adoption of evidence-based teaching practices. Austin (2011) concluded
that, in such complex organizations, change efforts are most likely to be
effective when they use both a "top-down" and a "bottom-up" approach,
consider the multiple factors and contexts that influence instructors' work,
and strategically use multiple levers of change.

FIGURE 2-2  Detailed conceptual framework, mapping the objectives onto the framework components.
Inputs (incoming students): 2.1 Equity of access to high-quality undergraduate STEM programs and experiences.
Educational Processes: 1.1 Use of evidence-based educational practices; 1.4 Continuous improvement; 3.1 Foundational preparation for STEM for all students; 3.2 Successful navigation.
Educational Environment: 1.2 Supports that help STEM instructors use evidence-based practices; 1.3 Institutional culture that values undergraduate STEM; 2.3 Representational diversity among STEM instructors; 2.4 Inclusive institutions and STEM departments.
Outcomes (graduates with STEM knowledge and skills): 2.2 Representational equity among STEM credential earners; 3.3 STEM credential attainment.

The committee recognizes that the environment surrounding under-
graduate STEM includes not only the external groups and individuals
mentioned above (accreditors, parents, employers), but also others, such as
disciplinary bodies and K-12 educators. However, the committee’s generic
process model focuses on the most immediate components of the higher
education environment—departments, colleges, and institutions—reflecting
its charge to develop indicators for undergraduate STEM education.

GOALS FOR UNDERGRADUATE STEM EDUCATION


Following from this model of higher education as a complex system,
the committee addressed its charge to “identify objectives for improving
undergraduate STEM education and a set of indicators to document the
status and quality of undergraduate STEM education at the national level
over multiple years.” The committee identified three overarching goals for
improving the quality of undergraduate STEM education. It then drew
on relevant literature to identify objectives related to each of the three goals.
The specific targets for improvement reflected in these goals and objectives
provide a focus for monitoring the status of undergraduate STEM educa-
tion over time.

GOAL 1: Increase Students' Mastery of STEM Concepts and Skills by Engaging Them in Evidence-Based STEM Educational Practices and Programs. Engage undergraduate students in STEM learning experiences and programs backed by research and supported by evidence.

GOAL 2: Strive for Equity, Diversity, and Inclusion of STEM Students and Instructors by Providing Equitable Opportunities for Access and Success. Broaden participation such that the students participating in undergraduate STEM programs are representative of the demographics of the national population of undergraduate students. Ensure that STEM learning environments are inclusive and effectively engage and educate diverse learners.

GOAL 3: Ensure Adequate Numbers of STEM Professionals by Increasing Completion of STEM Credentials as Needed in the Different STEM Disciplines. Increase the number of graduates and certificate holders to meet the grand challenges of society.

In developing goals and the objectives that follow from them, the com-
mittee considered not only its basic framework (refer to Figure 2-1) but also
other models of change in higher education (e.g., Elrod and Kezar, 2015,
2016; Henderson, Beach, and Finkelstein, 2011). These various models of
undergraduate education as a complex, interacting system were helpful as
the committee considered the most important levers for improvement and
identified objectives to be monitored through an indicator system.
In addition, the committee derived its goals in part from a similar set
of statements in the recent report, Monitoring Progress Toward Successful
K–12 STEM Education (National Research Council, 2013), which in turn
followed a related report on K–12 STEM education (National Research

Council, 2011). Although there are clear parallels between the goals dis-
cussed in that pair of reports and the committee’s three goals, the commit-
tee’s goals reflect the different challenges and contexts of the K–12 and the
higher education sectors. In response to policy makers’ questions and in-
creasing accountability pressures, the higher education sector is particularly
concerned about students’ outcomes, especially the employment outcomes
that are reflected in Goal 3. However, ensuring adequate numbers of STEM
professionals (Goal 3) will not be possible without first attending to the
STEM educational processes and environment reflected in Goals 1 and 2.
These three goals are interconnected and mutually supportive, targeting
improvement in various elements of the undergraduate education system
and the interactions of these elements that together will enhance students’
success in STEM education. Advancing the goals will require strategic use
of multiple change levers within and across the multiple levels of the higher
education system, using both top-down and bottom-up approaches (Austin,
2011). The goals are applicable to all varieties of undergraduate STEM
educational experiences and are designed to enhance those experiences to
the greatest extent possible. The systems perspective reflected in these goals
is also essential in developing indicators to monitor progress, because an
educational indicator system not only measures an educational system’s
inputs, processes, and outputs, but also suggests how they work together to
produce an overall effect on students (Odden, 1990, pp. 24-25; Shavelson,
McDonnell, and Oakes, 1991).
A growing body of research has identified the STEM teaching and
learning experiences and equity and inclusion strategies that support all
students’ mastery of STEM concepts and skills and persistence to gradua-
tion. Widely deploying these evidence-based processes is essential to ensure
adequate numbers of STEM professionals. As noted in Chapter 1, the most
rapidly growing groups within the general population are often underrep-
resented in STEM education and employment fields. These groups provide
an untapped resource of talent, and Goal 2 focuses on changing the edu-
cational processes and environment to increase their engagement and suc-
cess in undergraduate STEM education (Summers and Hrabowski, 2006;
National Academy of Sciences, National Academy of Engineering, and
Institute of Medicine, 2011; National Academies of Sciences, Engineering,
and Medicine, 2016a). Thus, advancing the three complementary goals will
sustain a robust STEM workforce that contributes to national economic
growth and international competitiveness (President’s Council of Advisors
on Science and Technology, 2012; Xie and Killewald, 2012). The rest of this
section discusses the committee’s three goals in more detail.

Goal 1: Increase Students’ Mastery of STEM Concepts and Skills


As noted in Chapter 1, parents, employers, policy makers, and society
at large often ask whether, and to what extent, students are learning the
content, skills, and abilities that will serve them for their lives and careers
after graduation. And as discussed in Chapter 1, there is no simple way to
answer this question.
Although there are no agreed-upon measures of students’ STEM learn-
ing, an abundance of research has demonstrated that certain common
approaches to teaching, learning, and co-curricular programs can improve
student learning and degree completion in STEM disciplines (Fairweather,
2012; Kober, 2015; National Research Council, 2012). At the same time,
research has shown that poor instructional strategies often discourage
persistence in STEM programs of study even among students who are
academically capable of engaging in them and were originally interested in
STEM fields (Correll, Seymour, and Hewitt, 1997). However, the strategies
that have evidence of effectiveness have not yet been widely implemented
(National Research Council, 2012). Goal 1 addresses this problem, call-
ing for broad implementation of teaching approaches and programs that
researchers have identified as most effective for helping students master
core STEM concepts and skills. Doing so will require that institutions and
their academic units examine their underlying approaches to teaching
and curricular design (Elrod and Kezar, 2015, 2016; Henderson, Beach, and
Finkelstein, 2011; Weaver et al., 2015). Institutions will also need to con-
sider the educational experiences they offer outside the classroom, such as
internships, mentoring, and advising.
The work of engaging students in evidence-based teaching and learn-
ing experiences and co-curricular programs rests partly on the shoulders of
instructors and staff—those who are on the front lines in education. How-
ever, as noted above, the work of these individuals is embedded in a com-
plex system that involves norms, resources, evaluation systems, and reward
and recognition practices in departmental, institutional, and disciplinary
cultures—the educational environment depicted in Figure 2-1 (Austin, 2011;
Weaver et al., 2015). Thus, increasing the use of evidence-based STEM
educational practices can only happen with support from departmental and
institutional cultures. Such support includes real alignment between institu-
tions’ statements about the value of undergraduate teaching and learning
and the explicit valuing of teaching by those institutions. Reward and rec-
ognition structures will need to be part of that explicit valuing and robust,
reliable forms of evaluating instruction will have to be put in place to make
that possible. These approaches will also allow the improvement of students’
educational experiences to be implemented in a scholarly way, based on
existing literature and depending on evidence for continuous improvement.

Goal 2: Strive for Equity, Diversity, and Inclusion


Goal 2 involves broadening participation so that the students who par-
ticipate in postsecondary STEM programs are representative of the national
population who could participate in those programs and ensuring that
STEM learning environments are inclusive and effectively engage and edu-
cate diverse learners. Equity, diversity, and inclusion are distinct concepts;
yet, all three are critically important to ensuring that the STEM educational
system meets the nation’s needs and serves all people (Association of Ameri-
can Colleges & Universities, 2015; Withem et al., 2015).
The goal of striving for equity, diversity, and inclusion so that STEM
learners are as diverse as the country’s national talent pool and that STEM
workforce opportunities are equally available to all is both ethical and
critical to continuing national innovation and global competitiveness. In
comparison with previous generations of undergraduates, today’s under-
graduate students are more likely to be female, Black, Hispanic, from
low-income families, and single parents (National Academies of Sciences,
Engineering, and Medicine, 2016a). Although recent data show that these
populations are as interested in STEM fields as their white peers, they
are far less likely to complete STEM degrees. Retaining diverse students
in STEM who reflect the national population is essential to achieving the
increased numbers of STEM students called for in Goal 3.
Many of today’s most challenging scientific and technical issues are
global in nature and can best be addressed by combining diverse exper-
tise across disciplinary boundaries, along with community perspectives
(National Research Council, 2015). Recent research suggests that science
teams comprised of ethnically and geographically diverse members may
be more effective than those that are more homogeneous (Freeman and
Huang, 2014a,b). More broadly, as the national economy continues to
recover from recession, providing equitable employment opportunities for
women, minorities, and people with disabilities would facilitate economic
growth and reduce income inequality, according to OECD (2016).
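
Several of the indicators proposed later in this report for Goal 2 compare the demographic composition of STEM credential earners with that of a reference population. As a purely illustrative sketch, and not a measure the committee prescribes, the short Python fragment below computes a representation ratio, one common way to summarize such a comparison; the function name and the counts are hypothetical.

def representation_ratio(group_stem, total_stem, group_all, total_all):
    """Share of a demographic group among STEM credential earners divided
    by its share among credential earners in all fields; a value of 1.0
    indicates parity, and values below 1.0 indicate underrepresentation."""
    return (group_stem / total_stem) / (group_all / total_all)

# Hypothetical counts for one demographic group in one graduating cohort.
ratio = representation_ratio(group_stem=1200, total_stem=10000,
                             group_all=30000, total_all=150000)
print(round(ratio, 2))  # 0.12 / 0.20 = 0.6

A ratio well below 1.0 for a group, sustained over time, would signal the kind of representational gap that the Goal 2 objectives are intended to close.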

Goal 3: Ensure Adequate Numbers of STEM Professionals


Goal 3 seeks to increase the numbers of students who complete STEM
credential and degree programs, both to meet demand for STEM profes-
sionals in some fields of STEM and to prepare these graduates to participate
fully in society. The committee assumes that engaging students in evidence-
based STEM educational practices (Goal 1) and ensuring their full inclusion
and equity (Goal 2) will increase the number of students who receive STEM
credentials (Goal 3). Progress toward Goal 3 will be influenced by many
factors, such as admission review processes, summer “bridge” programs,

and recruitment practices. In proposing Goal 3, the committee assumes that
a more technologically oriented society requires more people with expertise
in science and engineering for economic success and to meet global competi-
tion (President’s Council of Advisors on Science and Technology, 2012; Xie
and Killewald, 2012). But it does not assume that all STEM graduates will
be part of the STEM workforce to reap these economic benefits. Rather,
the committee thinks that increasing the number of people with an under-
standing of STEM ideas and ways of thinking will benefit all segments of
society. As discussed in Chapter 1, STEM knowledge and skills are valuable
in a broad range of occupations, beyond those formally classified as STEM
occupations (Carnevale, Smith, and Melton, 2011; Rothwell, 2013).

ARTICULATING GOALS AS OBJECTIVES


The conceptual framework represented by the committee’s generic
process model and its three overarching goals could be articulated as many
different potential objectives for improving undergraduate STEM educa-
tion. To identify what it considers to be the most important objectives for
improving the quality of undergraduate STEM, the committee took several
steps, which are discussed below.

The Federal STEM Education Strategic Plan


Following the committee charge to consider the federal STEM educa-
tion strategic plan as a starting point, we reviewed the plan of the National
Science and Technology Council (NSTC) (2013), which included the goal
of enhancing undergraduates’ STEM experiences as a way to reduce stu-
dent attrition from STEM majors and thus help achieve the prior federal
goal of graduating 1 million additional STEM majors over the next decade
(President’s Council of Advisors on Science and Technology, 2012). The
committee’s framework follows a similar approach.
As noted in Chapter 1, NSTC identified four strategic objectives for
improving students’ undergraduate experiences and reducing attrition:
(1) promoting evidence-based instructional practices; (2) improving STEM
experiences in community colleges; (3) expanding undergraduate research
experiences; and (4) advancing success in the key gateway of introductory
mathematics. The committee adopted NSTC objective (1), modifying it to
incorporate aspects of NSTC objective (3) (see below). Because its charge
encompasses STEM education at both 2-year and 4-year institutions, the
committee’s proposed objectives aim for improvement at both types of insti-
tutions and we did not adopt a specific objective similar to NSTC objective
(2). The committee adopted NSTC objective (4) but broadened it to address

retention of students in key gateway courses in all STEM fields. The com-
mittee notes that elements of NSTC objectives (2) and (3) were specific to
the federal government’s role, calling for increased federal support of cer-
tain aspects of undergraduate STEM and do not represent broad national
objectives for the U.S. higher education system as a whole.

Criteria for Identifying Objectives


Students’ attainment of STEM credentials (e.g., certificates, degrees)
and development of STEM knowledge and skills are complex processes,
influenced by a variety of factors that operate within and across multiple
layers of the educational system. Although each student (i.e., background,
cognitive and social-psychological characteristics, and level of preparation)
is a central actor in these processes, it is now widely understood that a
student’s experiences, the larger college environment, and the instructors
and staff all play a critical role in a student’s progress (e.g., Astin, 1993;
Braxton, 2000; Kuh et al., 2007; Tinto, 1993). This is true for students in
all fields, and it has been specifically demonstrated for students in STEM
fields (Xie, Fang, and Shauman, 2015).
To identify the most important objectives for improving undergradu-
ate STEM within this complex system, the committee reviewed research
related to its three goals, focusing on the factors identified in the research
as most critical for advancing these goals. Drawing on the literature review
in a related National Academies study (National Academies of Sciences,
Engineering, and Medicine, 2016a), the committee considered factors at
multiple levels of the higher education system. To weigh the importance of
various factors emerging from the literature related to each goal, the com-
mittee adopted the following criteria for identifying objectives:

1. Evidence of importance or efficacy to STEM educational outcomes:
To what extent is there evidence to link the objective to the desired
outcomes? The committee sought to identify the most important,
high leverage points within the higher education system depicted
in Figure 2-1.
2. Applicability across multiple institution types. To what extent is the
objective relevant to all of the diverse types of 2-year and 4-year,
public, and private higher education institutions in the United
States? This criterion reflects the committee’s charge to develop
objectives for improving undergraduate STEM at both 2-year and
4-year institutions and to develop a national indicator system rel-
evant across all types of institutions.

3. Emphasis on first 2 years. To what extent is the objective relevant
to the first 2 years of undergraduate STEM? This criterion reflects
the committee’s charge to focus on the first 2 years of undergradu-
ate STEM. Given that STEM course-taking and performance dur-
ing the first 2 years of college are key determinants of persistence in
STEM (Bettinger, 2010; Chen and Soldner, 2013) and that much at-
trition from STEM programs occurs within the first 2 years (Chang
et al., 2008; Seymour and Hewitt, 1997), these years are critical for
improving student success in STEM. Thus, the framework empha-
sizes objectives relevant to the first 2 years, while still leaving room
to include highly important processes or characteristics relevant
beyond the first 2 years.

The committee also considered several cross-cutting issues. As men-
tioned in Chapter 1, institutions of higher education across the country
have enormously different aims and missions related to STEM education
and, relatedly, serve hugely diverse student populations. As a result, STEM
coursework and curricular standards vary from institution to institution,
and students who may be prepared to do the work in one institution may
be ill-equipped to meet the standards of another. Across the objectives listed
below it is critical to consider the necessary differences in how institutions
meet their own stated goals in light of the preparation and expectations of
their student populations.

The Objectives
The committee selected 11 objectives for improving undergraduate
STEM, grouped under the committee’s three overarching goals.

GOAL 1: Increase Students' Mastery of STEM Concepts and Skills by
Engaging Them in Evidence-Based STEM Educational Practices and
Programs.
1.1 Use of evidence-based STEM educational practices both in and
outside of classrooms
1.2 Existence and use of supports that help instructors use evidence-
based STEM educational practices
1.3 An institutional culture that values undergraduate STEM education
1.4 Continuous improvement in STEM teaching and learning

GOAL 2: Strive for Equity, Diversity, and Inclusion of STEM Students
and Instructors by Providing Equitable Opportunities for Access and
Success.
2.1 Equity of access to high-quality undergraduate STEM educational
programs and experiences
2.2 Representational diversity among STEM credential earners
2.3 Representational diversity among STEM instructors
2.4 Inclusive environments in institutions and STEM departments

GOAL 3: Ensure Adequate Numbers of STEM Professionals.


3.1 Foundational preparation for STEM for all students
3.2 Successful navigation into and through STEM programs of study
3.3 STEM credential attainment

These objectives and their relationship to the three goals are shown in
Figure 2-2. The objectives are designed to improve the quality in each com-
ponent of the basic conceptual framework: inputs, processes, environment,
and outcomes. However, the objectives primarily target improvement of the
educational processes, environments, and outcomes. Although the inputs,
the incoming students, influence the quality of undergraduate STEM educa-
tion, some of the characteristics of the students reflect K–12 preparation,
which lies outside the scope of the study charge.
The detailed framework shown in Figure 2-2 illustrates students’ en-
trance to 2-year or 4-year colleges, their STEM-related learning experiences
inside and outside the classroom, the environments that surround students
and instructors, and student outcomes, including credentials and knowledge
of STEM concepts and skills.

PROPOSED INDICATORS
The objectives identified in the detailed framework drove the com-
mittee’s development of indicators: Table 2-1 presents the committee’s
proposed indicators in concert with the committee’s framework and objec-
tives. The next three chapters of the report describe those objectives and
indicators.
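
To make the goal-objective-indicator hierarchy of Table 2-1 concrete, the sketch below shows one possible way such a structure might be represented in software that tracks indicator values over time. It is purely illustrative and not part of the committee's proposal; the class and field names are invented assumptions, and only the descriptive wording is drawn from Table 2-1.

from dataclasses import dataclass, field

@dataclass
class Indicator:
    code: str                 # e.g., "1.1.1"
    description: str
    component: str            # "Input", "Process", "Environment", or "Outcome"
    values_by_year: dict = field(default_factory=dict)  # year -> measured value

@dataclass
class Objective:
    code: str                 # e.g., "1.1"
    description: str
    indicators: list = field(default_factory=list)

@dataclass
class Goal:
    number: int
    title: str
    objectives: list = field(default_factory=list)

# Illustrative fragment of Goal 1, using wording from Table 2-1.
goal1 = Goal(1, "Increase Students' Mastery of STEM Concepts and Skills", [
    Objective("1.1",
              "Use of evidence-based STEM educational practices "
              "both in and outside of classrooms",
              [Indicator("1.1.1",
                         "Use of evidence-based STEM educational practices "
                         "in course development and delivery",
                         "Process"),
               Indicator("1.1.2",
                         "Use of evidence-based STEM educational practices "
                         "outside the classroom",
                         "Process")]),
])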

TABLE 2-1  Framework, Objectives, and Indicators

GOAL 1: Increase Students' Mastery of STEM Concepts and Skills by Engaging Them in Evidence-Based STEM Educational Practices and Programs

Framework: Process
Objective 1.1  Use of evidence-based STEM educational practices both in and outside of classrooms
  Indicator 1.1.1  Use of evidence-based STEM educational practices in course development and delivery
  Indicator 1.1.2  Use of evidence-based STEM educational practices outside the classroom

Framework: Environment
Objective 1.2  Existence and use of supports that help STEM instructors use evidence-based learning experiences
  Indicator 1.2.1  Extent of instructors' involvement in professional development
  Indicator 1.2.2  Availability of support or incentives for evidence-based course development or course redesign

Framework: Environment
Objective 1.3  An institutional culture that values undergraduate STEM instruction
  Indicator 1.3.1  Use of valid measures of teaching effectiveness
  Indicator 1.3.2  Consideration of evidence-based teaching in personnel decisions by departments and institutions

Framework: Process
Objective 1.4  Continuous improvement in STEM teaching and learning
  No indicators: see "Challenges of Measuring Continuous Improvement" in Chapter 3

GOAL 2: Strive for Equity, Diversity, and Inclusion of STEM Students and Instructors by Providing Equitable Opportunities for Access and Success

Framework: Input
Objective 2.1  Equity of access to high-quality undergraduate STEM educational programs and experiences
  Indicator 2.1.1  Institutional structures, policies, and practices that strengthen STEM readiness for entering and enrolled college students
  Indicator 2.1.2  Entrance to and persistence in STEM academic programs
  Indicator 2.1.3  Equitable student participation in evidence-based STEM educational practices

Framework: Outcome
Objective 2.2  Representational diversity among STEM credential earners
  Indicator 2.2.1  Diversity of STEM degree and certificate earners in comparison with diversity of degree and certificate earners in all fields
  Indicator 2.2.2  Diversity of students who transfer from 2-year to 4-year STEM programs in comparison with diversity of students in 2-year STEM programs
  Indicator 2.2.3  Time to degree for students in STEM academic programs

Framework: Environment
Objective 2.3  Representational diversity among STEM instructors
  Indicator 2.3.1  Diversity of STEM instructors in comparison with diversity of STEM graduate degree holders
  Indicator 2.3.2  Diversity of STEM graduate student instructors in comparison with diversity of STEM graduate students

Framework: Environment
Objective 2.4  Inclusive environments in institutions and STEM departments
  Indicator 2.4.1  Students pursuing STEM credentials feel included and supported in their academic programs and departments
  Indicator 2.4.2  Instructors teaching courses in STEM disciplines feel supported and included in their departments
  Indicator 2.4.3  Institutional practices are culturally responsive, inclusive, and consistent across the institution

GOAL 3: Ensure Adequate Numbers of STEM Professionals by Increasing Completion of STEM Credentials as Needed in the Different STEM Disciplines

Framework: Process
Objective 3.1  Adequate foundational preparation for STEM for all students
  Indicator 3.1.1  Completion of foundational courses, including developmental education courses, to ensure STEM program readiness

Framework: Process
Objective 3.2  Successful navigation into and through STEM programs of study
  Indicator 3.2.1  Retention in STEM programs, course to course and year to year
  Indicator 3.2.2  Transfers from 2-year to 4-year STEM programs in comparison with transfers to all 4-year programs

Framework: Outcome
Objective 3.3  STEM credential attainment
  Indicator 3.3.1  Number of students who attain STEM credentials over time, disaggregated by institution type, transfer status, and demographic characteristics

CONCLUSION

In this chapter, the committee has proposed a conceptual framework
for the indicator system. Beginning with a model of higher education as
a complex system, the committee identified three overarching goals for
improving the quality of undergraduate STEM education. It then built on
the federal STEM education strategic plan and drew on relevant literature
to articulate each goal into more specific objectives. The specific targets
for improvement reflected in these goals and objectives provide a focus for
monitoring the status of undergraduate STEM education over time.

CONCLUSION 1  Improving the quality and impact of undergraduate
STEM education will require progress toward three overarching goals:

Goal 1: Increase students' mastery of STEM concepts and skills by
engaging them in evidence-based STEM educational practices
and programs.
Goal 2: Strive for equity, diversity, and inclusion of STEM students
and instructors by providing equitable opportunities for access
and success.
Goal 3: Ensure adequate numbers of STEM professionals by increas-
ing completion of STEM credentials as needed in the different
STEM disciplines.

REFERENCES
Association of American Colleges & Universities. (2015). Committing to Equity and Inclusive
Excellence: A Campus Guide for Self-Study and Planning. Washington, DC: Author.
Astin, A.W. (1993). What Matters in College? Four Critical Years Revisited. San Francisco,
CA: Jossey-Bass.

Austin, A. (2011). Promoting Evidence-Based Change in Undergraduate Science Educa-


tion. Paper commissioned by the Board on Science Education. Available: http://sites.
nationalacademies.org/cs/groups/dbassesite/documents/webpage/dbasse_072578.pdf
[June 2016].
Bettinger, E. (2010). To be or not to be: Major choices in budding scientists. In C.T. Clotfelter
(Ed.), American Universities in a Global Market (pp. 69–98). Chicago, IL: University of
Chicago Press.
Braxton, J.M. (Ed.). (2000). Reworking the Student Departure Puzzle. Nashville, TN:
Vanderbilt University Press.
Carnevale, A.P., Smith, N., and Melton, M. (2011). STEM: Science, Technology, Engineer-
ing, and Mathematics. Washington, DC: Georgetown University Center on Education
and the Workforce. Available: https://cew.georgetown.edu/wp-content/uploads/2014/11/
stem-complete.pdf [July 2017].
Chang, M.J., Cerna, O., Han, J., and Sáenz, V. (2008). The contradictory roles of institutional
status in retaining underrepresented minorities in biomedical and behavioral science
majors. The Review of Higher Education, 31(4), 433–464.
Chen, X., and Soldner, M. (2013). STEM Attrition: College Students’ Paths into and out of
STEM Fields. Washington, DC: U.S. Department of Education.
Correll, S.J., Seymour, E., and Hewitt, N.M. (1997). Talking about leaving: Why undergradu-
ates leave the sciences. Contemporary Sociology, 26, 644. doi: 10.2307/2655673.
Elrod, S., and Kezar, A. (2015). Increasing student success in STEM. Peer Review, 17(2). Avail-
able: https://www.aacu.org/peerreview/2015/spring/elrod-kezar [June 2016].
Elrod, S., and Kezar, A. (2016). Increasing Student Success in STEM: A Guide to Systemic
Institutional Change. Washington, DC: Association of American Colleges & Universities.
Fairweather, J. (2012). Linking Evidence and Promising Practices in Science, Technology,
Engineering, and Mathematics (STEM) Undergraduate Education: A Status Report
for The National Academies National Research Council Board on Science Education.
Available: http://sites.nationalacademies.org/cs/groups/dbassesite/documents/webpage/
dbasse_072637.pdf [June 2016].
Freeman, R.B., and Huang, W. (2014a). Collaborating with People Like Me: Ethnic Co-
Authorship within the U.S. NBER Working Paper No. 19905. Cambridge, MA: National
Bureau of Economic Research. Available: http://citeseerx.ist.psu.edu/viewdoc/download?
doi=10.1.1.702.2502&rep=rep1&type=pdf [February 2018].
Freeman, R.B., and Huang, W. (2014b). Strength in diversity. Nature, 513, 305.
Henderson, C., Beach, A., and Finkelstein, N. (2011). Facilitating change in undergraduate
STEM instructional practices: An analytic review of the literature. Journal of Research
in Science Teaching, 48(8), 952–984. doi: 10.1002/tea.20439.
Katz, D., and Kahn, R.L. (1966). The Social Psychology of Organizations. New York: John
Wiley & Sons.
Katz, D., and Kahn, R.L. (1978). The Social Psychology of Organizations. 2nd ed. New York:
John Wiley & Sons.
Kober, N. (2015). Reaching Students: What Research Says About Effective Instruction in
Undergraduate Science and Engineering. Washington, DC: The National Academies
Press. Available: http://www.nap.edu/catalog/18687/reaching-students-what-research-
says-about-effective-instruction-in-undergraduate [June 2016].
Kuh, G.D., Kinzie, J., Buckley, J., Bridges, B., and Hayek, J.C. (2007). Piecing Together the
Student Success Puzzle: Research, Propositions, and Recommendations. (ASHE Higher
Education Report, vol. 32, issue 5). San Francisco, CA: Jossey-Bass.

National Academies of Sciences, Engineering, and Medicine. (2016a). Barriers and Opportuni-
ties for 2-Year and 4-Year STEM Degrees: Systemic Change to Support Students’ Diverse
Pathways. Washington, DC: The National Academies Press. Available: http://www.nap.
edu/catalog/21739/barriers-and-opportunities-for-2-year-and-4-year-stem-degrees [June
2016].
National Academies of Sciences, Engineering, and Medicine. (2016b). Science Literacy:
Concepts, Contexts, and Consequences. Washington, DC: The National Academies Press.
National Academy of Sciences, National Academy of Engineering, and Institute of Medicine.
(2011). Expanding Underrepresented Minority Participation: America’s Science and
Technology Talent at the Crossroads. Washington, DC: The National Academies Press.
Available: http://www.nap.edu/catalog/12984/expanding-underrepresented-minority-
participation-americas-science-and-technology-talent-at [June 2016].
National Research Council. (2011). Successful K-12 STEM Education: Identifying Effective
Approaches in Science, Technology, Engineering, and Mathematics (STEM). Washing-
ton, DC: The National Academies Press. Available: http://www.nap.edu/catalog/13158/
successful-k-12-stem-education-identifying-effective-approaches-in-science [June 2016].
National Research Council. (2012). Discipline-Based Education Research: Understanding
and Improving Learning in Undergraduate Science and Engineering. Washington, DC:
The National Academies Press. Available: https://www.nap.edu/catalog/13362/discipline-
based-education-research-understanding-and-improving-learning-in-undergraduate [July
2017].
National Research Council. (2013). Monitoring Progress Toward Successful K-12 STEM Edu-
cation: A Nation Advancing? Washington, DC: The National Academies Press. Available:
https://www.nap.edu/search/?term=Monitoring+Progress+Toward+Successful+K-12+
STEM+Education%3A+A+Nation+Advancing%3F.+&x=16&y=6 [July 2017].
National Research Council. (2015). Enhancing the Effectiveness of Team Science. Washington,
DC: The National Academies Press. Available: http://www.nap.edu/catalog/19007/
enhancing-the-effectiveness-of-team-science [June 2016].
National Science and Technology Council. (2013). Federal Science, Technology, Engineering,
and Mathematics (STEM) 5-Year Strategic Plan. Washington, DC: Author. Available:
https://www.whitehouse.gov/sites/default/files/microsites/ostp/stem_stratplan_2013.pdf
[June 2016].
Odden, A. (1990). Educational indicators in the United States: The need for analysis. Educa-
tional Researcher, 19(5), 24–29.
OECD. (2016). OECD Economic Surveys: United States. Available: http://www.oecd.org/eco/
surveys/United-States-2016-overview.pdf [June 2016].
President’s Council of Advisors on Science and Technology. (2012). Engage to Excel: Produc-
ing One Million Additional College Graduates with Degrees in Science, Technology,
Engineering and Mathematics. Washington, DC: Author. Available: https://www.white
house.gov/sites/default/files/microsites/ostp/pcast-engage-to-excel-final_feb.pdf [March
2016].
Rothwell, J. (2013). The Hidden STEM Economy. Metropolitan Policy Program at Brookings
Institution. Available: http://www.brookings.edu/~/media/research/files/reports/2013/06/10-
stem-economy-rothwell/thehiddenstemeconomy610.pdf [April 2015].
Seymour, E., and Hewitt, N. (1997). Talking about Leaving: Why Undergraduates Leave the
Sciences. Boulder, CO: Westview Press.
Shavelson, R.J., McDonnell, L., and Oakes, J. (1991). What are educational indicators and
indicator systems? Practical Assessment, Research and Evaluation, 2(11). Available:
http://pareonline.net/getvn.asp?v=2&n=11 [July 2017].
Summers, M.F., and Hrabowski III, F.A. (2006). Preparing minority scientists and engineers.
Science, 311(5769), 1870–1871.

Tinto, V. (1993). Leaving College: Rethinking the Causes and Cures of Student Attrition.
(second ed.). Chicago, IL: University of Chicago Press.
Weaver, G.C., Burgess, W.D., Childress, A.L., and Slakey, L. (2015). Transforming Institu-
tions: Undergraduate STEM Education for the 21st Century. West Lafayette, IN: Purdue
University Press.
Withem, K., Malcom-Piqueux, L., Dowd, A.C., and Bensimon, E.M. (2015). America’s Unmet
Promise: The Imperative for Equity in Higher Education. Washington, DC: Association of
American Colleges & Universities.
Xie, Y., Fang, M., and Shauman, K. (2015). STEM education. Annual Review of Sociology,
41, 331–357.
Xie, Y., and Killewald, A.A. (2012). Is American Science in Decline? Cambridge, MA: Harvard
University Press.

Goal 1: Increase Students' Mastery of STEM Concepts and Skills

As noted in Chapter 1, the committee does not propose indicators to
directly measure student learning. Although some disciplines have
begun to identify the core concepts and skills that all undergradu-
ates should master (e.g., Arum, Roksa, and Cook, 2016; Brewer and Smith,
2011) and develop assessments of them, there is currently no agreement
on a uniform set of STEM-wide concepts and skills, nor on standardized
assessments of such concepts and skills. Rather, the committee expects that
engaging students in evidence-based STEM educational practices (Goal 1)
and striving for equity, diversity, and inclusion (Goal 2) will increase all
students’ mastery of STEM concepts and skills. Advancing these goals is
expected to improve persistence among students already interested in STEM
and attract other students to STEM majors, thus increasing the number
of students earning STEM credentials and ensuring adequate numbers
of STEM professionals (Goal 3). These expectations echo the President’s
Council of Advisors on Science and Technology (2012); it recommended
widespread adoption of evidence-based teaching and learning approaches
to increase the number of STEM graduates and ensure an adequate supply
of STEM professionals.
The major sections of this chapter address the committee’s four objec-
tives for Goal 1.

1.1: Use of evidence-based STEM educational practices both in and
outside of classrooms
1.2: Existence and use of supports that help STEM instructors use
evidence-based learning experiences

1.3: An institutional culture that values undergraduate STEM instruction
1.4: Continuous improvement in STEM teaching and learning

In Appendix B, the committee offers potential measures for some of the
indicators: specific quantitative variables that provide a reliable method for
monitoring progress toward achieving the objective.
The systems view reflected in these four objectives aligns with current
approaches to systemic reform of undergraduate STEM education. For
example, the Association of American Universities Undergraduate STEM
Education Initiative is guided by a framework placing pedagogy at the
center surrounded by scaffolding and cultural change (Miller and Trapani,
2016); this scaffolding includes providing professional development and
ongoing collection and ongoing analysis of data to evaluate and improve
program performance.
Each section of the chapter summarizes the research that demonstrates
the importance of the objective for improving the quality of undergradu-
ate STEM education and presents the committee’s proposed indicators to
monitor progress toward that objective. For each indicator, the committee
discusses the meaning of the indicator and identifies the additional research
needed to fully develop the indicators: see Table 3-1.

TABLE 3-1  Objectives and Indicators of Increasing Students' Mastery of STEM Concepts and Skills

Objective 1.1  Use of evidence-based educational practices both in and outside of classrooms
  Indicator 1.1.1  Use of evidence-based educational practices in course development and delivery
  Indicator 1.1.2  Use of evidence-based educational practices beyond the classroom

Objective 1.2  Existence and use of supports that help STEM instructors use evidence-based educational practices
  Indicator 1.2.1  Extent of instructors' involvement in professional development
  Indicator 1.2.2  Availability of support or incentives for evidence-based course development or course redesign

Objective 1.3  Institutional culture that values undergraduate STEM instruction
  Indicator 1.3.1  Use of valid measures of teaching effectiveness
  Indicator 1.3.2  Consideration of evidence-based teaching in personnel decisions by departments and institutions

Objective 1.4  Continuous improvement in STEM teaching and learning
  No indicators: see "Challenges of Measuring Continuous Improvement" in Chapter 3

OBJECTIVE 1.1: USE OF EVIDENCE-BASED EDUCATIONAL
PRACTICES BOTH IN AND OUTSIDE OF CLASSROOMS

Importance of the Objective


Students' mastery of STEM concepts and skills is supported by
evidence-based STEM educational practices. As discussed in Chapters 1 and
2, there is a growing body of work that presents and reviews practices that
are supported by rigorous research (e.g., Freeman et al., 2014). The com-
mittee expects that this body of work will continue to expand and evolve.
For this reason, we do not provide a prescriptive list of those practices that
we currently view as “evidence based.” Rather, we define them as educa-
tional practices meeting at least one of the following criteria:

• the preponderance of published literature suggests that it will be
effective across settings or in the specific local setting, or
• the practice is built explicitly from accepted theories of teaching
and learning and is faithful to best practices of implementation, or
• the practice is supported by locally collected, valid, and reliable
evidence, based on a sound methodological research approach,
suggesting that it is effective.

In this chapter, we use the term "evidence-based educational practices"
to represent the variety of educational practices that meet the above criteria.
These various practices have been shown to increase students’ mastery of
STEM concepts and skills, as well as promote positive attitudes toward
learning, and persistence toward a degree (Fairweather, 2012; Kober, 2015;
National Research Council, 2012a). These practices are not restricted to
classroom environments; there is also emerging evidence suggesting that
programs outside classrooms, such as internships and undergraduate re-
search, can benefit college students from many backgrounds (Mayhew et
al., 2016; National Academies of Sciences, Engineering, and Medicine,
2017). In this section the committee presents examples of such practices and
reviews research demonstrating their effectiveness for supporting learning
and persistence, both generally and in STEM fields. These examples are not
intended to be exhaustive but, rather, descriptive.

In the Classroom
Active Learning as a General Class of Evidence-Based Practices  There is
no generally agreed-upon definition of “active learning” in the research
literature, but there are characteristics that such approaches have in com-
mon. In this report, the committee uses the term “active learning” to refer
to that class of pedagogical practices that cognitively engage students in
building understanding at the highest levels of Bloom’s taxonomy (Bloom,
Krathwohl, and Masia, 1964; Anderson, Krathwohl, and Bloom, 2001). Ac-
tive learning instructional practices have been shown to improve students’
academic achievement both generally, across all fields of study (Mayhew et
al., 2016), and in STEM specifically (National Research Council, 2012a).
These practices include collaborative classroom activities, fast feedback
using classroom response systems (e.g., clickers), problem-based learning,
and peer instruction (Bonwell and Eison, 1991; Prince, 2004): see Box 3-1.
The core idea behind all active learning approaches is that learning requires
mental activity, which is more likely to occur when students are engaged
in activities or discussions focused on the content than when students are

passively listening to an instructor lecture. There is ample evidence showing
that engaging students in active learning improves academic achievement
in comparison with traditional teaching methods. In an updated review
of research on how college affects students, Mayhew et al. (2016, p. 51)
reported:

. . . across active learning methods and disciplinary fields of study, the
weight of evidence has found students who actively engage in the learning
process gained greater subject matter competence and were more adept at
higher order thinking in the discipline than peers who were less actively
engaged in their learning.

BOX 3-1
Peer Instruction: An Example of Active Learning

Peer instruction has been successfully used to engage students in active
learning in introductory engineering and physics courses (Borrego et al., 2013;
Froyd et al., 2013; Henderson and Dancy, 2009; Mazur, 1997). In peer instruc-
tion, class sessions are broken into a series of 15- to 20-minute segments and
include rapid formative assessment (see Box 3-3, below). Each segment begins
with a short lecture on the topic of interest. The instructor then stops lecturing and
poses a short assessment question, referred to as a ConcepTest, designed to
expose common student difficulties in understanding a single concept. Students
think about the question, come up with their own answers, and then discuss their
responses for a few minutes with a small group of peers as they seek to reach
consensus on the correct answer (Mazur, 1997). The instructor then shares the
distribution of responses with the class and asks students to discuss their answers
with one or two students around them. Students respond again to the question.
The segment concludes with the instructor leading a wrap-up discussion. Based
on analysis of years of statistics on student performance, Crouch and Mazur
(2001) found that students taught with peer instruction have greater mastery of
conceptual reasoning and quantitative problem-solving skills than those in tradi-
tionally taught classes. More recent work by Lasry, Mazur, and Watkins (2008)
found similar improvements in knowledge and skills, as well as decreased attrition
in introductory physics courses, among 2-year college students taught with peer
instruction.

Much of the evidence of the effectiveness of active learning approaches


is based on studies focusing on specific STEM disciplines, referred to as
discipline-based education research. This research has shown that active
learning increases students’ STEM content knowledge, conceptual under-
standing, and problem-solving skills (National Research Council, 2012a;
Faust and Paulson, 1998; Prince, 2004). Educational practices shown to
be effective include interactive lectures in which students are active par-
ticipants, collaborative learning activities, lecture-tutorial approaches, and
laboratory experiences that incorporate realistic scientific practices and the
use of technology (National Research Council, 2012a). In engineering, for
example, first-year courses that engage students in teams to solve real-world
engineering problems have been shown to increase student persistence in
the field (Fortenberry et al., 2007; Lichtenstein et al., 2014).
A recent meta-analysis of studies spanning STEM disciplines provides
some of the most conclusive evidence to date that active learning increases
student performance in science, engineering, and mathematics (Freeman
et al., 2014). Moreover, some research shows that, in comparison with
a traditional lecture course, students from traditionally underrepresented
groups taking a course with active learning methods are less likely to fail
or withdraw (Haak et al., 2011). Appropriate use of active learning and
formative assessment increases achievement and persistence for all students,
but particularly for traditionally underrepresented students (Freeman et
al., 2014). The use of evidence-based practices is especially important for
improving outcomes for students in the critical high-enrollment “gateway”
courses that are required for STEM majors (Gasiewski et al., 2012). Cur-
rently, these courses often act to discourage students from persisting in
STEM majors (President’s Council of Advisors on Science and Technology,
2012).

Formative Assessment  Rapid feedback to students and instructors on stu-


dents’ learning progress is another example of an evidence-based practice,
and it is also often a component of active learning instruction. Such feed-

back is provided through formative assessment. Formative assessments are


used to diagnose where a student is relative to learning goals and to inform
students and instructors of any actions needed to address learning gaps. In
contrast to summative assessment, which aims to evaluate student learn-
ing after an extended period of instruction and has stakes attached (e.g., a
grade, a degree), formative assessments are administered in the course of
instruction and have low or no stakes attached to them.
As with active learning, there is significant agreement in the research lit-
erature about the importance of formative assessment processes for improv-
ing students’ acquisition of STEM concepts and skills (National Research
Council, 2012a). The core idea is that students need high-quality feedback
about their learning in order to improve. Based on an extensive review of
the formative assessment literature, Black and Wiliam (1998) concluded
that the positive effect of formative assessment on student learning is
larger than for most other educational innovations. There are many ways
for instructors to use formative assessment. For example, in one study of
several interactive geoscience classrooms, the instructors lectured for 10–20
minutes and then posted a selected-response formative assessment on the
blackboard or screen. Based on their evaluation of student responses, the
instructors then either led a whole-class discussion, briefly explained the
reason for the correct answer and continued lecturing, or asked students to
discuss the reasons for their answers with their neighbors (peer instruction;
see Box 3-1). Students in these interactive classrooms showed a substantial
improvement in understanding of key geoscience concepts (McConnell et
al., 2006). Student response systems (clickers) are an increasingly popular
way for instructors to promote formative assessment in large-lecture intro-
ductory STEM courses, in both 2-year and 4-year institutions: see Box 3-2.
A recent study by Roediger and colleagues (2011) indicates that
low-stakes assessments can support learning whether or not the results
are used for formative purposes (i.e., to guide changes in teaching
and learning). In three classroom-based experiments, the authors found
that frequent, low-stakes assessments provided students with practice
in trying to remember (retrieving) course material. The retrieval process
promoted learning and retention of course material.

Outside the Classroom


There is emerging evidence that programs, support services, and other
experiences outside the classroom—including advising, mentoring, intern-
ships, and undergraduate research experiences—can support students’
mastery of STEM concepts and skills, as well as persistence and positive
attitudes toward STEM learning (National Academies of Sciences, Engi-
neering, and Medicine, 2016, 2017).


BOX 3-2
Formative Assessment in a 2-Year College Setting

When Jessica Smay took a position teaching geosciences and astronomy


at San Jose City College in California in 2006, she was already persuaded of the
value of ongoing formative assessment to support students’ learning. Especially
because of the college’s highly diverse student body, Smay wanted to further
refine some of her teaching strategies, including formative assessment.
One approach Smay used to determine how well her students were learning
was simply to listen to students’ reasoning as they discussed a clicker question
with a partner or worked in small groups on a tutorial. Smay would “walk around
the classroom and see how the students were talking about or answering the
questions—their thought processes,” she says. If a student seemed confused,
“I would say, ‘How did you get this answer?’ and they would talk me through it.”
Based on these discussions, Smay realized that in some cases she was expecting
her students to “make too big of a leap” in their progression toward more accurate
understanding, so she would revise a learning activity to provide students with
more scaffolding. Some of the student misconceptions that emerged during these
discussions also helped her to design better ConcepTests (see Box 3-1, above).

SOURCE: Interview with Jessica Smay reported in Kober (2015, p. 121).

Such experiences are sometimes referred to as “co-curricular” activities, but it is important to note that internships are
often part of the core curriculum in undergraduate engineering, required for
completion of an engineering degree. Below, we present examples of vari-
ous “outside the classroom” experiences that can be part of undergraduate
STEM education and review research on their effectiveness. These examples
overlap to some degree with a group of educational approaches referred to
as “high-impact practices” (see Box 3-3). As in the previous section, these
examples are not intended to be exhaustive but, rather, descriptive.
In a review of evaluations of interventions specifically designed to en-
courage persistence and success in undergraduate STEM, Estrada (2014,
p. 5) found “emerging evidence that many programs result in participants
pursuing STEM careers at higher rates than those who do not partici-
pate in interventions.” Evaluation studies have found that summer bridge
programs (Packard, 2016; Strayhorn, 2011) and living-learning programs
(Brower and Inkelas, 2010) facilitate intended STEM majors’ successful
transition into college and persistence in STEM, particularly for women
and students of color. Internships, student professional groups, and peer
tutoring programs can also have a positive effect on STEM outcomes by
promoting STEM learning, expanding peer and professional networks, and developing students' scientific identity (National Academies of Sciences, Engineering, and Medicine, 2016; Eagan, 2013). A recent review of the research on undergraduate research experiences concluded that they are beneficial for all students and help to validate disciplinary identity among students from historically underrepresented groups (National Academies of Sciences, Engineering, and Medicine, 2017).
Experiences outside the classroom can help all college students develop a basic understanding of STEM concepts and processes, in addition to their value for STEM majors. The higher education community has placed a renewed emphasis on the importance of developing such basic understanding of STEM for all college students, including it among the goals of a liberal arts education (Association of American Colleges & Universities (AAC&U), 2007; Savage, 2014). STEM experiences outside the classroom can develop students' knowledge of the physical and natural worlds, quantitative literacy, and critical thinking and analytical skills—all of which are among what AAC&U calls essential learning outcomes for the 21st century.

BOX 3-3
High-Impact Practices

A group of instructional approaches referred to as high-impact practices (Kuh, 2008) overlaps somewhat with the examples of evidence-based STEM educational practices discussed in this chapter. The National Survey of Student Engagement (NSSE) asks first-year and senior students about the extent to which they have participated in various types of classroom and out-of-classroom learning practices (see Chapter 6 for details of this survey). It also asks them to report on their learning and development gains in areas such as analytical reasoning and writing. Analyzing NSSE data, Kuh (2008) identified correlations between participation in certain learning activities and self-reported gains in learning and development. Based on these correlations, he identified 10 practices as “High-Impact Practices” (HIPs) (Kuh, 2008):

  1. first-year seminars and experiences
  2. common intellectual experiences
  3. learning communities
  4. writing-intensive courses
  5. collaborative assignments and projects
  6. undergraduate research
  7. diversity/global learning
  8. service learning and community-based learning
  9. internships, and
10. capstone courses and projects.

These practices are general descriptions of teaching and learning activities.


Instructors may implement them in various ways, resulting in wide variation in
their effectiveness for supporting students' learning and retention. In addition, the evidence for their impact is based on students' self-reports of learning gains. Self-report surveys have well-known limitations, such as being subject to social desirability bias (e.g., answering in ways that show oneself in the best light) and to other forms of response bias (e.g., differences in how individuals interpret the meaning of rating scales); see Chapter 6 for further discussion. Although some subsequent studies
of high-impact practices have also relied on self-reported learning gains (e.g.,
Kuh and O’Donnell, 2013; Finley and McNair, 2013), a few studies of high-impact
practices have used more objective measures of student learning outcomes. For
example, analyzing data from the Wabash National Study of Liberal Arts Educa-
tion, Pascarella and colleagues (2014) found that participation in one practice—
diversity/global learning experience—was related to gains in critical thinking, as
measured by the standardized, 32-item critical thinking test from the Collegiate
Assessment of Academic Proficiency. In addition, Brownell and Swaner (2010)
conducted a review of peer-reviewed, published research on student outcomes
related to 5 of Kuh’s original 10 practices: first-year seminars, learning communi-
ties, service learning, undergraduate research, and capstone experiences. They
found that four of them—first-year seminars, learning communities, undergraduate
research, and service learning—showed some evidence of positive effects on a
range of student learning outcomes, including persistence, grades, graduation,
and development of such skills as civic engagement and critical thinking.

Many colleges and universities have devised programs and activities (e.g., first-year seminars, sustained enrichment programs, intensive intersession experiences) that require students to engage with scientific evidence and evaluate scientific claims (see, e.g., Savage and Jude, 2014). For both STEM majors and non-STEM majors, such experiences develop STEM competencies and provide opportunities for students to apply these competencies to complex, real-world problems (Savage and Jude, 2014; Estrada, 2014).

Advising  Advising relationships are intended to support students’ aca-


demic progress and degree completion and are important for all students,
especially those majoring in STEM. An effective advisor provides accurate
information about general education and degree requirements, guides stu-
dents through the academic planning process, and ensures that students
complete the administrative tasks necessary to move through the higher
education institution in a timely manner (Baker and Griffin, 2010; Drake,
2011; Pizzolato, 2008). Light (2004) suggested that good advising was an
important characteristic of a successful college experience.


Research suggests that the quality of advising influences student suc-


cess. For example, a study of more than 1,000 public university students
reported that students’ satisfaction with the quality of the advising they
received was positively related to retention to the second year, partly be-
cause it was associated with a higher grade point average during the first
year (Metzner, 1989). More generally, Pascarella and Terenzini (2005)
found that “research consistently indicates that academic advising can play
a role in students’ decisions to persist and in their chances of graduating”
(Pascarella and Terenzini, 2005, p. 404); it was not clear whether advising
had a direct or indirect effect on students.
Some studies suggest that high-quality advising is particularly valuable
for 2-year college students. For example, Seidman (1991) found that 2-year
college students who received academic advising three times during their
first semester to discuss course planning and involvement opportunities
persisted at a rate 20 percent higher than those who participated only in the
first-year orientation program. In another study, focusing
on full-time 2-year college students in California, Bahr (2008) found that
academic advising improved students’ success in completing developmental
mathematics and transferring to a 4-year institution. Bahr found that the
benefits of advising were greater for students underprepared in mathematics
than for those who were ready for college-level mathematics. Field research
in multiple community colleges suggests that they can best facilitate student
success by redesigning curriculum, instruction, and student supports around
coherent programs of study (Bailey, Jaggars, and Jenkins, 2015); high-
quality advising is a central element of this “guided pathways” approach
(see Chapter 5 for further discussion).
Although better research on the direct effects of academic advising on
student outcomes is needed, academic advising is consistently associated
with variables that predict student success—namely, student satisfaction
with the college experience, effective educational and career decision mak-
ing, student use of campus support services, student-faculty contact outside
the classroom, and student mentoring (Habley, Bloom, and Robbins, 2012).
The quality of advising is critically important for all students, but par-
ticularly for STEM majors. When advisors give misinformation regarding
rigid course sequences or career opportunities, research shows a link to attrition from STEM majors (Haag et al., 2007). For example, a student who neglects to enroll in the appropriate mathematics course in a given semester might delay the completion of her degree by a semester or more
due to the nature of STEM course prerequisites. High-quality advising
enables students to make good academic decisions based on accurate in-
formation, contributing to the successful completion of the STEM degree
(Baker and Griffin, 2010).
Appropriate advising on STEM is important not only for entering stu-

dents who intend to pursue STEM majors, but also for all students. Many
students transfer into a STEM major after initially focusing on another field
of study (Chen, 2013; National Academies of Sciences, Engineering, and
Medicine, 2016). Such transfers suggest that many students with interest
and ability in STEM would benefit from more guidance and information
about STEM programs and careers.

Mentoring  Effective mentoring practices can also support students’ suc-


cess in undergraduate STEM. Mentoring has been defined as a concept, a
process, a developmental experience, or a set of activities (Crisp and Cruz,
2009). It may involve formal or informal interactions that occur only briefly
or are sustained over time. Effective mentors can help students by bringing
together ideas from different contexts to promote deeper learning. Although
most studies report that mentoring has a positive effect on academic suc-
cess, the varying definitions of roles and interactions have made it difficult
to fully evaluate the impact of mentoring (Crisp and Cruz, 2009; National
Academies of Sciences, Engineering, and Medicine, 2017).
Often, an individual faculty member mentors a student, with ongoing
interactions centered on the student’s personal and professional develop-
ment (Baker and Griffin, 2010; Packard, 2016). Effective mentors of STEM
students go beyond information sharing; they provide psychosocial support
to students, assist students in building key STEM competencies (Packard,
2016), and act as a sounding board as students work through academic
and career decisions (Baker and Griffin, 2010). High-quality mentors also
use their resources (social capital) and their positions in the institution and
in their STEM fields to provide valuable experiences and opportunities to
help their mentees meet personal and career goals (Baker and Griffin, 2010;
Packard, 2016). Though mentoring requires large investments of faculty
time and effort, it is a valuable practice, with positive effects on the out-
comes of STEM majors, especially those from historically underrepresented
populations (Packard, 2016).
Mentoring may also be provided by peers, such as STEM majors en-
rolled in advanced classes. Such peer mentors typically receive guidance
from faculty to support first- and second-year students. Combining the
concept of peer mentoring with the special needs of historically underrep-
resented populations, Montana State University matches incoming under-
represented and/or first-generation students intending to major in STEM
with older underrepresented and/or first-generation STEM students who
are succeeding at the university.1 One recent study found that students who
received peer mentoring experienced increased satisfaction with, and commitment to, a STEM major (Holland, Major, and Orvis, 2012).

1 See http://www.montana.edu/empower/mentoring.html [November 2017].

Another study found that students who received peer mentoring reported increased
sense of belonging and science identity, as well as improved self-efficacy,
all factors that are important for increasing persistence of underrepresented
minorities in STEM (Trujillo et al., 2015). Still, a recent review noted that
research on undergraduate mentoring programs needs more rigorous re-
search designs (Gershenfeld, 2014).

Limited Use of Evidence-Based STEM Educational Practices


Despite the growing body of research supporting the effectiveness of
evidence-based educational practices in and outside the classroom, they
have not been widely implemented to date (National Research Council,
2012b; Weaver et al., 2016). For example, Eagan (2016) looked at teaching
practices among groups of faculty from the physical and natural sciences,
engineering, and mathematics, along with social sciences and humanities.
He found a persistent gap in the use of student-centered teaching techniques
between faculty in the natural and physical sciences and engineering and
those in the social sciences and the arts and humanities.
Recognizing the potential of evidence-based educational practices to
significantly increase students’ learning, federal policy makers (President’s
Council of Advisors on Science and Technology, 2012; National Science
and Technology Council, 2013) and disciplinary associations (e.g., Saxe
and Braddy, 2016) continue to recommend wider use of them. Timely, na-
tionally representative information on the extent to which these practices
are being used would help policy makers, associations, institutions, and
educators devise effective strategies to promote their use and thus increase
students’ learning and persistence in STEM.

Proposed Indicators
Given what is known about the value of evidence-based STEM educa-
tional practices and the relative lack of their widespread adoption, the com-
mittee proposes two indicators to monitor progress toward the objective of
using evidence-based practices in and outside of classrooms.

Indicator 1.1.1: Use of Evidence-Based Practices in Course Development and Delivery
In the National Science and Technology Council’s 5-year strategic plan
for STEM education (2013), the first recommendation for strengthening
undergraduate education was to “identify and broaden implementation of
evidence-based instructional practices” (p. 29). Research shows that STEM
educators (faculty members, graduate student instructors, adjunct instruc-

tors or others) are indeed aware of instructional alternatives to extensive


lecturing (Henderson, Dancy, and Niewiadomska-Bugaj, 2012) and that
such alternatives are being made available in forms more accessible to
educators (Kober, 2015). But knowing of these practices is not the same
as using them, especially if the departmental or institutional context does
not provide support for that use (Weaver et al., 2016; see further discus-
sion below). Surveys conducted within different STEM disciplines suggest
that educators have made only limited use of research-based approaches to
date (e.g., Henderson and Dancy, 2009). However, little is known about
the extent to which STEM educators nationally are drawing on research
to redesign and deliver their courses. Indicator 1.1.1 is designed to fill this
gap, measuring the extent to which all STEM instructors (tenured and
tenure-track faculty, part-time and adjunct faculty, instructors, and gradu-
ate student instructors) incorporate evidence-based educational practices in
course development and delivery.

Indicator 1.1.2: Use of Evidence-Based Practices Outside the Classroom


Educational experiences outside the classroom also support students’
mastery of STEM concepts and skills, retention in STEM, and, in some
cases, entry into STEM careers (Estrada, 2014; National Academies of
Sciences, Engineering, and Medicine, 2017). In particular, there is evidence
that undergraduate research experiences are beneficial for all students and
help underrepresented minority students develop identification with their
chosen STEM discipline (National Academies of Sciences, Engineering, and
Medicine, 2017; Laursen et al., 2010).

OBJECTIVE 1.2: EXISTENCE AND USE OF SUPPORTS THAT HELP STEM INSTRUCTORS USE EVIDENCE-BASED EDUCATIONAL PRACTICES

Importance of the Objective


Advancing adoption of evidence-based educational practices is diffi-
cult because of such barriers as little support from faculty members, other
instructors, and departments; few incentives for improved teaching; inap-
propriate classroom infrastructure; limited awareness of research-based in-
structional practices; and lack of time (Henderson, Beach, and Finkelstein,
2011; National Research Council, 2012a; National Academies of Sciences,
Engineering, and Medicine, 2016). Departments are a critical unit for
change in undergraduate STEM since they represent not only individual
instructors’ values and aspirations, but also the whole curriculum, beyond
the individual courses (e.g., Wieman, Perkins, and Gilbert, 2010). Hence,

departmental and institutional supports, including professional develop-


ment, are essential to help instructors learn about and adopt evidence-based
educational practices.
Incorporating new approaches into established teaching practices is
a challenge for many instructors. And even when they try a new instruc-
tional method, many abandon it due to lack of support (Henderson and
Dancy, 2009; Henderson, Dancy, and Niewiadomska-Bugaj, 2012; National
Research Council, 2012a). For example, Ebert-May and colleagues (2011)
conducted a survey and also videotaped direct observations of teaching
practices among faculty who had participated in workshops introducing
evidence-based teaching practices. Although 75 percent of the participants
indicated in response to the survey that they were using evidence-based
practices, including student-centered and cooperative learning approaches,
analysis of the videotapes showed that only 25 percent had moved toward
these approaches, and 60 percent had not changed their teaching practices
at all.
If instructors are to make lasting positive changes to their pedagogy,
they often need instructional support, which can include time and re-
sources for professional development opportunities (e.g., through a cen-
ter for teaching and learning), mini-grants for instructional improvement,
and development of instructional facilities that support different types of
evidence-based educational practices. Support can also be provided by
external entities. For example, many professional societies offer programs
to introduce instructors to active learning strategies and support the use of
these strategies (Council of Scientific Society Presidents, 2013).
Research on instructional change has advanced in the past decade,
so more is now known about the factors that constrain movement to-
ward evidence-based STEM teaching (Smith, 2013). However, much less is
known about what supports adoption of evidence-based educational prac-
tices. Because there have been few efforts to systematically identify sources
of instructional support for the use of evidence-based instructional practices
and track how those resources are allocated by administrators and used by
instructors, research in this area is needed. The limited research available
indicates that the objective of existence and use of supports is complex
and multifaceted. For example, physical classroom and laboratory space
and technology infrastructure are likely to play a role in instructors’ use of
evidence-based practices. Although multiple indicators of multiple factors
may be needed to fully measure the status of this objective, the committee
proposes two indicators as a starting point, focusing on professional devel-
opment and support for course development and design.


Proposed Indicators

Indicator 1.2.1: Extent of Instructors' Involvement in Professional Development
One barrier to increased use of evidence-based educational practices is
a lack of awareness of such practices (Henderson, Beach, and Finkelstein,
2011). Professional development is perhaps the most frequent approach
to developing such awareness. Over the past decade, STEM disciplinary
societies and departments have begun offering professional development
programs and colleges and universities have been establishing centers for
teaching and learning to help instructors across all disciplines improve their
teaching practices (National Research Council, 2012a). At the same time,
discipline-based education research has advanced, identifying evidence-
based practices to incorporate in these growing professional development
programs.
Some evaluations of such programs suggest that instructors are more
likely to adopt new, evidence-based teaching practices when they participate
in professional development that includes three features (Henderson, Beach,
and Finkelstein, 2011):

• sustained efforts that last 4 weeks, one semester, or longer;


• feedback on instructional practice; and
• a deliberate focus on changing instructors’ conceptions about
teaching and learning.

A more recent evaluation of a professional development program in


the geosciences yielded positive but somewhat different results. Manduca
and colleagues (2017) evaluated “On the Cutting Edge,” a program that
included workshops and a website to share teaching resources, to determine
whether participation had led to use of evidence-based teaching practices.
The authors surveyed program participants in 2004, 2009, and 2012, ask-
ing about teaching practices, engagement in education research and scien-
tific research, and professional development related to teaching. In addition,
they directly observed teaching using the Reformed Teaching Observation
Protocol2 and conducted interviews to understand what aspects of the pro-
gram had supported change. Analysis of the survey data indicated that use
of evidence-based educational practices had become more common, espe-
cially among faculty who invested time in learning about teaching. Instruc-
tors who had participated in one or more workshops and regularly used

2 See http://www.public.asu.edu/~anton1/AssessArticles/Assessments/Biology%20Assessments/RTOP%20Reference%20Manual.pdf [July 2017].

resources provided by the program’s Website were statistically more likely


to use evidence-based educational strategies. Respondents also reported
that learning about teaching, the availability of teaching resources, and in-
teractions with peers had supported changes in their teaching practice. The
authors suggest that even one-time participation in a workshop with peers
can lead to improved teaching by supporting a combination of affective and
cognitive learning outcomes.
Henderson, Beach, and Finkelstein (2011) emphasize instructors’ ex-
tended time in formal professional development programs as key to im-
plementation of evidence-based teaching practices, while Manduca and
colleagues (2017) emphasize instructors’ extended time in informal pro-
fessional development (learning about teaching through interactions with
peers, accessing the Website, etc.) as critical for change in teaching prac-
tices. But both groups of authors found that investing time in learning
about evidence-based teaching practices supports the implementation of
such practices: This finding contributed to the committee’s proposed indica-
tor on the extent of instructors’ involvement in professional development.
More and better data about the nature of instructional supports and
their use are needed. Beach and colleagues (2016) recently pointed out
that faculty development is entering an age of evidence that will require
faculty development units (e.g., campus teaching and learning centers) to
collect more data about development services and make better use of the
data for program improvement. In response to this push for evidence, it is
possible that data about instructors’ use of faculty development units and
the resources available from such units are being collected at the institution
level, but the committee knows of no efforts to aggregate those data across
institutions.

Indicator 1.2.2: Availability of Support or Incentives for Evidence-Based Course Development or Course Redesign
Course development and course redesign are time-intensive activities
(Dolan et al., 2016). For example, when participants in the National Acad-
emies Summer Institutes on Undergraduate Education3 were surveyed after
returning to their home institutions, they frequently commented that it took
3 or more years of experimenting with learner-centered teaching strategies
before they could implement those strategies effectively (Pfund et al., 2009). 
Studies conducted at research-intensive universities that may not always

3 The week-long summer institutes focusing on life sciences education engaged participants in active learning and formative assessment, to help them both understand and experience these evidence-based educational practices. See http://www.hhmi.org/news/hhmi-helps-summer-institute-expand-regional-sites [September 2017].

value teaching have found that instructors’ course development and rede-
sign can sometimes be accelerated when they receive appropriate support,
such as instructional resources, establishment of faculty learning communi-
ties (see, e.g., Tewksbury and MacDonald, 2005) and teaching and learning
centers, and help from trained instructional assistants (see, e.g., Wieman,
Perkins, and Gilbert, 2010). Faculty learning communities have also been
developed at 2-year colleges (Sipple and Lightner, 2013).
Support for the time instructors need to develop or redesign a course
usually comes from the department or institution in the form of course buy-
outs during the academic year (Dolan et al., 2016). Financial support can
come as additional compensation during the academic year as “overload”
or payment during unfunded summer months. Instructional resources can
include content repositories, course templates, assessment/learning objec-
tive alignment tools, and successful course design models. Support can also
come from different types of people, including content developers (col-
laborating faculty, co-instructors, postdoctoral fellows, graduate students,
undergraduate students), experts in pedagogy, assessment, and instructional
technology, and other instructors in peer learning communities. All of these
various forms of support are helpful, if not essential, but they require spe-
cific department, college, and institutional cultures that routinely demon-
strate, in both words and actions, that evidence-based course development
and redesign are valued.
Developing a course is not a one-time activity; it is an ongoing exploration and evolution of how to engage all students and help them have the opportunity to learn (Weaver et al., 2016). Targeted experimentation, whether
developed locally or as a replication of published work, signals an approach
that values ongoing, or continuous, educational improvement (see further
discussion below). Full engagement with evidence-based course develop-
ment or redesign forces examination of learning objectives, instructional
activities and approaches, assessment of student learning outcomes, con-
nections with preceding and subsequent courses, and interdisciplinary connections.
The work fosters instructional experimentation that activates engagement
in the scholarship of teaching and learning in a manner closely linked to the
process STEM faculty members use in their own research.

OBJECTIVE 1.3: AN INSTITUTIONAL CULTURE THAT VALUES UNDERGRADUATE STEM INSTRUCTION

Importance of the Objective


The behavior of individuals in an organization is based on both their
individual characteristics and the characteristics of their organizational
contexts (Henderson and Dancy, 2007; Henderson, Beach, and Finkelstein,

2011). In academic contexts, organizational culture includes both easily


observable surface-level features (e.g., the way people dress, official policies,
classroom layouts) and deeper, often unconscious values that groups de-
velop as they respond to external environmental challenges and the need for
internal integration (Schein, 2004). The deeper-level shared, tacit assump-
tions that come to be taken for granted partly determine group members’
daily behavior (Schein, 2010). Institutional values are what those in the
organization perceive as important. For example, if a department’s culture
values research more highly than teaching, then it is unlikely to provide
support for evidence-based course development or redesign.
Because culture is dynamic and continually evolving and includes unstated, unconscious assumptions, it is very difficult to measure. Nevertheless, proxy
measures used in the research literature suggest that institutional culture
is related to students’ persistence in STEM majors. For example, Griffith
(2010) found that persistence was explained not only by students’ cumu-
lative grade-point averages, but also by an institution's ratio of undergraduates to graduate students or its share of funding going to undergraduate
education relative to research (proxies for a commitment to undergraduate
education). Titus (2004) found that students’ persistence was influenced by
institutional contexts, concluding that “persistence is positively influenced
by student academic background, college academic performance, involve-
ment, and institutional commitment” (Titus, 2004, p. 692).
A growing body of research indicates that many dimensions of current
departmental and institutional cultures in higher education pose barriers to
educators’ adoption of evidence-based educational practices (e.g., Dolan et
al., 2016; Elrod and Kezar, 2015, 2016a, 2016b). For example, allowing
each individual instructor full control over his or her course, including learning outcomes (a well-established norm in some STEM departments), can cause instructors to resist working with colleagues to establish shared
learning goals for core courses, a process that is essential for improving
teaching and learning. One recent analysis found that the University of
Colorado Boulder science education initiative made little progress in shift-
ing ownership of course content from individual instructors to the depart-
mental level because of this dimension of departmental culture (Dolan et
al., 2016).
Given the challenge of measuring organizational culture, the committee
focused on a closely related construct, organizational climate. Organiza-
tional climate is defined as the shared perceptions of people in an organiza-
tion about specific and identifiable practices and policies that represent the
“surface features” of organizational culture (Denison, 1996; Peterson and
Spencer, 1990; Schneider, 1975; Schneider and Reichers, 1983; Schneider,
Ehrhart, and Macey, 2013). Since organizational climate can be more di-
rectly measured than culture, it is a more useful construct for the purpose

of developing indicators. In addition, organizational climate is considered


more malleable than culture, changing in response to evolving strategic
imperatives, leadership messages, and policies and procedures (Kozlowski
and Ilgen, 2006).
One example comes from Michigan State University. With leader-
ship and funding from the provost and assistance from the Association of
American Universities Undergraduate STEM Initiative, the university devel-
oped new policies and practices for introductory genetics. Previously, no
single college or department had “owned” the course. A new management
committee developed a set of common learning objectives and assumed
responsibility for assigning instructors. Through this process, instructors
developed new shared perceptions of course goals and instructional ap-
proaches (i.e., institutional climate) and increased coordination of instruc-
tional and assessment practices. Follow-up assessment showed a substantial
increase in student performance relative to the prior format of introductory
genetics (Association of American Universities, 2017).
Developing good indicators of climate is important, because although
nearly every 4-year college and university has a mission statement stating
that it values high quality teaching (i.e., surface features of the organiza-
tional culture), a closer look at institutional structures and the behavior of
individuals within particular types of institutions often suggests otherwise.
One of the most direct ways to understand climate is to examine not only
the written policies of an organization, but also the actual practices. Thus,
the committee proposes two indicators related to institutional practices.

Proposed Indicators

Indicator 1.3.1: Use of Valid Measures of Teaching Effectiveness


This proposed indicator would focus on how instructors’ performance
is measured. It is well known that monitoring and feedback can lead to
significant improvements in organizational and individual learning and
performance (e.g., Harry and Schroeder, 2005; National Research Council,
2001), including performance in undergraduate STEM teaching. However,
the way this monitoring and feedback is implemented is crucial. In particu-
lar, the data sources used shape what the institution pays attention to, and
the overall integrity of the measurement and feedback process demonstrates
the importance of improvement in teaching and learning to the institution.
Student evaluations of teaching are the most common method institu-
tions use to assess teaching effectiveness (Berk, 2005; Henderson et al.,
2014) for such purposes as feedback and improvement or to inform person-
nel decisions (e.g., promotion and tenure). However, there is disagreement
in the literature about what student evaluations actually measure (e.g.,

Ory and Ryan, 2001), and there are many who argue that typical student
evaluations are not a valid measure of teaching effectiveness (e.g., Emery,
Kramer, and Tian, 2003; Zabaleta, 2007). Furthermore, there is nearly
universal agreement that high-quality assessment of teaching effectiveness
requires multiple measures (e.g., Anderson et al., 2011; Chism and Stanley,
1999). An interview-based study with 72 physics instructors (Henderson
et al., 2014, p. 16) concluded: “. . . both instructors and institutions use a
limited repertoire of the possible assessment methods. Both groups would
benefit from including additional methods.”

Indicator 1.3.2: Consideration of Evidence-Based Teaching in Personnel Decisions by Departments and Institutions
As noted above, individual instructors’ decisions about teaching prac-
tices are influenced by departmental and institutional cultures and contexts
that may facilitate or discourage use of evidence-based educational prac-
tices (Austin, 2011). Reward and promotion policies are concrete mani-
festations of culture, and the cultures and policies of research universities
have traditionally rewarded research more highly than teaching (see, e.g.,
Fairweather, 2005; Fairweather and Beach, 2002), with little or no atten-
tion to the quality of teaching.
Recent experience in institutional and multi-institutional STEM reform
efforts indicates that a lack of rewards for evidence-based teaching discour-
ages wider use. For example, a study of 11 California campuses undertaking institutional-level STEM education reform identified inadequate
incentives and rewards for faculty participation in STEM reform projects as
one of several common barriers to change (Elrod and Kezar, 2015, 2016a,
2016b). In another example, tenure-track instructors participating in exten-
sive, weekly professional development workshops as part of a university-
wide course redesign project at Purdue University reported that there was
no incentive to spend much time on course redesign, especially if it reduced
time for research, because promotion and tenure decisions would be based
entirely on research productivity (Parker, Adedokun, and Weaver, 2016).
A 2014 survey of instructors at eight research universities participating
in the Association of American Universities Undergraduate STEM Initiative
revealed concerns that evidence-based teaching activities would not be rec-
ognized and rewarded in personnel evaluations (Miller and Trapani, 2016).
Among the 1,000 respondents (a 37 percent response rate), about 60 percent
were tenured and tenure-track faculty, 26 percent were graduate students,
and the rest were other types of instructors. On average, the respondents
agreed that teaching was important to administrators, but they were less
likely to agree that effective teaching played a role in their annual perfor-

mance review and salary, posing a challenge to adoption of evidence-based


teaching practices.
Case studies suggest that consideration of evidence-based teaching
practices in personnel decisions changes the academic climate toward more
positive views about such practices. For example, Anderson and colleagues
(2011) and Wieman, Perkins, and Gilbert (2010) found that requiring
faculty to demonstrate excellence in teaching for promotion, while also
providing adequate support and structures to develop faculty teaching
ability, changed departmental climates that previously did not support un-
dergraduate STEM education.
A climate that values undergraduate STEM education would be ex-
pected to reward and encourage individuals to seek out and make use of
current knowledge related to teaching and learning. Such a climate would
reflect a shared understanding of the importance of instructors who know
about this evidence base and build their teaching practices on it. In such a
supportive climate, administrators would use validated measures other than
typical student evaluations to not only measure, but also reward, instruc-
tors’ use of evidence-based teaching practices.
Because administrators and promotion and tenure committees must
consider many factors when making personnel decisions, the committee did
not specify how they should weigh information provided by the use of vali-
dated measures of teaching quality. The committee thinks that the mere fact that such measures are a factor in personnel decisions would be significant evidence
of a climate that values undergraduate STEM education.

OBJECTIVE 1.4: CONTINUOUS IMPROVEMENT IN STEM TEACHING AND LEARNING

Importance of the Objective


Just as students’ mastery of STEM concepts and skills is supported by
ongoing formative assessment and rapid feedback, instructors’ work on
course redesign and implementation is supported by ongoing formative
and summative assessment of student learning to determine which teaching
approaches are most effective and thus inform continued course improve-
ment. At the same time, department-level improvement in STEM teaching
and learning can be supported by instructors’ collaborative work to develop
common learning goals for all students and engage in ongoing evaluations
of students’ progress toward those goals in order to guide continued im-
provement. At the institutional level, campus-wide commitment to ongoing
organizational learning and collection and analysis of data on program
effectiveness is critical to facilitate the systemic reform of undergraduate
STEM education that increases students’ mastery of STEM concepts and

skills (Elrod and Kezar, 2015, 2016b). This process of ongoing evaluation
and improvement is referred to as continuous improvement: see Box 3-4.
Institutional-level improvement efforts are essential when attempting
nationwide improvement in undergraduate STEM education. Although
most institutions are engaged in multiple quality improvement efforts in
different departments, schools, and classrooms, they are often disconnected,
rather than linked for systemic continuous improvement. Therefore, the
committee sought to identify examples and evidence of 2-year and 4-year
institutions that are engaged in coordinated continuous quality improve-
ment efforts. Because students directly experience courses and programs
of study, the committee focused on aspects of courses and programs that
signal the continuous improvement process of clearly articulated goals with aligned assessment leading to outcomes-based action aimed at improving student success.

BOX 3-4
Continuous Improvement

The concept of continuous improvement originally developed in manufac-


turing, as growing global competition in the 1970s and 1980s drove companies
around the world to seek improvement in productivity and product quality by
focusing on process improvement. In a synthesis of various definitions of con-
tinuous improvement, Jha, Noori, and Michela (1996) concluded that it included
an emphasis on the customer and that it did not focus on continual change but,
rather, on evaluating the outcomes of a change and then using the information to
guide actions to improve a process. Continuous improvement has become ubiq-
uitous as businesses have adopted a variety of methodologies (such as quality
improvement, Six Sigma, Lean Manufacturing) to increase efficiency and profits.
The public sector has also seen increased interest in the approach, but for
different reasons, as discussed by Fryer, Antony, and Douglas (2007). In public
agencies, the motivation is not to increase profits but to respond to increasing
demands for performance with static or diminishing funds. Public organizations
seek to provide the “best value” while responding to calls for increased account-
ability, transparency, and quality. Despite widespread interest, successful adoption
of continuous improvement in education has been elusive. Fryer, Antony, and
Douglas (2007) attributed this in part to the organization of K–12 schools and
districts, where work is often performed in silos, policy makers demand quick
improvements in student learning, and data are not provided frequently or quickly
enough to meaningfully inform and change practice. As a result, poor student
learning outcomes are often viewed as failures of individual teachers, schools, or
students, rather than as a by-product of a misaligned system (Fryer, Antony, and
Douglas, 2007). As a result, continuous improvement is less prevalent in states,
districts, and schools than is the case in other industries (Park et al., 2013). The
picture is similar in higher education.


At the classroom and department levels, for example, the Science
Education Initiative at the University of Colorado used a backward design
approach to course transformation. Faculty were asked to define specific,
measurable learning goals as the first step toward redesigning courses,
along with conceptual assessments to measure student progress and re-
vise instruction accordingly. Whenever possible, learning goals were estab-
lished through faculty consensus in each department (Wieman, Perkins,
and Gilbert, 2010).
At the institutional level, Bradforth and colleagues (2015, p. 283)
observed: “Universities accumulate volumes of longitudinal data that have
been underused in assessments of student learning and degree progress.”
They noted that a few universities have integrated such datasets to create
centralized, campus-wide analytics for use in measuring student progress
and guiding improvements in teaching and support services. In one example
of such integration, the iAMSTEM HUB at the University of California,
Davis, offers tools to visualize student pathways through courses and pro-
grams. At its most basic level, the ribbon-flow tool informs recruitment and
retention efforts by visualizing the majors students start with as freshmen
and the majors they complete as seniors.
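A flow visualization of this kind is straightforward to sketch with general-purpose plotting tools. The short Python example below is purely illustrative (it is not the iAMSTEM HUB's actual implementation, and the major categories and student counts are hypothetical placeholders); it uses the plotly library to draw a Sankey-style ribbon diagram linking entering majors to completed majors.

    import plotly.graph_objects as go

    # Hypothetical categories and counts; a real analysis would draw these
    # from institutional records of declared and completed majors.
    labels = ["Entering: Biology", "Entering: Engineering", "Entering: Non-STEM",
              "Completed: Biology", "Completed: Engineering", "Completed: Non-STEM"]

    fig = go.Figure(go.Sankey(
        node=dict(label=labels, pad=20, thickness=15),
        link=dict(
            source=[0, 0, 1, 1, 2, 2],          # entering major (index into labels)
            target=[3, 5, 4, 5, 3, 5],          # completed major (index into labels)
            value=[120, 60, 90, 40, 25, 300],   # placeholder student counts per path
        ),
    ))
    fig.update_layout(title_text="Student pathways: entering versus completed majors")
    fig.show()

Each ribbon's width is proportional to the number of students following that path, which is the basic recruitment and retention view described above.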

Challenges of Measuring Continuous Improvement


After considering alternative approaches to measuring continuous im-
provement in STEM teaching and learning, the committee did not pro-
pose specific indicators for this objective. The first step in continuous
improvement—establishing clearly articulated student learning goals and
assessments of students’ progress toward those goals—could potentially be
measured. The importance of this step is supported by a large and grow-
ing body of theory and empirical evidence: see Box 3-5. Institution- or department-level measures of the percentage of STEM programs or individual STEM courses that have established learning goals and aligned
assessments provide some information about continuous improvement.
For example, the PULSE assessment rubric (Brancaccio-Taras et al., 2016)
addresses the degree to which programs have developed and employed cur-
ricular and course learning goals/objectives for students along with assess-
ments that are aligned with the learning outcomes desired for students at
both the course and whole curriculum level. The rubric includes two major
rating categories, Course-Level Assessment and Program-Level Assessment.
The majority of criteria included in this life-science-focused PULSE rubric are broadly applicable across STEM disciplines. In addition, some higher education accrediting bodies require institutions to report on student learning goals and assessment results.


BOX 3-5
Establishing Learning Goals and Assessing Student Progress

The process of creating a new course, or adapting an existing one, to focus


on clearly articulated learning goals, with aligned assessments, is a widely ac-
cepted best practice in education (e.g., National Research Council, 2012b), but is
sometimes overlooked in traditional introductory STEM “service” courses, which
tend to focus on coverage rather than conceptual understanding. Casagrand and
Semsar (2017, p. 199) recommend:

strong alignment of the course goals, formative assessments, and evaluative as-
sessments in [a] course. Because assessments inform students of what they need to
know and do, new learning opportunities need to be aligned well with assessments
to be successful.

Simon and Taylor (2009) found that “explicit learning goals provide a valuable
aid to guide students in their learning” and suggested best practices for the use
of learning goals.
Engineers have long used an objectives- or outcomes-focused design pro-
cess to solve a variety of human problems. The translation of these basic design
principles to solve learning and teaching problems led to early work such as basic
principles of curriculum and instruction (Tyler, 1949), Bloom and colleagues' work
on evaluating student learning (Bloom, Hastings, and Madaus, 1971), “backward
design” (Wiggins and McTighe, 2005), and the integrated course design approach
described by Fink (2003). These approaches, typically used to redesign single
courses, can also be used to connect courses across a curriculum to constitute a
program of study and connect programs across an entire institution.
These structured approaches to course, curriculum, and program design
lend themselves to iterative testing and improvement on the basis of evidence
as suggested by Fulcher and colleagues (2017). These authors suggest that in-
structors identify specific student learning outcomes and gather data on students’
knowledge, skills, or abilities related to the targeted outcomes before and after
any changes in instruction, using assessments that yield reliable and valid scores.

student learning goals and assessment results. For example, the Senior Col-
lege and University Commission of the Western Association of Schools and
Colleges asks institutions to clearly state student learning outcomes and
standards of performance at the course, program, and institution level and
to engage faculty in developing and widely sharing these student learning
outcomes.4

4 See https://www.wscuc.org/resources/handbook-accreditation-2013/part-ii-core-commitments-

and-standards-accreditation/wasc-standards-accreditation-2013/standard-2-achieving-educational-
objectives-through-core-functions [November 2017].


However, the steps in continuous improvement that follow the establishment of clearly articulated learning goals and the assessment of students’ progress toward those goals are much more difficult to measure. Analyzing assessment data to identify and understand the causes of student learning gaps, and developing and implementing strategies to address those gaps, are complex, multilevel processes. Research suggests that the use of assessment
data is most effective for improving student learning when it involves—
and engages the support of—multiple stakeholders, including instructors,
administrators, institutional research offices, and students (Austin, 2011;
Blaich and Wise, 2010). Therefore, strategies to address student learning
gaps may include not only the actions of individual instructors to develop
or redesign their courses, but also departmental or institutional policies
that support and encourage instructors in carrying out this work, such as
those measured in Indicator 1.2.2 above. Continuous improvement also
includes ongoing evaluation to monitor the effects of the different strategies
that may be adopted. Given the complexity and variety of the actions that
may be taken at different levels of the institution, the committee does not
propose a specific indicator of continuous improvement in STEM teaching
and learning. Further research is needed to more clearly conceptualize the
meaning of “continuous improvement in STEM teaching and learning” and
define all of its key components, before indicators can be developed.

REFERENCES
Anderson, L.W., Krathwohl, D.R., and Bloom, B.S. (2001). A Taxonomy for Learning, Teach-
ing, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives. New
York: Longman.
Anderson, W.A., Banerjee, U., Drennan, C.L., Elgin, S.C.R., Epstein, I.R., Handelsman, J.,
Hatfull, G.F., Losick, R., O’Dowd, D.K., Olivera, B.M., Strobel, S.A., Walker, G.C., and
Warner, I.M. (2011). Changing the culture of science education at research universities.
Science, 331(6014), 152–153.
Arum, R., Roksa, J., and Cook, A. (2016). Improving Quality in American Higher Education:
Learning Outcomes and Assessments for the 21st Century. Hoboken, NJ: John Wiley
& Sons.
Association of American Colleges & Universities. (2007). College Learning for the New
Global Century. Washington, DC: Author. Available: https://www.aacu.org/sites/default/
files/files/LEAP/GlobalCentury_final.pdf [June 2016].
Association of American Universities. (2017). Progress Toward Achieving Systemic Change:
A Five-Year Status Report on the AAU Undergraduate STEM Education Initiative.
Washington, DC. Available: https://www.aau.edu/sites/default/files/AAU-Files/STEM-
Education-Initiative/STEM-Status-Report.pdf [October 2017].
Austin, A. (2011). Promoting Evidence-Based Change in Undergraduate Science Educa-
tion. Paper commissioned by the Board on Science Education. Available: http://sites.
nationalacademies.org/cs/groups/dbassesite/documents/webpage/dbasse_072578.pdf
[June 2016].


Bahr, P.R. (2008). Cooling out in the community college: What is the effect of academic ad-
vising on students’ chances of success? Research in Higher Education, 49(8), 704–732.
Bailey, T., Jaggars, S.S., and Jenkins, D. (2015). What We Know About Guided Pathways.
New York: Columbia University, Teachers College, Community College Research Center.
Baker, V.L., and Griffin, K.A. (2010). Beyond mentoring and advising: Toward understand-
ing the role of faculty “developers” in student success. About Campus: Enhancing the
Student Learning Experience, 14(6), 2–8.
Beach, A.L., Sorcinelli, M.D., Austin, A.E., and Rivard, J.K. (2016). Faculty Development in
the Age of Evidence: Current Practices, Future Imperatives. Sterling, VA: Stylus.
Berk, R.A. (2005). Survey of 12 strategies for measuring teaching effectiveness. International
Journal on Teaching and Learning in Higher Education, 17(1), 48–62.
Black, P., and Wiliam, D. (1998). Inside the black box: Raising standards through classroom
assessment. Phi Delta Kappan, 80(2), 139–148. Available: www.pdkint1.org/kappan/
kb1a9810.htm [July 2017].
Blaich, C.F., and Wise, K.S. (2010). Moving from assessment to institutional improvement.
New Directions for Institutional Research, 2010(S2), 67–78.
Bloom, B.S., Krathwohl, D.R., and Masia, B.B. (1964). Taxonomy of Educational Objectives.
1. Cognitive Domain. New York: Longman.
Bloom, B.S., Hastings, J.H., and Madaus, G.F. (1971). Handbook on Formative and Summa-
tive Evaluation of Student Learning. New York: McGraw-Hill.
Bonwell, C., and Eison, J. (1991). Active Learning: Creating Excitement in the Classroom
(ASHE-ERIC Higher Education Report No. 1). Washington, DC: George Washington
University. Available: http://www.ed.gov/databases/ERIC_Digests/ed340272.html [July
2017].
Borrego, M., Cutler, S., Prince, M., Henderson, C., and Froyd, J.E. (2013). Fidelity of im-
plementation of Research-Based Instructional Strategies (RBIS) in engineering science
courses. Journal of Engineering Education, 102(3), 394–425. doi:10.1002/jee.20020.
Bradforth, S.E., Miller, E.R., Dichtel, W.R., Leibovich, A.K., Feig, A.L., Martin, J.D., Bjorkman,
K.S., Schultz, Z.D., and Smith, T.L. (2015). Improve undergraduate science education.
Nature, 523, 282–284. Available: https://www.nature.com/polopoly_fs/1.17954!/menu/
main/topColumns/topLeftColumn/pdf/523282a.pdf [May 2017].
Brancaccio-Taras, L., Pape-Lindstrom, P., Peteroy-Kelly, M., Aguirre, K., Awong-Taylor, J.,
Balser, R., Cahill, M.J., Frey, R.G., Jack, R., Kelrick, M., Marley, K., Miller, K.G.,
Osgood, M., Romano, S., Uzman, J.A., and Zhao, J. (2016). The PULSE Vision &
Change Rubrics, version 1.0: A valid and equitable tool to measure transformation of life
sciences departments at all institution types. CBE-Life Sciences Education, 15(4), ar60.
Available: http://www.lifescied.org/content/15/4/ar60.full [March 2017].
Brewer, C., and Smith, D. (Eds.). (2011). Vision and Change in Undergraduate Biology Educa-
tion. Washington, DC: American Association for the Advancement of Science.
Brower, A.M., and Inkelas, K.K. (2010). Living-learning programs: One high-impact educa-
tional practice we know a lot about. Liberal Education, 96(2), 36–43.
Brownell, J.E., and Swaner, L.E. (2010). Five High-Impact Practices: Research on Learning
Outcomes, Completion, and Quality. Washington, DC: Association of American Colleges
& Universities.
Casagrand, J., and Semsar, K. (2017). Redesigning a course to help students achieve higher-
order cognitive thinking skills: From goals and mechanics to student outcomes. Advances
in Physiology Education, 41(2), 194–202. doi:10.1152/advan.00102.2016.
Chen, X. (2013). STEM Attrition: College Students’ Paths Into and Out of STEM Fields.
Washington, DC: National Center for Education Statistics, Institute of Education Sci-
ences, U.S. Department of Education.


Chism, N.V.N., and Stanley, C.A. (1999). Peer Review of Teaching: A Sourcebook. Bolton,
MA: Anker.
Council of Scientific Society Presidents. (2013). The Role of Scientific Societies in STEM
Faculty Workshops. Available: http://www.aapt.org/Conferences/newfaculty/upload/
STEM_REPORT-2.pdf [July 2017].
Crouch, C.H., and Mazur, E. (2001). Peer instruction: Ten years of experience and results.
American Journal of Physics, 69(9), 970–977.
Crisp, G., and Cruz, I. (2009). Mentoring college students: A critical review of the literature
between 1990 and 2007. Research in Higher Education, 50(6), 525–545.
Denison, D.R. (1996). What is the difference between organizational culture and organi-
zational climate? A native’s point of view on a decade of paradigm wars. Academy of
Management Review, 21(3), 619–654.
Dolan, E.L., Lepage, G.P., Peacock, S.M., Simmons, E.H., Sweeder, R., and Wieman, C.
(2016). Improving Undergraduate STEM at Research Universities: A Collection of Case
Studies. Tucson, AZ: Research Corporation for Science Advancement. Available: https://
www.aau.edu/sites/default/files/STEM%20Scholarship/RCSA2016.pdf [June 2017].
Drake, J.K. (2011). The role of academic advising in student retention and persistence. About
Campus, 16(3), 8–12.
Eagan, M.K. (2013). Understanding Undergraduate Interventions in STEM: Insights from a
National Study. Presented to the Committee on Barriers and Opportunities in Complet-
ing 2- and 4-Year STEM Degrees. Available: http://sites.nationalacademies.org/cs/groups/
dbassesite/documents/webpage/dbasse_085900.pdf [July 2017].
Eagan, K. (2016). Becoming More Student-Centered? An Examination of Faculty Teaching
Practices Across STEM and Non-STEM Disciplines Between 2004 and 2014. A report
prepared for the Alfred P. Sloan Foundation. Available: https://sloan.org/storage/app/
media/files/STEM_Higher_Ed/STEM_Faculty_Teaching_Practices.pdf [May 2017].
Ebert-May, D., Derting, T.L., Hodder, J., Momsen, J.L., Long, T.M., and Jardeleza, S.E.
(2011). What we say is not what we do: Effective evaluation of faculty professional
development programs. Bioscience, 61(7), 550–558.
Elrod, S., and Kezar, A. (2015). Increasing Student Success in STEM: A Guide to Systemic
Institutional Change. Washington, DC: Association of American Colleges & Universities.
Available: https://www.aacu.org/peerreview/2015/spring/elrod-kezar [May 2017].
Elrod, S., and Kezar, A. (2016a). Increasing Student Success in STEM: A Guide to Systemic
Institutional Change. Washington, DC: Association of American Colleges & Universities.
Elrod, S., and Kezar, A. (2016b). Increasing student success in STEM: An overview of a new
guide to systemic institutional change. In G.C. Weaver, W.D. Burgess, A.L. Childress,
and L. Slakey (Eds.), Transforming Institutions: 21st Century STEM Education. West
Lafayette, IN: Purdue University Press.
Emery, C.R., Kramer, T.R., and Tian, R.G. (2003). Return to academic standards: A critique
of student evaluations of teaching effectiveness. Quality Assurance in Education, 11(1),
37–46.
Estrada, M. (2014). Ingredients for Improving the Culture of STEM Degree Attainment with
Cocurricular Supports for Underrepresented Minority Students. Paper prepared for the
Committee on Barriers and Opportunities in Completing 2- and 4-Year STEM Degrees.
Available: http://sites.nationalacademies.org/cs/groups/dbassesite/documents/webpage/
dbasse_088832.pdf [July 2017].
Fairweather, J. (2005). Beyond the rhetoric: Trends in the relative value of teaching and re-
search in faculty salaries. Journal of Higher Education, 76(4), 401–422.


Fairweather, J. (2012). Linking Evidence and Promising Practices in Science, Technology, Engineering, and Mathematics (STEM) Undergraduate Education. Available: http://
sites.nationalacademies.org/cs/groups/dbassesite/documents/webpage/dbasse_072637.pdf
[June 2016].
Fairweather, J., and Beach, A. (2002). Variation in faculty work within research universi-
ties: Implications for state and institutional policy. Review of Higher Education, 26(1),
97–115.
Faust, J.L., and Paulson, D.R. (1998). Active learning in the college classroom. Journal on
Excellence in College Teaching, 9(2), 3–24.
Fink, L.D. (2003). Creating Significant Learning Experiences: An Integrated Approach to
Designing College Courses. Hoboken, NJ: John Wiley & Sons. Available: http://www.
unl.edu/philosophy/[L._Dee_Fink]_Creating_Significant_Learning_Experi(BookZZ.org).
pdf [June 2017].
Finley, A., and McNair, T. (2013). Assessing Underserved Students’ Engagement in High-
Impact Practices. Washington, DC: Association of American Colleges & Universities.
Fortenberry, N.L., Sullivan, J.F., Jordan, P.N., and Knight, D.W. (2007). Engineering education
research aids instruction. Science, 317(5842), 1175–1176. Available: http://itll.colorado.
edu/images/uploads/about_us/publications/Papers/SCIENCE%20PUBLISHED%20
VERSION%202007Aug31.pdf [February 2016].
Freeman, S., Eddy, S.L., McDonough, M., Smith, M.K., Okoroafor, N., Jordt, H., and
Wenderoth, M.P. (2014). Active learning increases student performance in science,
engineering, and mathematics. Proceedings of the National Academy of Sciences of
the United States of America, 111(23), 8410–8415. Available: http://www.pnas.org/
content/111/23/8410.full [July 2016].
Froyd, J.E., Borrego, M., Cutler, S., Henderson, C., and Prince, M.J. (2013). Estimates of
use of research-based instructional strategies in core electrical or computer engineering
courses. IEEE Transactions on Education, 56(4), 393–399.
Fryer, K.J., Antony, J., and Douglas, A. (2007). Critical success factors of continuous improve-
ment in the public sector: A literature review and some key findings. The TQM Magazine,
19(5), 497–517. doi: 10.1108/09544780710817900.
Fulcher, K.H., Smith, K.L., Sanchez, E.R.H., Ames, A.J., and Meixner, C. (2017). Return of the
pig: Standards for learning improvement. Research & Practice in Assessment, 11, 10–40.
Gasiewski, J.A., Eagan, M.K., Garcia, G.A., Hurtado, S., and Chang, M.J. (2012). From gate-
keeping to engagement: A multicontextual, mixed method study of student academic en-
gagement in introductory STEM courses. Research in Higher Education, 53(2), 229–261.
Gershenfeld, S. (2014). A review of undergraduate mentoring programs. Review of Educa-
tional Research, 84(3), 365–391.
Griffith, A.L. (2010). Persistence of women and minorities in STEM field majors: Is it the
school that matters? Economics of Education Review, 29(6), 911–922.
Haag, S., Hubele, N., Garcia, A., and McBeath, K. (2007). Engineering undergraduate attri-
tion and contributing factors. International Journal of Engineering Education, 23(5),
929–940.
Haak, D.C., HilleRisLambers, J., Pitre, E., and Freeman, S. (2011). Increased structure and
active learning reduce the achievement gap in introductory biology. Science, 332(6034),
1213–1216.
Habley, W.R., Bloom, J.L., and Robbins, S. (2012). Increasing Persistence: Research-Based
Strategies for College Student Success. Hoboken, NJ: John Wiley & Sons.
Harry, M.J., and Schroeder, R.R. (2005). Six Sigma: The Breakthrough Management Strategy
Revolutionizing the World’s Top Corporations. New York: Currency.


Henderson, C., and Dancy, M. (2007). Barriers to the use of research-based instructional strat-
egies: The influence of both individual and situational characteristics. Physical Review
Special Topics: Physics Education Research, 3(2), 020102.
Henderson, C., and Dancy, M.H. (2009). Impact of physics education research on the teach-
ing of introductory quantitative physics in the United States. Physical Review Special
Topics—Physics Education Research, 5(2), 020107. Available: https://journals.aps.org/
prper/abstract/10.1103/PhysRevSTPER.5.020107 [February 2018].
Henderson, C., Beach, A., and Finkelstein, N. (2011). Facilitating change in undergraduate
STEM instructional practices: An analytic review of the literature. Journal of Research
in Science Teaching, 48(8), 952–984. doi.org/10.1002/tea.20439.
Henderson, C., Dancy, M., and Niewiadomska-Bugaj, M. (2012). Use of research-based
instructional strategies in introductory physics: Where do faculty leave the innovation-
decision process? Physical Review Special Topics—Physics Education Research, 8(2),
020104.
Henderson, C., Turpen, C., Dancy, M., and Chapman, T. (2014). Assessment of teaching
effectiveness: Lack of alignment between instructors, institutions, and research recom-
mendations. Physical Review Special Topics—Physics Education Research, 10, 010106.
Holland, J.M., Major, D.A., and Orvis, K.A. (2012). Understanding how peer mentoring and
capitalization link STEM students to their majors. The Career Development Quarterly,
60(4), 343–354.
Jha, S., Noori, H., and Michela, J. (1996). The dynamics of continuous improvement. Aligning
organizational attributes and activities for quality and productivity. International Journal
of Quality Science, 1(1), 19–47.
Kober, N. (2015). Reaching Students: What Research Says About Effective Instruction in Un-
dergraduate Science and Engineering. Board on Science Education, Division of Behavioral
and Social Sciences and Education. Washington, DC: The National Academies Press.
Kozlowski, S.W.J., and Ilgen, D.R. (2006). Enhancing the effectiveness of work groups and
teams. Psychological Science in the Public Interest, 7(3), 77–124.
Kuh, G.D. (2008). High-Impact Educational Practices: What They Are, Who Has Access to
Them, and Why They Matter. Washington, DC: Association of American Colleges &
Universities.
Kuh, G.D., and O’Donnell, K. (2013). Ensuring Quality and Taking High-Impact Practices to
Scale. Washington, DC: Association of American Colleges & Universities.
Laursen, S., Hunter, A.B., Seymour, E., Thiry, H., and Melton, G. (2010). Undergraduate
Research in the Sciences: Engaging Students in Real Science. Hoboken, NJ: John Wiley
& Sons.
Lasry, N., Mazur, E., and Watkins, J. (2008). Peer instruction: From Harvard to the two-year
college. American Journal of Physics, 76(11), 1066–1069.
Lichtenstein, G., Chen, H.L., Smith, K.A., and Maldonado, T.A. (2014). Retention and persis-
tence of women and minorities along the engineering pathway in the United States. In A.
Johri and B.M. Olds (Eds.), Cambridge Handbook of Engineering Education Research.
New York: Cambridge University Press.
Light, R.J. (2004). Making the Most of College. Boston, MA: Harvard University Press.
Manduca, C.A., Iverson, E.R., Luxenberg, M., Macdonald, R.H., McConnell, D.A., Mogk,
D.W., and Tewksbury, B.J. (2017). Improving undergraduate STEM education: The ef-
ficacy of discipline-based professional development. Science Advances, 3(2), 1–15.
Mayhew, M.J., Rockenbach, A.N., Bowman, N.A., Seifert, T.A.D., Wolniak, G.C., Pascarella,
E.T., and Terenzini, P.T. (2016). How College Affects Students: 21st Century Evidence
that Higher Education Works (vol. 3). San Francisco, CA: Jossey-Bass.
Mazur, E. (1997). Peer Instruction: A User’s Manual. Upper Saddle River, NJ: Prentice Hall.


McConnell, D.A., Steer, D.N., Owens, K.D., Knott, J.R., et al. (2006). Using ConcepTests to
assess and improve student conceptual understanding in introductory geoscience courses.
Journal of Geoscience Education, 54(1), 61–68.
Metzner, B.S. (1989). Perceived quality of academic advising: The effect on freshman attrition.
American Educational Research Journal, 26(3), 422–442.
Miller, E., and Trapani, J. (2016). AAU Undergraduate STEM Initiative: Measuring Progress.
Presentation to the Committee on Developing Indicators for Undergraduate STEM Edu-
cation, Washington, DC, April 1. Available: http://sites.nationalacademies.org/cs/groups/
dbassesite/documents/webpage/dbasse_183497.pdf [December 2017].
National Academies of Sciences, Engineering, and Medicine. (2016). Barriers and Opportuni-
ties for 2-Year and 4-Year STEM Degrees: Systemic Change to Support Students’ Diverse
Pathways. Washington, DC: The National Academies Press. Available: http://www.nap.
edu/catalog/21739/barriers-and-opportunities-for-2-year-and-4-year-stem-degrees [June
2016].
National Academies of Sciences, Engineering, and Medicine. (2017). Undergraduate Research
Experiences for STEM Students: Successes, Challenges, and Opportunities. Washington,
DC: The National Academies Press.
National Research Council. (2001). Testing Teacher Candidates: The Role of Licensure Tests
in Improving Teacher Quality. Washington, DC: National Academy Press.
National Research Council. (2012a). Discipline-Based Education Research: Understanding
and Improving Learning in Undergraduate Science and Engineering. Washington, DC:
The National Academies Press.
National Research Council. (2012b). Education for Life and Work: Developing Transferable
Knowledge and Skills in the 21st Century. Washington, DC: The National Academies
Press.
National Science Foundation. (2016). FY 2017 Budget Request. Arlington, VA: Author. Avail-
able: http://www.nsf.gov/about/budget/fy2017 [June 2016].
National Science and Technology Council. (2013). Federal STEM Education 5-Year Strategic
Plan. Available: https://obamawhitehouse.archives.gov/sites/default/files/microsites/ostp/
stem_stratplan_2013.pdf [March 2017].
Ory, J.C., and Ryan, K. (2001). How do student ratings measure up to a new validity frame-
work? New Directions for Institutional Research, 2001(109), 27–44.
Packard, B.W. (2016). Successful STEM Mentoring Initiatives for Underrepresented Students:
A Research-Based Guide for Faculty and Administrators. Sterling, VA: Stylus.
Park, S., Hironaka, S., Carver, P., and Nordstrum, L. (2013). Continuous Improvement
in Education. Stanford, CA: Carnegie Foundation for the Advancement of Teaching.
Available: https://www.carnegiefoundation.org/wp-content/uploads/2014/09/carnegie-
foundation_continuous-improvement_2013.05.pdf [June 2017].
Parker, L.C., Adedokun, O., and Weaver, G.C. (2016). Culture policy and resources: Barriers
reported by faculty implementing course reform. In G.C. Weaver, W.D. Burgess, A.L.
Childress, and L. Slakey (eds.). Transforming Institutions: Undergraduate STEM Educa-
tion for the 21st Century. West Lafayette, IN: Purdue University Press.
Pascarella, E.T., and Terenzini, P.T. (2005). How College Affects Students (vol. 2). K.A.
Feldman (ed.). San Francisco, CA: Jossey-Bass.
Pascarella, E.T., Martin, G.L., Hanson, J.M., Trolian, T.L., Gillig, B., and Blaich, C. (2014).
Effects of diversity experiences on critical thinking skills over 4 years of college. Journal
of College Student Development, 55(1), 86–92. Available: http://aquila.usm.edu/cgi/
viewcontent.cgi?article=9211&context=fac_pubs [May 2017].
Peterson, M.W., and Spencer, M.G. (1990). Understanding academic culture and climate.
In W.G. Tierney (Ed.), Assessing Academic Cultures and Climates: New Directions for
Institutional Research (pp. 3-18). San Francisco, CA: Jossey-Bass.


Pfund, C., Miller, S., Brenner, K., Bruns, P., Chang, A., Ebert-May, D., Fagen, A.P., Gentile,
J., Gossens, S., Khan, I.M, Labov, J.B., Pribbenow, C.M., Susman, M., Tong, L., Wright,
R., Yuan, R.T., Wood, W.B., and Handelsman, J. (2009). Professional development:
Summer institute to improve university science teaching. Science, 324(5926), 470–471.
Available: http://science.sciencemag.org/content/324/5926/470.long [May 2017].
Pizzolato, J.E. (2008). Advisor, teacher, partner: Using the learning partnerships model to
reshape academic advising. About Campus, 13(1), 18–25.
President’s Council of Advisors on Science and Technology. (2012). Engage to Excel: Produc-
ing One Million Additional College Graduates with Degrees in Science, Technology,
Engineering, and Mathematics. Available: https://obamawhitehouse.archives.gov/sites/
default/files/microsites/ostp/pcast-engage-to-excel-final_2-25-12.pdf [July 2017].
Prince, M. (2004). Does active learning work? A review of the research. Journal of Engineering
Education, 93(3), 223–231.
PULSE Fellows. (2016). The PULSE Vision and Change Rubrics Version 2.0. Available:
http://api.ning.com/files/Kfu*MfW7V8MYZfU7LNGdOnG4MnryzUgUpC2IxdtUmucn
B4QNCdLaOwWGoMoULSeKw8hF9jiFdh75tlzuv1nqtfCuM11hNPp3/PULSERubrics
Packetv2_0_FINALVERSION.pdf [May 2017].
Roediger, H.L., Agarwal, P.K., McDaniel, M.A., and McDermott, K.B. (2011). Test-enhanced
learning in the classroom: Long-term improvements from quizzing. Journal of Experi-
mental Psychology: Applied, 17(4), 382–395.
Savage, A.F. (2014). Science literacy: A key to unlocking a fully-engaged citizenry. Diver-
sity and Democracy, 17(3). Available: https://www.aacu.org/diversitydemocracy/2014/
summer/savage [March 2017].
Savage, A.F., and Jude, B.A. (2014). Starting small: Using microbiology to foster scientific
literacy. Trends in Microbiology. http://dx.doi.org/10.1016/j.tim.2014.04.005.
Saxe, K., and Braddy, L. (2016). A Common Vision for Undergraduate Mathematical Sciences
Programs in 2025. Washington, DC: Mathematical Association of America.
Schein, E.H. (2004). Organizational Culture and Leadership (3rd ed.). San Francisco, CA:
Jossey-Bass.
Schein, E.H. (2010). Organizational Culture and Leadership (4th ed.). Hoboken, NJ: John
Wiley & Sons, Inc.
Schneider, B. (1975). Organizational climates: An essay. Personnel Psychology, 28(4), 447–479.
Schneider, B., and Reichers, A.E. (1983). On the etiology of climates. Personnel Psychology,
36(1), 19–39.
Schneider, B., Ehrhart, M.G., and Macey, W.H. (2013). Organizational climate and culture.
Annual Review of Psychology, 64, 361–388.
Seidman, A. (1991). The evaluation of a pre/post admissions/counseling process at a suburban
community college: Impact on student satisfaction with the faculty and the institution,
retention, and academic performance. College and University, 66(4), 223–232.
Simon, B., and Taylor, J. (2009). What is the value of course-specific learning goals? Journal
of College Science Teaching, 39(2), 52–57.
Sipple, S., and Lightner, R. (2013). Developing Faculty Learning Communities at Two-Year
Colleges: Collaborative Models to Improve Teaching and Learning. Sterling, VA: Stylus.
Smith, D. (2013). Describing and Measuring Undergraduate Teaching Practices. Wash-
ington, DC: American Association for the Advancement of Science. Available: http://
ccliconference.org/files/2013/11/Measuring-STEM-Teaching-Practices.pdf [May 2017].
Strayhorn, T.L. (2011). Bridging the pipeline: Increasing underrepresented students’ prepara-
tion for college through a summer bridge program. American Behavioral Scientist, 55(2),
142–159. doi: 10.1177/000276421038187.
Tewksbury, B.J., and MacDonald, R.H. (2005). Designing Effective and Innovative Courses.
Available: http://wp.stolaf.edu/cila/files/2014/08/Assignment_Part_1.2.pdf [July 2017].


Titus, M.A. (2004). An examination of the influence of institutional context on student per-
sistence at 4-year colleges and universities: A multilevel approach. Research in Higher
Education, 45(7), 673–699.
Trujillo, G., Aguinaldo, P., Anderson, C., Busamante, J., Gelsinger, D., Pastor, M., Wright,
J., Marquez-Magana, L., and Riggs, B. (2015). Near-peer STEM mentoring offers unex-
pected benefits for mentors from traditionally underrepresented backgrounds. Perspec-
tives on Undergraduate Research and Mentoring, 4.1. Available: http://blogs.elon.edu/
purm/2015/11/11/near-peer-stem-mentoring-offers-unexpected-benefits-for-mentors [July
2016].
Tyler, R.W. (1949). Basic Principles of Curriculum and Instruction. Chicago, IL: University
of Chicago Press.
Weaver, G.C., Burgess, W.D., Childress, A.L., and Slakey, L. (2016). Transforming Institu-
tions: Undergraduate STEM Education for the 21st Century. West Lafayette, IN: Purdue
University Press.
Wieman, C., Perkins, K., and Gilbert, S. (2010). Transforming science education at large
research universities: A case study in progress. Change: The Magazine of Higher Learn-
ing, 42(2), 7–14.
Wiggins, G., and McTighe, J. (2005). Understanding by Design (second ed.). Alexandria, VA:
Association for Supervision and Curriculum Development.
Zabaleta, F. (2007). The use and misuse of student evaluations of teaching. Teaching in Higher
Education, 12(1), 55–76.


Goal 2: Strive for Equity, Diversity, and Inclusion

Equity, diversity, and inclusion are distinct concepts, and all three
are critically important to ensuring that the undergraduate STEM
educational system meets the nation’s needs and serves all people
(Witham et al., 2015). There has been long-standing discussion about the compatibility of equity and excellence in STEM education (e.g., Association of American Colleges & Universities, 2015; Gates, 1995; Howard Hughes Medical Institute, 2016; Malcom et al., 1984). There is
growing recognition that in order to achieve excellence and effectiveness,
the STEM educational system needs to serve all students well (e.g., National
Academies of Sciences, Engineering, and Medicine, 2016). Therefore, the
committee’s second goal is for STEM undergraduate education to be equi-
table, diverse, and inclusive.
To be considered equitable, institutions and STEM departments would
provide enrolled students with adequate support to enter, persist, and suc-
cessfully complete STEM general education coursework or STEM degrees,
by engaging all students in evidence-based STEM educational practices
and programs.1 To be considered diverse, the national pool of students
participating and succeeding in undergraduate STEM education would be
representative of the demographics of the U.S. college student population.
STEM instructors, including faculty and graduate student educators, would
also reflect the national pool of individuals eligible to teach in undergradu-
ate STEM education. Finally, to be inclusive, undergraduate STEM learning
environments would need to effectively engage and educate diverse learners.

1 Such practices and programs are detailed in Chapter 3.


The major sections of this chapter address the committee’s four objec-
tives for Goal 2:

• Objective 2.1 Equity of access to high-quality undergraduate STEM educational programs and experiences
• Objective 2.2 Representational diversity among STEM credential earners
• Objective 2.3 Representational diversity among STEM instructors
• Objective 2.4 Inclusive environments in institutions and STEM departments

These objectives reflect the optimal state of an equitable, diverse, and inclusive undergraduate STEM educational system, and the following sec-
tions of this chapter focus on each one. Each section opens by describ-
ing the objective and summarizing research demonstrating its importance
for improving the quality and impact of undergraduate STEM education.
It then proposes indicators, which are characteristics that policy makers
would observe to monitor progress toward the objective: see Table 4-1. In
Appendix B, the committee offers potential measures for each indicator:
specific quantitative variables that provide a reliable method for monitoring
progress toward achieving the objective.
The numbers of women, minorities, economically disadvantaged people, and people with disabilities who participate in and earn bachelor’s degrees in STEM fields have grown over the past several decades, though the patterns of growth and participation vary by discipline and subpopulation
(National Science Foundation, 2017). Despite this progress, these groups
remain underrepresented among STEM degree earners, relative to their
representation among the nation’s adult population, to the enrolled college
student population, and to undergraduate degree earners (National Science
Foundation, 2017). As a recent National Academies of Sciences, Engineer-
ing, and Medicine (2016) report outlines, the reasons for the continued
underrepresentation of women, minorities, people who are economically
disadvantaged, and people with disabilities are numerous, complex, and
systemic. These factors include disparate levels of exposure to STEM in
K–12 schools and communities, unequal access to advanced coursework
in middle and high school mathematics and science, stratified patterns of
college attendance, unwelcoming disciplinary cultures, and “chilly” de-
partmental climates that do not include diverse role models. Because these
factors operate at different points along the pathways to a STEM under-
graduate degree, achieving equitable participation and outcomes in STEM
requires a multipronged approach, involving new practices, policies, and
structures across the educational system (National Academies of Sciences,
Engineering, and Medicine, 2016).


TABLE 4-1  Objectives and Indicators of Equity, Diversity, and Inclusion

Objective 2.1 Equity of access to high-quality undergraduate STEM educational programs and experiences
    Indicator 2.1.1 Institutional structures, policies, and practices that strengthen levels of STEM readiness for entering and enrolled college students
    Indicator 2.1.2 Entrance to and persistence in STEM academic programs
    Indicator 2.1.3 Equitable student participation in evidence-based STEM educational practices

Objective 2.2 Representational diversity among STEM credential earners
    Indicator 2.2.1 Diversity of STEM degree and certificate earners in comparison with diversity of degree and certificate earners in all fields
    Indicator 2.2.2 Diversity of students who transfer from 2- to 4-year STEM programs in comparison with diversity of students in 2-year STEM programs
    Indicator 2.2.3 Time to degree for students in STEM academic programs

Objective 2.3 Representational diversity among STEM instructors
    Indicator 2.3.1 Diversity of STEM instructors in comparison with diversity of STEM graduate degree holders
    Indicator 2.3.2 Diversity of STEM graduate student instructors in comparison with diversity of STEM graduate students

Objective 2.4 Inclusive environments in institutions and STEM departments
    Indicator 2.4.1 Students pursuing STEM credentials feel included and supported in their academic programs and departments
    Indicator 2.4.2 Instructors teaching courses in STEM disciplines feel supported and included within their departments
    Indicator 2.4.3 Institutional practices are culturally responsive, inclusive, and consistent across the institution


OBJECTIVE 2.1: EQUITY OF ACCESS TO HIGH-QUALITY UNDERGRADUATE STEM EDUCATIONAL PROGRAMS AND EXPERIENCES

Importance of the Objective


Advancing the nation’s STEM education system and workforce is a
goal shared by many of the nation’s leading scientific societies, higher edu-
cation associations, and science-related federal agencies. Within the past
several years, reports from the American Association for the Advancement
of Science (2011, 2015), the Obama Administration’s President’s Council
of Advisors on Science and Technology [PCAST] (2012), the Howard
Hughes Medical Institute (2016), and the Association of American Colleges
& Universities (Elrod and Kezar, 2015, 2016) have emphasized the need
to broaden participation in STEM disciplines and increase STEM degree
completion. To achieve these aims, these organizations recommend that
2-year and 4-year colleges and universities engage all students in STEM
fields by offering high-quality learning experiences with adequate academic
and institutional support to succeed in key introductory and gateway sci-
ence and mathematics courses (see also National Academies of Sciences,
Engineering, and Medicine, 2016). These reports also call for widespread implementation of new teaching, learning, and student support programs that research has shown to enhance student learning and persistence.
In developing indicators of the status of this objective, the committee
responded to its charge to focus on the U.S. undergraduate STEM education
system, rather than the inputs into this system (such as the K–12 system that
prepares entering undergraduates). Hence, rather than proposing indicators
of students’ STEM readiness upon high school exit (e.g., standardized test scores or high school mathematics and science course-taking patterns),
the committee focused on institutional practices that improve enrolled
college students’ STEM readiness, students’ STEM-related experiences,
and students’ progression through key educational milestones that lead to
STEM degree completion.
The committee’s approach was similar to that of an earlier report that
proposed indicators for monitoring the capacity of the nation’s K–12 edu-
cational system to advance the goals of expanding the number of STEM
degree seekers and enriching the STEM-capable workforce (National Re-
search Council, 2013). In addition to student outcomes, the indicator
system proposed by that prior committee included measures of schooling
environments, policies, and practices that have been demonstrated to be
effective in advancing the aforementioned goals. Similarly, the committee’s proposed indicators for this objective relate to institutional environments, policies, and practices that have a demonstrably positive effect on student entrance into, persistence within, and completion of STEM degree programs.

Proposed Indicators

Indicator 2.1.1: Institutional Structures, Policies, and Practices That Strengthen STEM Readiness for Entering and Enrolled College Students
The share of first-year students at 4-year colleges and universities indi-
cating an intention to major in STEM fields grew from about one-third in
2007 to 45 percent in 2014 (National Science Foundation, 2016). However,
the share of all bachelor’s degrees awarded in STEM fields has stayed steady
at about one-third (National Science Foundation, 2016). There is no single
cause of the gap between student intentions and outcomes in STEM, but
research shows that many students who initially intend to pursue STEM bachelor’s degrees switch out of these fields after their experiences in introductory mathematics and science coursework (National Acad-
emies of Sciences, Engineering, and Medicine, 2016). Moreover, although
historically underrepresented populations report their intentions to pursue
STEM bachelor’s degrees at rates that are comparable to or slightly below
the average for all students (National Science Foundation, 2017), STEM
degree completion rates for Black, Hispanic, and Native American students
fall far below those for Asians and whites (National Academies of Sciences,
Engineering, and Medicine, 2016). Previous research (e.g., Estrada, 2014)
has identified critical forms of student support and co-curricular educa-
tional practices that institutions can put into place to smooth the pathways
to STEM degree completion for a range of student populations, particularly
those students who are underrepresented in STEM fields.
Following the recommendations in a previous report of the National
Academies of Sciences, Engineering, and Medicine (2016), the committee
underscores the need for institutions to take active steps to ensure equitable
access to evidence-based STEM educational practices—even in the face of
inequities in STEM preparedness among admitted and enrolled students.
The proposed indicator above would provide information about the preva-
lence of institutional and instructional programs and practices that are de-
signed to strengthen students’ competencies and skills that are foundational
to success in STEM undergraduate degree programs. The focus of these
programs and practices might be mathematics, writing, or critical thinking
or they might be intended to provide students with the navigational and
academic skills necessary to succeed in college.
Depending on the type of institution, these programs and practices
may take the form of dual enrollment programs (An, 2013; Blankenberger, Lichtenberger, and Witt, 2017; Speroni, 2011), bridge programs (Gilmer, 2007; Lenaburg et al., 2012), accelerated curriculum for developmental
mathematics (Rutschow and Diamond, 2015), and the use of guided path-
ways (Bailey, Jaggars, and Jenkins, 2015; Baker, 2016). These examples are
just starting points, but they reflect the types of programs and practices that have been implemented across postsecondary institutions of varying type, selectivity, and mission. Such programs and practices have been shown to
support diverse students who have been admitted to college with gaps in
their STEM-related knowledge (Lenaburg et al., 2012; Baker, 2016; Gilmer,
2007; Rutschow and Diamond, 2015; Speroni, 2011). Thus, these institu-
tional supports can create more equitable conditions that help students to
begin and succeed in STEM fields.
The committee recognizes that institutions vary in size and resources
and thus in their capacity to offer such programs and practices, as well as
in the number of students they can serve with these programs and practices.
For this reason, it is important to disaggregate the specific measures of Indi-
cator 2.1.1 by type of 2-year or 4-year institution (e.g., research university,
liberal arts college, for-profit or nonprofit 2-year college), special mission
status (i.e., minority-serving institution), selectivity, and size.

Indicator 2.1.2: Entrance to and Persistence in STEM Academic Programs


Students of color, women, low-income students, and first-generation
students are less likely to enter and persist in undergraduate STEM edu-
cational programs than other students, further contributing to inequities
in STEM degree completion (National Science Foundation, 2017). And
although women, men, students of color, and white students, on average,
switch out of STEM degree programs to non-STEM programs at similar
rates, equity gaps in persistence rates exist within certain STEM disciplines
and at highly selective institutions (Chen, 2013). These differences in persistence exacerbate disparities in access to critical STEM learning experiences that occur in advanced-level coursework (e.g., junior laboratory, capstone experiences, senior design courses). Given the im-
portance of entrance to and persistence in STEM undergraduate degree
programs to advancing the goal of equity and diversity in undergraduate
STEM education, this indicator would consist of multiple measures of
student entry and persistence in STEM majors, disaggregated by race and
ethnicity, gender, socioeconomic status, ability status, and STEM discipline.
The committee notes that this indicator overlaps with Indicator 3.2.1 on course-to-course and year-to-year retention in STEM programs (see Chapter 5). That indicator focuses on overall persistence in STEM; this one would disaggregate the persistence trends, illuminating how persistence differs across demographic groups.


Developing and maintaining high levels of interest in STEM for all en-
tering college students are necessary conditions for advancing equity and di-
versity in undergraduate STEM education. While students likely develop such interest during their precollege education, colleges and universities also play a role in promoting and maintaining their interest in pursuing STEM degrees. For example, institu-
tions may be able to bolster interest in STEM among first-year students by
providing early exposure to the range of STEM program offerings, offer-
ing opportunities for career exploration, and providing both entering and
first-year students with individualized advising and degree planning (see
discussion of advising and mentoring in Chapter 3).
Achieving greater equity and diversity in undergraduate STEM edu-
cation also requires that students who express an interest in STEM early
in their college experiences actually enter into and persist in STEM de-
gree programs. Indicator 2.1.2 (above) also includes measures of student
persistence—the extent to which students who enter into a STEM degree
program maintain enrollment in that program. Again, disaggregating by
key student characteristics and STEM discipline would indicate whether
certain student populations are more likely to leave STEM fields than
others.
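
To make the underlying measures concrete, the sketch below shows one way entry and persistence rates might be computed and disaggregated by a demographic attribute; the table, column names, and values are hypothetical rather than part of any existing data system.

    import pandas as pd

    # Hypothetical student records: one row per student, with a demographic
    # group label, whether the student declared a STEM major, and whether a
    # declared STEM major was still enrolled in STEM the following year.
    students = pd.DataFrame({
        "group":         ["A", "A", "A", "B", "B", "B"],
        "entered_stem":  [True, True, False, True, True, False],
        "still_in_stem": [True, False, False, True, True, False],
    })

    # Share of each group that entered a STEM major.
    entry_rate = students.groupby("group")["entered_stem"].mean()

    # Among those who entered, the share still enrolled in STEM a year later.
    entrants = students[students["entered_stem"]]
    persistence_rate = entrants.groupby("group")["still_in_stem"].mean()

    print(pd.DataFrame({"entry_rate": entry_rate,
                        "persistence_rate": persistence_rate}))
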

Indicator 2.1.3: Equitable Student Participation in Evidence-Based STEM Educational Practices and Programs
Evidence-based STEM educational practices and programs can take
many forms and can be provided inside or outside the classroom. Inside
the classroom, discipline-based education research has deepened our un-
derstanding of teaching approaches that improve student learning within
and across STEM disciplines (National Research Council, 2012). These
evidence-based teaching practices have been shown to increase students’
mastery of STEM concepts and skills. They include interactive lectures,
collaborative learning activities, lecture-tutorial approaches, and labora-
tory experiences that incorporate realistic scientific practices and the use of
technology (National Research Council, 2012).
At the same time, research has begun to show that programs outside
the classroom—such as undergraduate research experiences, internships,
and summer bridge programs—can support students’ learning, identifica-
tion with STEM, and persistence in STEM programs (National Academies
of Sciences, Engineering, and Medicine, 2016, 2017). For example, studies
of STEM majors found that participating in STEM-specific programs en-
riched their understanding of how STEM content is related to real-world
applications (e.g., Estrada, 2014). When implemented well, such programs
can provide valuable learning experiences for students, potentially boosting their interest in STEM fields, strengthening their STEM identities, and
cultivating graduate school aspirations (Eagan et al., 2013). There is also
some evidence that these educational practices and programs are par-
ticularly effective in increasing learning and retention among historically
underrepresented students in STEM, making access to and participation
in such practices vital to advancing equity in STEM outcomes and degree
attainment.
This indicator would provide information about the patterns of ac-
cess to and participation in evidence-based STEM educational practices
and co-curricular programs. It would be disaggregated across student de-
mographic groups (race and ethnicity, gender, socioeconomic status, and
ability status), institutional type (e.g., research university, liberal arts col-
lege, 2-year college), and STEM discipline. Students’ engagement in such
practices and programs has been shown to promote mastery of STEM
concepts and skills and retention in STEM majors (e.g., National Research
Council, 2012). In addition, there is growing evidence that participation in
certain evidence-based programs outside the classroom (e.g., undergradu-
ate research, mentoring, bridge programs) can boost STEM retention and
career and graduate school aspirations (Eagan et al., 2013; Estrada, 2014;
Gilmer, 2007; Lenaburg et al., 2012; Packard, 2016). For example, a recent
review of research related to undergraduate research experiences (National
Academies of Sciences, Engineering, and Medicine, 2017) concluded that
participation in this type of evidence-based educational practice is beneficial
for all students and, for students from historically underrepresented groups,
improves their persistence in STEM and helps to validate their disciplinary
identity.
Only limited research is available on the extent to which different stu-
dent groups participate in these valuable evidence-based educational prac-
tices. For example, data from the National Survey of Student Engage­ment
(NSSE) indicate that certain student groups that are historically under­
represented in STEM fields, including Blacks, Hispanics, Native Ameri-
cans, low-income students, and first-generation students, are less likely
than other students to participate in undergraduate research and five other
high-impact practices, though these data do not provide insight into the
patterns of participation specifically for STEM majors2 (Finley and M ­ cNair,
2013; N ­ ational Survey of Student Engagement, 2016). However, the recent
­National Academies (2017) study of undergraduate research experiences
found that data on who participates in these experiences overall or at
specific types of institutions have not been collected systematically, and
recommended that institutions collect those data.

2 As discussed in Chapter 3, these high-impact practices are not STEM-specific.


Research is needed to characterize differences among student groups in the effects of engaging in evidence-based STEM educational practices and
programs, as well as to identify the mechanisms by which such practices
increase retention in STEM degree programs (National Research Council,
2012; National Academies of Sciences, Engineering, and Medicine, 2017).
This proposed indicator would facilitate such research, tracking group
differences in participation in evidence-based educational practices and
programs.

OBJECTIVE 2.2: REPRESENTATIONAL DIVERSITY AMONG STEM CREDENTIAL EARNERS

Importance of the Objective


Expanded access to higher education generally, coupled with targeted
efforts to broaden participation in STEM fields, has led to larger numbers
of women, minorities, and other historically underserved populations earn-
ing STEM degrees. However, the total number of undergraduate credentials
earned by these populations in all fields grew at a faster rate. Thus, the pro-
portion of STEM degrees earned by some of these underrepresented groups
has not increased—and has even gone down in certain STEM disciplines.
Trend data on the diversity of STEM degree earners reflect the “declining equity” of U.S. higher education, whereby inequities experienced by racial and ethnic minority students and economically disadvantaged students are actually worsening, despite overall increases in college attendance (Astin and Oseguera,
2004; Malcom-Piqueux and Malcom, 2013). If students from historically
underrepresented groups entered and completed STEM degree programs at
the same rate as their white, male counterparts in these programs, then we
would expect that STEM degree earners would reflect the diversity of all
college degree earners.
Outcome equity is defined as having reached parity in key educational
milestones and outcomes (e.g., declaring a STEM major or earning STEM
degrees). In other words, if a particular demographic group represents x
percent of some base population and also earns x percent of STEM bach-
elor’s degrees, that group has achieved outcome equity. If the demographic
breakdown of STEM degree earners mirrors the demographic breakdown
of an appropriate base population, it can be said that the undergraduate
STEM education system has achieved representational diversity among
STEM degree earners. Though outcome equity focuses only on the outputs
of the educational system, reaching representational diversity among STEM
credential earners is an indication that institutional practices, policies, and
structures are functioning as they should to serve all student populations
well.
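
A minimal sketch of this parity comparison, using invented counts purely for illustration: a group’s share of STEM credentials is divided by its share of credentials in all fields, so a ratio near 1.0 signals representational parity and values well below 1.0 signal underrepresentation.

    def representation_ratio(group_stem, total_stem, group_all, total_all):
        """Group's share of STEM credentials relative to its share of all credentials."""
        stem_share = group_stem / total_stem
        all_share = group_all / total_all
        return stem_share / all_share

    # Hypothetical counts: the group earns 12,000 of 100,000 STEM credentials
    # but 180,000 of 1,000,000 credentials awarded across all fields.
    print(round(representation_ratio(12_000, 100_000, 180_000, 1_000_000), 2))  # 0.67
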


Proposed Indicators

Indicator 2.2.1: Diversity of STEM Degree and Certificate Earners in Comparison with Diversity of Undergraduate Degree and Certificate Earners in All Fields
This proposed indicator would compare the demographic make-up of
students who earned a STEM undergraduate credential within a given time
period to the demographic composition of the total number of students
who earned an undergraduate credential in any field during that same
time period. Both groups would be disaggregated by race and ethnicity,
gender, socioeconomic status, and ability status. The committee identified
undergraduate degree and certificate earners in all fields as the appropriate
population for comparison with STEM undergraduate degree and creden-
tial earners. Other comparisons are possible (e.g., diversity of STEM degree
holders compared with the diversity of the U.S. adult population); however,
the committee selected all credential earners because these individuals were
able to successfully navigate the higher education environment to complete
their degree or certificate programs.
Though the indicator would be reported at the national level, it would
be vitally important to disaggregate the related measures by institutional
type (e.g., research university, liberal arts college, 2-year institution) and
special status (e.g., minority-serving institution). The committee recognizes
that because of long-standing inequities in college access and enrollment,
achieving equity in STEM degree attainment will require that viable path-
ways to STEM be established across a broad range of institutions, including
2-year institutions and minority-serving institutions (National Academies
of Sciences, Engineering, and Medicine, 2016). The nation’s higher edu-
cation system is highly stratified, with Blacks, Hispanics, economically
disadvantaged students, and first-generation college students more likely
than other students to attend 2-year institutions and broad-access 4-year
institutions (Van Noy and Zeidenberg, 2014). These institutions tend to be
under-resourced, offering fewer STEM degree programs and lacking the in-
structional and research capacity found at more selective doctoral-granting
and research universities. As a result, historically underrepresented students
tend to experience lower levels of access to undergraduate STEM degrees,
particularly in emergent fields.
Group differences in students’ academic preparation for success in
STEM fields reflect a complex array of factors that operate at the individ-
ual, disciplinary, departmental, institutional, and systemic levels (National
Academies of Sciences, Engineering, and Medicine, 2016; National Science
Foundation, 2017; Malcom-Piqueux and Malcom, 2013). Thus, the com-
mittee understands that the patterns of STEM credential attainment may
not exactly mirror the patterns from non-STEM fields, which provide the
comparison group in this indicator. Nevertheless, the presence of large
equity gaps in STEM degree attainment at the national level would indi-
cate that the undergraduate STEM education system is not functioning as
intended and is failing to meet the needs of all students.
The committee notes that comparisons with other base populations
(e.g., all college students, national adult population) may also be informa-
tive. For example, a recent report on the status of women, minorities, and
persons with disabilities in science and engineering (National Science Foun-
dation, 2017) presents data on the proportion of STEM degrees earned
by historically underrepresented populations along with the demographic
breakdown of the U.S. noninstitutionalized adult population. Using this
broader base population as the comparison group shows larger equity
gaps because of the compounding of inequalities in the rates of high school
completion, college entrance, and entry to STEM degree programs expe-
rienced by historically underrepresented populations. In order to ensure
that any inequities identified from the committee’s proposed indicator are
attributable only to the undergraduate STEM educational system, our pro-
posed comparison group is all undergraduate degree and certificate earners.

Indicator 2.2.2: Diversity of Students Who Transfer from 2-Year to 4-Year STEM Programs in Comparison with Diversity of Students in 2-Year STEM Programs
The nation’s 2-year institutions play an increasingly important role in
educating and preparing the nation’s STEM workforce (American Asso­
ciation of Community Colleges, 2014; National Science Foundation, 2017;
Wang, 2015). Many certificate and associate degree programs in STEM-
related technical areas (e.g., aerospace, nuclear technology) lead to high-
paying jobs in high-demand fields (American Association of Community
Colleges, 2014). In addition to awarding valuable sub-baccalaureate cre-
dentials to a diverse range of students, 2-year institutions also act as a path-
way to the bachelor’s degree through student transfers. Though national
data cannot fully characterize the exact role that 2-year institutions play in
the educational experiences of STEM bachelor’s degree earners (see Bahr
et al., 2016; Wang, 2013), nearly half of STEM bachelor’s degree holders
attended a 2-year institution at some point in their educational careers
(Mooney and Foley, 2011).
Indeed, recent analyses (National Academies of Sciences, Engineer-
ing, and Medicine, 2016) find that students who earn STEM credentials
follow complicated educational trajectories. Attending multiple institu-
tions, reverse transfers, lateral transfers, and concurrent enrollment are
increasingly common college attendance patterns among STEM credential
earners. Transferring from 2-year to 4-year institutions has great potential
for increasing the representation of women, minorities, economically dis-
advantaged students, and persons with disabilities among STEM bachelor’s
degree holders due to the high proportion of these populations enrolled in
2-year institutions (Bragg, 2012; Wang, 2015). More than 40 percent of
all undergraduate students are enrolled in 2-year institutions, and the per-
centage is even higher among minorities, first-generation, and low-income
populations (National Center for Education Statistics, 2016).
Policy makers can use this proposed indicator to monitor whether his-
torically underrepresented students, who are more likely to begin postsec-
ondary education in 2-year institutions, have equitable access to pathways
to STEM bachelor’s degrees. In particular, this indicator would identify the
extent to which the representation of women, minorities, low-income stu-
dents, and students with disabilities among students at 2-year institutions
who transfer to STEM bachelor’s degree programs mirrors their representa-
tion among all students enrolled in 2-year STEM programs.
Certainly, not all students who enter 2-year institutions plan or seek to
transfer. As mentioned above, many certificate and associate’s degree earn-
ers in STEM-related technical areas are highly employable, with earnings
exceeding those of bachelor’s degree holders in some fields. However, the under-
representation of women, minorities, low-income students, and students
with disabilities among students transferring to STEM bachelor’s degree
programs in comparison with their share of enrollment in 2-year STEM
programs may indicate that these students are experiencing unique barri-
ers to successful transfer that need to be better understood and mitigated
through institutional policies and programs.
This indicator is related to an indicator under Goal 3: Transfer between
STEM programs (see Chapter 5). The indicator here compares the diversity
of students who transfer from 2-year to 4-year STEM programs to the di-
versity of students enrolled in 2-year STEM programs; the indicator under
Goal 3 (3.2.2) compares the diversity of students transferring from 2-year
to 4-year STEM programs with the diversity of students transferring into
all 4-year programs. For both indicators, it would be valuable to have the
information disaggregated by institutional type and a range of demographic
characteristics. Monitoring Indicator 2.2.2 will inform policy makers
and practitioners about whether historically underrepresented populations
enjoy equitable access to transfer pathways from the 2-year institution to
STEM bachelor’s degree programs.
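As an illustration of how this comparison might be reported, the short sketch below computes percentage-point gaps between the demographic distribution of students who transfer into STEM bachelor's degree programs and the distribution of all students enrolled in 2-year STEM programs. The group labels and percentages are hypothetical and are included only to show the form of the calculation.

# Hypothetical sketch: percentage-point gap between a group's share of
# 2-year-to-4-year STEM transfers and its share of 2-year STEM enrollment.
# All values are illustrative, not data from this report.

two_year_stem_enrollment = {"Group A": 0.30, "Group B": 0.55, "Group C": 0.15}
stem_transfers = {"Group A": 0.22, "Group B": 0.63, "Group C": 0.15}

for group, enrolled_share in two_year_stem_enrollment.items():
    gap = stem_transfers[group] - enrolled_share
    print(f"{group}: {gap:+.1%} relative to 2-year STEM enrollment")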


Indicator 2.2.3: Time to Degree for Students in STEM Academic Programs
In addition to degree completion, time to degree is also an important
indicator of success among STEM majors. The majority of entering college
students who intend to pursue STEM bachelor’s degrees do not complete
them within 4 years (Eagan et al., 2014). In addition to switching out of
STEM degree programs entirely, many STEM aspirants attend multiple
institutions and stop out or drop out of college (Eagan et al., 2014). Atten-
dance patterns and time to degree vary, based on students’ pathways into
college, STEM discipline, degree-granting institutional type, and student
characteristics. For example, it is not surprising that STEM majors who at-
tend multiple institutions take longer to complete their degrees than those
who do not (Salzman and Van Noy, 2014). Similarly, students who decide
to major in STEM after the first semester or first year in college experi-
ence longer time to degree than those who begin as STEM majors. Among
STEM bachelor’s degree earners, women and students who are minorities at
predominantly white institutions take longer to complete their degrees than
do their male, white, and Asian counterparts, respectively (Eagan et al.,
2014). These differences in time-to-degree may be attributable to a number
of factors, including differences in the pathways that historically under-
represented populations take into and through STEM degree programs
and differences in academic preparation at college entry. Thus, extended
time-to-degree could indicate success in efforts to diversify the population
of STEM graduates.
However, gender, racial, and socioeconomic differences in time-to-
degree among STEM graduates may also be further disadvantaging students
who already experience inequities in STEM educational outcomes. Though
taking longer than 4 years to complete a STEM degree is not inherently
problematic, there are potential drawbacks. Students who take longer to
complete their degrees have to pay college tuition beyond the 4 years that
they may have originally anticipated. If these students exhaust their eligi-
bility for Pell grants or other financial aid, they may have no choice but to
accumulate high student debt burdens, work to cover college costs (which
will likely further delay STEM degree completion), or leave college alto-
gether (National Academies of Sciences, Engineering, and Medicine, 2016).
In addition, there are opportunity costs (i.e., foregone earnings) associated
with taking longer to complete a degree. Thus, this indicator will permit
the monitoring of the median time-to-degree across a range of STEM de-
gree programs, disaggregated by race and ethnicity, gender, socioeconomic
status, and disability status.


OBJECTIVE 2.3: REPRESENTATIONAL DIVERSITY AMONG STEM INSTRUCTORS

Importance of the Objective


Research shows that one effective mechanism by which to address is-
sues of equity and diversity in undergraduate education is to cultivate and
retain a diverse cadre of instructors, including tenured and tenure-track
faculty, adjunct and part-time instructors, and graduate student instructors.
Instructor diversity provides educational benefits to all students and par-
ticularly for students of color (Antonio, 2002; Hurtado, 2001; Hurtado et
al., 2012; Ibarra, 2001; Marin, 2000; Milem, 2001, 2003; Milem, Chang,
and Antonio, 2005; Smith, 2015; Umbach, 2006).
The committee uses the word “representational” in this objective pur-
posefully: ultimately, the diversity of instructors should reflect or represent
the ethnic and racial diversity of U.S. society as a whole, given that the ben-
efits of instructor diversity are clearly demonstrated by available research.
The committee’s indicators, proposed below, would help to gauge the pres-
ence of diverse faculty at the national level as a first step toward achieving
the objective. Student perceptions of campus and departmental climate are
critical to the success of students of color. Evidence consistently shows that
when students of color perceive their campus to be racially diverse and
inclusive, those students perform better academically (Gurin et al., 2002;
Hurtado et al., 2012; Smith, 2015). Research also shows that racially and
ethnically diverse campuses support all students’ intellectual development
by increasing students’ learning outcomes (Gurin et al., 2002), enhancing
critical thinking (Bowman, 2010), and improving intellectual self-concept
(Cole, 2007) and civic engagement (Bowman, 2011). Since faculty diversity
is one major signal of inclusive and diverse campus climates, maintaining
sufficiently diverse faculty members can be seen as one way to support
improved outcomes for all students.

Proposed Indicators
Although instructor diversity is a concern across all fields, the represen-
tation of historically underrepresented groups among STEM instructors is
even lower than overall levels. Thus, the committee proposes the following
two indicators that can be used to monitor progress toward representa-
tional diversity among STEM instructors. The first indicator focuses on
faculty, whether tenured, untenured, adjunct, full time or part time. The
second focuses on graduate student instructors. We present them together,
and the discussion that follows covers both.


Indicator 2.3.1: Diversity of STEM Instructors in Comparison with Diversity of STEM Graduate Degree Holders

Indicator 2.3.2: Diversity of STEM Graduate Student Instructors in Comparison with Diversity of STEM Graduate Students
Both of these indicators would allow policy makers to monitor the
extent to which the diversity of STEM instructors (in terms of gender,
race and ethnicity, socioeconomic status, and disability status) reflects the
diversity of current STEM graduate degree holders and graduate students,
respectively. This framing is intended to reflect what is reasonable to expect
from the field: it is unlikely that the diversity of STEM faculty would ever
exceed the diversity of STEM graduate degree holders, as a graduate degree
is a precursor to faculty status. By measuring the diversity of STEM faculty
members as compared to the entire pool of potential STEM faculty mem-
bers (represented here by STEM graduate degree holders), these indicators
are intended to measure progress toward the objective of representational
diversity among STEM educators.
The first indicator would need to be disaggregated by institutional
type, as well as STEM discipline and instructors’ demographic character-
istics, due to the differences in degree requirements to hold a faculty posi-
tion. Many 2-year institutions and some 4-year institutions require only
a master’s degree to be hired as an instructor, while research institutions
will only hire doctorate holders for faculty positions. As a result, we note
the importance of carefully defining the comparison groups to construct
Indicator 2.3.1.
The second indicator would also require careful disaggregation and in-
terpretation. Data from the National Science Foundation (2017, Table 7-26)
illustrate that in some STEM disciplines (i.e., physical sciences, computer
science), historically underrepresented minority doctorate recipients are
more likely than their white counterparts to have received their primary
support for their studies from teaching assistantships rather than research
assistantships. In other STEM disciplines (i.e., agricultural sciences, com-
puter science) women of all races and ethnicities are more likely than their
male counterparts to serve as teaching assistants. Given the importance of
research experience and publications to moving into STEM faculty careers
at many institutions, significant overrepresentation of students of color or
women among STEM graduate educators may be indicative of other equity
challenges.
These indicators only measure the current state of affairs of diversity in
STEM. We recognize the tautological limitation of this framing: if currently
underrepresented student groups remain underrepresented in STEM, the
same lackluster representation will remain in the degree-holding popula-
tion. Currently, the diversity of STEM faculty does not reflect the diversity
of STEM graduate degree holders. We propose that these indicators will
allow for monitoring change in the diversity of STEM educators.

OBJECTIVE 2.4: INCLUSIVE ENVIRONMENTS IN INSTITUTIONS AND STEM DEPARTMENTS

Importance of the Objective


Inclusive campus environments are those that focus on students’ intel-
lectual development, develop and use organizational resources to enhance
student learning in purposeful ways, pay attention to and value the cultural
differences that learners bring to educational experiences, and are welcom-
ing communities that engage diversity in the service of student and orga-
nizational learning (Williams, Berger, and McClendon, 2005). In short, an
inclusive campus environment integrates diversity and equity into the core
mission of the institution (Witham et al., 2015).
Cultivating inclusive, supportive learning environments in which all
students and instructors can learn and work may help catalyze and sus-
tain institutional and national progress toward achieving greater diversity
and equity in undergraduate STEM education. Underrepresented minority
students in STEM who experience negative racial experiences in their first
2 years also tend to report a lower sense of belonging to their institution
(Hurtado et al., 2007). Similarly, as first-year students majoring in science
more frequently encounter a negative racial climate on campus, their likeli-
hood of persisting in those same science majors significantly drops (Chang
et al., 2009). Hurtado and colleagues (2012) provide a comprehensive
review of empirical studies of campus climate, and the evidence overwhelm-
ingly shows that minority students tend to encounter a more hostile envi-
ronment on college campuses than other students. These students often feel
more disconnected from their institution and tend to report lower scores
on critical affective (e.g., self-efficacy) and cognitive outcomes (e.g., grades,
degree completion).
Climate is also manifested more locally in departments and academic
programs. Environments that encourage unhealthy competition among
students may reduce opportunities for group work and collaborative study
among students. Gasiewski and colleagues (2012) found that students en-
rolled in introductory STEM courses felt less inclined to work with their
peers when instructors applied norm-referenced (curved) grading strategies
in which students’ grades depend on their performance relative to their
peers. Similarly, students’ perceptions of departmental climates may be in-
fluenced by instructors’ pedagogical strategies (e.g., lecture or group work)
or by the ways in which instructors signal their openness to and accessibility
for engaging with students inside and outside the classroom (Gasiewski et
al., 2012).
Instructors’ perceptions of their departmental and institutional climates
may affect their willingness to experiment with enhancing their teaching
strategies, engagement with colleagues and students, and likelihood to want
to continue working at their current institution. Certain groups of instruc-
tors—primarily women and minorities—report higher levels of stress due to
discrimination or more hostile climates on campus (Turner and González,
2011). These factors tend to reduce overall job satisfaction (Sanderson,
Phua, and Herda, 2000) and thus increase the chances of their leaving their
academic appointments (Ponjuan, 2006; Rosser, 2004).

Proposed Indicators
Because the first two indicators of this objective are closely related, they
are presented together, and the discussion that follows covers both.

Indicator 2.4.1: Students Pursuing STEM Credentials Feel Included and Supported in Their Academic Programs and Departments

Indicator 2.4.2: Instructors Teaching Courses in STEM Disciplines Feel Included and Supported in Their Departments
STEM students’ and instructors’ experiences with supportive and inclu-
sive campus and departmental climates likely serve as leading indicators of
whether progress is being made toward the objectives of representational
diversity among STEM credential earners and STEM instructors. Percep-
tions about one’s environment are inherently personal, but when these
views are aggregated across representative samples of students and faculty,
one can make inferences about the overall nature of inclusion and support.
The first indicator above would gauge how positively STEM students feel
about their academic departments and/or institutional environments; the
second would do the same for instructors. For both indicators, it is critical
that data are disaggregated by race and ethnicity, gender, disability status,
discipline, and institutional type in order to determine whether the sense of
inclusion and support differ along these dimensions.
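As one sketch of how such aggregated perception data might be summarized, the example below averages hypothetical responses to a single inclusion item (agreement on a 1-to-5 scale) by subgroup and compares each subgroup mean to the overall mean. The item wording, scale, and responses are assumptions made for illustration; they are not survey instruments proposed by the committee.

# Hypothetical sketch: mean agreement with an inclusion/support survey item
# (1 = strongly disagree, 5 = strongly agree), by subgroup and overall.
from collections import defaultdict
from statistics import mean

responses = [
    {"subgroup": "Group A", "agreement": 4},
    {"subgroup": "Group A", "agreement": 5},
    {"subgroup": "Group B", "agreement": 3},
    {"subgroup": "Group B", "agreement": 2},
    {"subgroup": "Group C", "agreement": 4},
]

overall_mean = mean(r["agreement"] for r in responses)
by_subgroup = defaultdict(list)
for r in responses:
    by_subgroup[r["subgroup"]].append(r["agreement"])

print(f"Overall mean agreement: {overall_mean:.2f}")
for subgroup, values in sorted(by_subgroup.items()):
    print(f"{subgroup}: {mean(values):.2f} ({mean(values) - overall_mean:+.2f} vs. overall)")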

Indicator 2.4.3: Institutional Practices Are Culturally Responsive, Inclusive, and Consistent Across the Institution
This indicator is intended to measure the policies and practices of
institutions and STEM departments, including support programs and
co-curricular activities, that contribute to and cultivate climates that are
diverse, inclusive, and equitable. Such practices and policies include those
that aim to engage diverse learners in and outside of STEM classrooms in
order to produce more equitable outcomes. Though related to the indica-
tor above on institutional structures, policies, and practices that strengthen
levels of STEM readiness for entering college students (Indicator 2.1.1),
this indicator is distinct because it focuses on the experiences of students
enrolled in STEM academic programs, not on practices intended to increase
student readiness for such programs.
Students who are historically underrepresented in STEM fields of-
ten face unique challenges (stereotypes, stereotype threat, implicit bias)
that negatively affect their ability to enter, persist, and succeed in STEM
fields (Fries-Britt and Griffin, 2007; Godsil, 2016; Martin, 2009; McGee
and Martin, 2011). Culturally responsive pedagogies and instructional
approaches include those that are interactive and asset-based and that focus on
building student identities. These practices and approaches have been dem-
onstrated to build more equitable learning environments in some educa-
tional contexts (Gay, 2000; Ladson-Billings, 1995; Moses et al., 1989;
Nasir et al., 2014; Paris, 2012), but they have not been widely imple-
mented in STEM at the postsecondary level (Davis, Hauk, and Latiolais,
2009). For example, most mathematics classrooms use standard lecture
formats (Eagan, 2016). Though additional research is needed, a growing
evidence base suggests that developing diverse students’ sense of belong-
ing and improving student-faculty interactions are conducive to broadening
participation in STEM fields. The committee envisions that this indicator
would allow for the monitoring of data about the prevalence of culturally
responsive educational approaches used in STEM departments. Recruiting
and retaining a diverse STEM faculty is also critical to inclusive educa-
tional environments (as discussed above). Thus, the indicator would include
measures about the use of search and hiring practices that are effective in
diversifying STEM faculty (e.g., implicit bias training).

REFERENCES
American Association for the Advancement of Science. (2011). Vision and Change in Un-
dergraduate Biology Education: A Call to Action. Washington, DC: Author. Available:
http://visionandchange.org/finalreport [July 2017].
American Association for the Advancement of Science. (2015). Vision and Change in Under-
graduate Biology Education: Chronicling Change, Inspiring the Future. Washington, DC:
Author. Available: http://visionandchange.org/files/2015/07/VISchange2015_webFin.pdf
[July 2017].
American Association of Community Colleges. (2014). Datapoints: High Paying Occupations.
Washington, DC: Author. Available: http://www.aacc.nche.edu/Publications/datapoints/
Documents/HighOccupations_10.14.pdf [July 2017].
An, B.P. (2013). The impact of dual enrollment on college degree attainment: Do low-SES
students benefit? Educational Evaluation and Policy Analysis, 35(1), 57–75.


Antonio, A.L. (2002). Faculty of color reconsidered: Reassessing contributions to scholarship.
Journal of Higher Education, 73(5), 582–602.
Association of American Colleges & Universities. (2015). Committing to Equity and Inclusive
Excellence: A Campus Guide for Self-Study and Planning. Washington, DC: Author.
Available: https://www.aacu.org/publications/committing-to-equity [July 2017].
Astin, A.W., and Oseguera, L. (2004). The declining “equity” of American higher education.
The Review of Higher Education, 27(3), 321–341.
Bahr, P.R., Jackson, G., McNaughtan, J., Oster, M., and Gross, J. (2016). Unrealized potential:
Community college pathways to STEM baccalaureate degrees. The Journal of Higher
Education, 88(3), 430–478. doi:10.1080/00221546.2016.1257313.
Bailey, T., Jaggars, S.S., and Jenkins, D. (2015). What We Know About Guided Pathways.
New York: Columbia University, Teachers College, Community College Research Center.
Baker, R. (2016). The effects of structured transfer pathways in community colleges. Educa-
tional Evaluation and Policy Analysis, 38(4), 626–646. doi: 10.3102/0162373716651491.
Blankenberger, B., Lichtenberger, E., and Witt, M.A. (2017). Dual credit, college type, and
enhanced degree attainment. Educational Researcher, 46(5), 259–263.
Bowman, N.A. (2010). College diversity experiences and cognitive development: A meta-
analysis. Review of Educational Research, 80(1), 4–33.
Bowman, N.A. (2011). Promoting participation in a diverse democracy a meta-analysis of
college diversity experiences and civic engagement. Review of Educational Research,
81(1), 29–68.
Bragg, D.D. (2012). Two-year college mathematics and student progression in STEM pro-
grams of study. In S. Olson and J.B. Labov, Community Colleges in the Evolving STEM
Education Landscape: Summary of a Summit (pp. 81–105). Washington, DC: The Na-
tional Academies Press.
Chang, M.J., Eagan, M.K., Lin, M.H., and Hurtado, S. (2009). Stereotype Threat: Under-
mining the Persistence of Racial Minority Freshmen in the Sciences. Paper presented
at Annual Meeting of the American Educational Research Association, San Diego,
CA. Available: https://www.heri.ucla.edu/nih/downloads/AERA%202009%20-%20
Chang,%20Eagan,%20Lin,%20Hurtado%20-%20Stereotype%20Threat.pdf [Septem-
ber 2017].
Chen, X. (2013). STEM Attrition: College Students’ Paths Into and Out of STEM Fields
(NCES 2014-001). Washington, DC: National Center for Education Statistics, Institute
of Education Sciences, U.S. Department of Education. Available: https://nces.ed.gov/
pubs2014/2014001rev.pdf [September 2017].
Cole, D. (2007). Do interracial interactions matter? An examination of student-faculty contact
and intellectual self-concept. The Journal of Higher Education, 78(3), 249–281.
Davis, M.K., Hauk, S., and Latiolais, M.P. (2009). Culturally responsive college level math-
ematics. In B. Greer, S. Mukhopadhyay, A.B. Powell, and S. Nelson-Barber (Eds.), Cultur-
ally Responsive Mathematics Education (pp. 345–372). New York: Routledge.
Eagan, M.K. (2016). Becoming More Student-Centered? An Examination of Faculty Teaching
Practices Across STEM and Non-STEM Disciplines Between 2004 and 2014. Available:
https://sloan.org/storage/app/media/files/STEM_Higher_Ed/STEM_Faculty_Teaching_
Practices.pdf [May 2017].
Eagan, M.K., Hurtado, S., Chang, M.J., Garcia, G.A., Herrera, F.A., and Garibay, J.C. (2013).
Making a difference in science education: The impact of undergraduate research pro-
grams. American Educational Research Journal, 50(4), 683–713.


Eagan, M.K., Hurtado, S., Figueroa, T., and Hughes, B. (2014). Examining STEM Pathways
among Students Who Begin College at Four-Year Institutions. Paper prepared for the
Committee on Barriers and Opportunities in Completing 2- and 4-Year STEM Degrees.
Available: http://sites.nationalacademies.org/cs/groups/dbassesite/documents/webpage/
dbasse_088834.pdf [July 2017].
Elrod, S., and Kezar, A. (2016). Increasing student success in STEM: An overview for a new
guide to systemic institutional change. In G.C. Weaver, W.E. Burgess, A.L. Childress, and
L. Slakey (Eds.), Transforming Institutions: Undergraduate STEM Education for the 21st
Century. West Lafayette, IN: Purdue University Press.
Elrod, S., and Kezar, A. (2015). Increasing student success in STEM. Peer Review, 17(2). Avail-
able: https://www.aacu.org/peerreview/2015/spring/elrod-kezar [July 2017].
Estrada, M. (2014). Ingredients for Improving the Culture of STEM Degree Attainment with
Co-curricular Supports for Underrepresented Minority Students. Paper prepared for the
Committee on Barriers and Opportunities in Completing 2- and 4-Year STEM Degrees.
Available: http://sites.nationalacademies.org/cs/groups/dbassesite/documents/webpage/
dbasse_088832.pdf [July 2017].
Finley, A., and McNair, T. (2013). Assessing Underserved Students’ Engagement in High-
Impact Practices. Washington, DC: Association of American Colleges & Universities.
Fries-Britt, S., and Griffin, K. (2007). The Black box: How high-achieving Blacks resist stereo-
types about Black Americans. Journal of College Student Development, 48(5), 509–524.
Gasiewski, J.A., Eagan, M.K., Garcia, G.A., Hurtado, S., and Chang, M.J. (2012). From gate-
keeping to engagement: A multicontextual, mixed method study of student academic en-
gagement in introductory STEM courses. Research in Higher Education, 53(2), 229–261.
Gates, J. (1995). Equity vs. excellence: A false dichotomy in science and society. The Scientist,
9(14), 12.
Gay, G. (2000). Culturally Responsive Teaching: Theory, Practice and Research. New York:
Teachers College Press.
Gilmer, T.C. (2007). An understanding of the improved grades, retention and graduation rates
of STEM majors at the Academic Investment in Math and Science (AIMS) Program of
Bowling Green State University (BGSU). Journal of STEM Education, 8(1), 11–21.
Godsil, R.D. (2016). Why race matters in physics class. UCLA Law Review Discourse, 64, 40.
Gurin, P., Dey, E., Hurtado, S., and Gurin, G. (2002). Diversity and higher education: Theory
and impact on educational outcomes. Harvard Educational Review, 72(3), 330–367.
Howard Hughes Medical Institute. (2016). Inclusive Excellence: Engaging All Students in
Science. Washington, DC: Howard Hughes Medical Institute. Available: https://www.
hhmi.org/sites/default/files/Programs/Inclusive/Inclusive-Excellence-2018-Program-
Announcement.pdf [September 2017].
Hurtado, S. (2001). Linking diversity and educational purpose: How diversity affects the class-
room environment and student development. In G. Orfield (Ed.), Diversity Challenged:
Evidence on the Impact of Affirmative Action (pp. 187–203). Cambridge, MA: Harvard
Education Publishing Group.
Hurtado, S., Han, J.C., Sáenz, V.B., Espinosa, L.L., Cabrera, N.L., and Cerna, O.S. (2007).
Predicting transition and adjustment to college: Biomedical and behavioral science aspi-
rants’ and minority students’ first year of college. Research in Higher Education, 48(7),
841–887.
Hurtado, S., Alvarez, C. L., Guillermo-Wann, C., Cuellar, M., and Arellano, L. (2012). A
model for diverse learning environments. In Higher Education: Handbook of Theory
and Research (pp. 41–122). Netherlands: Springer.
Ibarra, R.A. (2001). Beyond Affirmative Action: Reframing the Context of Higher Education.
Madison: University of Wisconsin Press.


Ladson-Billings, G. (1995). Toward a theory of culturally relevant pedagogy. American Edu-
cational Research Journal, 32(3), 465–491.
Lenaburg, L., Aguirre, O., Goodchild, F., and Kuhn, J.U. (2012). Expanding pathways: A sum-
mer bridge program for community college STEM students. Community College Journal
of Research and Practice, 36(3), 153–168.
Malcom, S.M., Aldrich, M., Hall, P.Q., and Stern, V. (1984). Equity and Excellence: Com-
patible Goals. Washington, DC: American Association for the Advancement of Science.
Malcom-Piqueux, L.E., and Malcom, S.M. (2013). Engineering diversity: Fixing the educa-
tional system to promote equity. The Bridge, 43(1), 24–34.
Marin, P. (2000). The educational possibility of multi-racial/multi-ethnic college classrooms.
In Does Diversity Make a Difference? Three Research Studies on Diversity in College
Classrooms (pp. 61–83). Washington, DC: American Council on Education and Ameri-
can Association of University Professors.
Martin, D.B. (2009). Researching race in mathematics education. Teachers College Record,
111(2), 295–338.
McGee, E.O., and Martin, D.B. (2011). “You would not believe what I have to go through
to prove my intellectual value!” Stereotype management among academically successful
Black mathematics and engineering students. American Educational Research Journal,
48(6), 1347–1389.
Milem, J.F. (2001). Increasing diversity benefits: How campus climate and teaching methods
affect student outcomes. In G. Orfield and M. Kurlaender (Eds.), Diversity Challenged:
Evidence on the Impact of Affirmative Action (pp. 233–249). Cambridge, MA: Harvard
Education Publishing Group.
Milem, J.F. (2003). The educational benefits of diversity: Evidence from multiple sectors. In
M. Chang, D. Witt, J. Jones, and K. Hakuta (Eds.), Compelling Interest: Examining the
Evidence on Racial Dynamics in Colleges and Universities (pp. 126–169). Stanford, CA:
Stanford University Press.
Milem, J.F., Chang, M.J., and Antonio, A.L. (2005). Making Diversity Work on Cam-
pus: A Research-Based Perspective. Washington, DC: Association of American Colleges &
Universities.
Mooney, G.M., and Foley, D.J. (2011). Community Colleges: Playing an Important Role in
the Education of Science, Engineering, and Health Graduates. Arlington, VA: National
Science Foundation.
Moses, R., Kamii, M., Swap, S.M., and Howard, J. (1989). The algebra project: Organizing
in the spirit of Ella. Harvard Educational Review, 59(4), 423–444.
Nasir, N.S., Cabana, C., Shreve, B., Woodbury, E., and Louie, N. (Eds.). (2014). Mathematics
for Equity: A Framework for Successful Practice. New York: Teachers College Press.
National Academies of Sciences, Engineering, and Medicine. (2016). Barriers and Opportuni-
ties for 2-Year and 4-Year STEM Degrees: Systemic Change to Support Diverse Student
Pathways. Washington, DC: The National Academies Press. doi: 10.17226/21739.
National Academies of Sciences, Engineering, and Medicine (2017). Undergraduate Research
Experiences for STEM Students: Successes, Challenges, and Opportunities. Washington,
DC: The National Academies Press.
National Center for Education Statistics. (2016). List of 2016 digest tables. Digest of Educa-
tion Statistics, 2016. Washington, DC: U.S. Department of Education. Available: https://
nces.ed.gov/programs/digest/2016menu_tables.asp [July 2017].
National Research Council. (2012). Discipline-Based Education Research: Understanding and
Improving Learning in Undergraduate Science and Engineering. Washington, DC: The
National Academies Press.


National Research Council. (2013). Monitoring Progress Toward Successful K-12 STEM
Education: A Nation Advancing? Washington, DC: The National Academies Press. doi:
10.17226/13509.
National Science Foundation. (2016). Science and Engineering Indicators 2016. Arlington, VA:
Author. Available: https://www.nsf.gov/statistics/2016/nsb20161/#/ [July 2017].
National Science Foundation. (2017). Women, Minorities, and Persons with Disabilities in
Science and Engineering: 2017 (NSF 17-310). Arlington, VA: Author. Available: www.
nsf.gov/statistics/wmpd [July 2017].
National Survey of Student Engagement. (2016). NSSE 2016 High-Impact Practices: U.S.
Summary Percentages by Student Characteristics. Bloomington, IN: National Survey of
Student Engagement. Available: http://nsse.indiana.edu/2016_institutional_report/pdf/
HIPTables/HIP.pdf [July 2017].
Packard, B.W. (2016). Successful STEM Mentoring Initiative for Underrepresented Students:
A Research-Based Guide for Faculty and Administrators. Sterling, VA: Stylus.
Paris, D. (2012). Culturally sustaining pedagogy: A needed change in stance, terminology, and
practice. Educational Researcher, 41(3), 93–97.
Ponjuan, L. (2006). A national study of job satisfaction of faculty of color in doctoral institu-
tions. Journal of the Professoriate, 1(1), 45–70.
President’s Council of Advisors on Science and Technology. (2012). Engage to Excel: Produc-
ing One Million Additional College Graduates with Degrees in Science, Technology,
Engineering, and Mathematics. Washington, DC: Author. Available: https://obamawhite
house.archives.gov/sites/default/files/microsites/ostp/pcast-engage-to-excel-final_2-25-12.
pdf [July 2017].
Rosser, V.J. (2004). Faculty members’ intentions to leave: A national study on their work life
and satisfaction. Research in Higher Education, 45(3), 285–309.
Rutschow, E.Z., and Diamond, J. (2015). Laying the Foundations: Early Findings from the
New Math Ways Project. New York: MDRC.
Salzman, H., and Van Noy, M. (2014). Crossing the Boundaries: STEM Students in Four-Year
and Community Colleges. Paper prepared for the Committee on Barriers and Opportuni-
ties in Completing 2- and 4-Year STEM Degrees. Available: http://sites.nationalacademies.
org/cs/groups/dbassesite/documents/webpage/dbasse_089924.pdf [July 2017].
Sanderson, A., Phua, V.C., and Herda, D. (2000). The American Faculty Poll. Chicago, IL:
National Opinion Research Center.
Smith, D.G. (2015). Diversity’s Promise for Higher Education: Making It Work. Baltimore,
MD: Johns Hopkins University Press.
Speroni, C. (2011). Determinants of Students’ Success: The Role of Advanced Placement and
Dual Enrollment Programs (NCPR Working Paper). New York: National Center for
Postsecondary Research.
Turner, C.S.V., and González, J.C. (2011). Faculty women of color: The critical nexus of race
and gender. Journal of Diversity in Higher Education, 4(4), 199–211.
Umbach, P.D. (2006). The contribution of faculty of color to undergraduate education. Re-
search in Higher Education, 47(3), 317–345.
Van Noy, M., and Zeidenberg, M. (2014). Hidden STEM Knowledge Producers: Community
Colleges’ Multiple Contributions to STEM Education and Workforce Development.
Paper prepared for the Committee on Barriers and Opportunities in Completing 2- and
4-Year STEM Degrees. Available: http://sites.nationalacademies.org/cs/groups/dbassesite/
documents/webpage/dbasse_088831.pdf [July 2017].
Wang, X. (2013). Modeling entrance into STEM fields of study among students beginning
at community colleges and four-year institutions. Research in Higher Education, 54(6),
664–692.


Wang, X. (2015). Pathway to a baccalaureate in STEM fields: Are community colleges a vi-
able route and does early STEM momentum matter? Educational Evaluation and Policy
Analysis, 37(3), 376–393. doi: 10.3102/0162373714552561.
Williams, D.A., Berger, J.B., and McClendon, S.A. (2005). Toward a Model of Inclusive
Excellence and Change in Postsecondary Institutions. Washington, DC: Association of
American Colleges & Universities.
Witham, K., Malcom-Piqueux, L.E., Dowd, A.C., and Bensimon, E.M. (2015). America’s
Unmet Promise: The Imperative for Equity in Higher Education. Washington, DC: As-
sociation of American Colleges & Universities.


Goal 3: Ensure Adequate Numbers of STEM Professionals

The committee recognizes that it is not possible to specify a target “ad-
equate number” of STEM professionals, given the varying demands
across the different STEM disciplines and types of occupations now
and in the future. However, the committee is nevertheless committed to
ensuring that the nation has a robust, highly talented STEM workforce.
Advancing Goal 3 will require progress toward Goal 1 (increasing
students’ mastery of STEM concepts and skills through engagement in
evidence-based STEM educational practices and programs) and Goal 2
(striving for equity, diversity, and inclusion). Progress toward Goals 1 and
2 will increase the numbers of students entering and persisting in STEM
fields and ultimately earning STEM credentials. However, achieving Goal 3
would not necessarily require that all of these increased numbers of STEM
graduates enter STEM professions. Rather, graduates would apply their
STEM knowledge, skills, and ways of thinking at work in diverse STEM
and non-STEM occupations and through civic participation, helping to
address the grand challenges facing society (see “Vision” in Chapter 1).
As a first step toward developing indicators of progress toward this
goal, the committee identified three specific objectives for advancing the
goal:

• Objective 3.1: Foundational preparation for STEM for all students
• Objective 3.2: Successful navigation into and through STEM programs of study
• Objective 3.3: STEM credential attainment


TABLE 5-1  Objectives and Indicators of Adequate Supply of STEM Professionals

Objective 3.1: Foundational preparation for STEM for all students
   Indicator 3.1.1: Completion of foundational courses, including developmental
   education courses to ensure STEM program readiness

Objective 3.2: Successful navigation into and through STEM programs of study
   Indicator 3.2.1: Retention in STEM programs, course to course and year to year
   Indicator 3.2.2: Transfers from 2-year to 4-year STEM programs in comparison
   with transfers to all 4-year programs

Objective 3.3: STEM credential attainment
   Indicator 3.3.1: Number of students who attain STEM credentials over time,
   disaggregated by institution type, transfer status, and demographic characteristics

The following sections of this chapter focus on these three objectives.
Each section describes the objective and summarizes research demonstrat-
ing its importance for improving the quality and impact of undergraduate
STEM education. It then proposes indicators to monitor progress toward
the objective, discusses the availability of data for these indicators, and
identifies the additional research needed to fully develop the indicators: see
Table 5-1.

OBJECTIVE 3.1: FOUNDATIONAL PREPARATION FOR STEM FOR ALL STUDENTS

Importance of the Objective


A broad set of skills and knowledge, often acquired through general ed-
ucation, is required to succeed in STEM classrooms. Depending on incom-
ing students’ high school preparation and the type of degree or certificate
they seek, this foundational knowledge may be acquired through a complex
array of developmental coursework, along with introductory college-level
coursework. This foundational preparation supports science literacy and
STEM credential completion by developing introductory college-level pro-
ficiencies in mathematics, English language and communication, and digital
fluency and computational thinking.
Although some students are prepared for the rigors of the STEM class-
room, others enter college lacking foundational preparation. Many of these
students will spend a considerable amount of time catching up by taking
developmental education courses to prepare themselves for college-level
coursework (Bailey, Jeong, and Cho, 2010; Jenkins et al., 2009).

Developmental English and Mathematics


Many entering undergraduates must complete noncredit developmental
courses in English and mathematics before they can enroll in college-level
STEM courses. A national survey conducted by the National Center for
Education Statistics, which included all types of 2-year and 4-year institu-
tions, found that 20.4 percent of first-year undergraduates reported tak-
ing developmental courses in mathematics, English, or both (Sparks and
Malkus, 2013). The highest concentration of developmental courses was
reported by students at open admissions institutions; the lowest concentra-
tion was reported by students at highly selective institutions. Participation
also varied by race: larger percentages of Black and Hispanic students re-
ported enrollment in developmental courses than did white students (Sparks
and Malkus, 2013). In another study, based on an analysis of students’
transcript data, developmental education participation rates were estimated
at 50 percent for first-time postsecondary students: they varied depending
on the selectivity of the institution the students first attended, with devel-
opmental course-taking being lowest at highly selective institutions (22%)
and highest at public 2-year institutions (68%) (Radford and Horn, 2012).
The latter estimate is similar to Bailey’s (2009) estimate that 60 percent of
all 2-year college students enroll in at least one developmental course.
The high participation in developmental English reflects the changing
demographics of the national population. In school year 2013–2014, nearly
5 million K–12 students (10 percent of total enrollment) were classified as
English learners, and they are the fastest-growing group in K–12 education
(Bailey and Carroll, 2015). Some of these students will enter higher educa-
tion with weak English reading and writing skills. Together with growing
numbers of international students, as well as some native English speakers,
they will need to successfully complete developmental English before enroll-
ing in introductory college-level STEM courses.
English proficiency is an important foundational skill for success in
undergraduate STEM. Research has shown that K–12 students’ mastery of
STEM concepts and skills and their performance on tests of STEM con-
tent are related to their levels of language proficiency (Bailey and Carroll,
2015), and similar findings are beginning to emerge in research at the
undergraduate level. For example, in looking at course performance in
introductory chemistry, Pyburn and colleagues (2013) found that language
comprehension contributed to final grades comparably with mathematics
ability and prior chemistry knowledge. Moreover, Parker, Adedokun, and
Weaver (2016) found that international students’ limited English language
and social skills posed a barrier to instructors’ ability to engage them in
evidence-based collaborative learning experiences.

Digital Fluency and Computational Thinking


Competence in using computers to solve problems is essential for every-
one in an increasingly digital world and is increasingly recognized as a key
proficiency for undergraduate success (e.g., Vaz, 2004). As early as 1999,
an expert committee proposed that all college graduates should develop
information technology (IT) fluency (National Research Council, 1999).
That committee proposed that, in contrast to more basic computer literacy,
IT fluency requires three kinds of knowledge: contemporary skills (to use
current technology); foundational concepts (basic principles of computing,
networks, and information systems); and intellectual abilities (to apply IT
to complex situations and apply higher-level thinking). Although proposed
as goals for college graduates, these levels of proficiency are likely impor-
tant foundations for success throughout undergraduate STEM coursework,
which increasingly requires students to use technology to gather and ana-
lyze data and solve problems. More recently, researchers have explored how
to tap the digital fluency that some students possess to improve the quality
of writing in first-year disciplinary courses for nonmajors (November and
Day, 2012).
Most Americans ages 16 to 24 have limited ability to use computers
to solve complex problems relative to their peers in other nations (OECD,
2013). To address this challenge, researchers and faculty practitioners are
developing and testing instructional approaches to develop IT fluency and
computational thinking (e.g., Miller and Settle, 2011; Sardone, 2011).
Foundational courses based on the findings from this research will be es-
sential for undergraduate success in STEM.

Strengthening and Monitoring Developmental Education


Ongoing research on developmental education promises to improve
its effectiveness for supporting students’ success in college-level courses, in
STEM and in other fields. For example, a U.S. Department of Education
expert panel (Bailey et al., 2016) recently reviewed the available literature
and recommended six steps: (1) use multiple measures for placing students
into developmental classes; (2) require or incentivize students to participate
in enhanced advising; (3) offer performance-based financial incentives; (4)
redesign developmental courses by compressing the material or integrat-
ing it with other content in college-level offerings; (5) teach students self-
regulated learning skills; and (6) provide comprehensive, integrated student
support programs.
Taking such steps can begin to address an important problem identi-
fied in several studies: Many students placed into traditional developmental
mathematics classes make little progress toward success in college-level
mathematics (Bailey, Jeong, and Cho, 2010; Logue, Watanabe-Rose, and
Douglas, 2016; Valentine, Konstatopoulos, and Goldrick-Rab, 2017). In
one promising approach to address this problem, students who enrolled
in a redesigned, compressed developmental mathematics course as the first
semester in a year-long quantitative reasoning sequence were significantly
more likely to meet developmental mathematics requirements than a com-
parison group of matched students (Yamada, Bohannen, and Grunow,
2016). However, the evidence base on these recent innovations is small,
and further research is needed to strengthen developmental education and
optimize student placement and supports (Bailey et al., 2016).
While research is ongoing, many students continue to lack founda-
tional skills in mathematics, reading, and writing and continue to be
placed into developmental courses as prerequisites for entering STEM pro-
grams. The most recent data available, for 2011/2012, show that one-third
(32.3%) of first-year students had enrolled in at least one developmental
education course. Within this average, the rate of developmental course-
taking varied by type of institution, from a high of 40 percent at public
2-year institutions to a low of 15 percent at private, nonprofit doctoral
granting institutions (U.S. Department of Education, 2014). Given this
broad scope, it is important to monitor students’ progress through devel-
opmental education through the indicator proposed below.

Proposed Indicator

Indicator 3.1.1: Completion of Foundational Courses, Including Developmental Education Courses, to Ensure STEM Program Readiness
This indicator is designed to illuminate the extent to which students
are making progress through and completing foundational coursework.
This foundational coursework will prepare students for success in STEM
programs of study or develop general STEM knowledge and skills (some-
times referred to as STEM literacy—see Chapter 1) that may be valuable in
whatever major program they choose and, after graduation, in their careers,
home lives, and civic participation.
The proposed indicator would follow students who enter 2-year and 4-
year institutions as they complete coursework in mathematics, English
language and communication, and digital fluency. It would track students’
progress through developmental coursework in these subjects and their
subsequent entry into and completion of the corresponding college-level
foundational coursework. This indicator could be measured by tracking
the fraction of students successfully completing developmental course(s) or
foundational courses in comparison with students who attempted to com-
plete these courses (i.e., the pass rate in developmental through college-level
introductory English and in developmental mathematics through college-
level calculus). It would be disaggregated by institution type, gender, race
and ethnicity, disability status, socioeconomic status, and first-generation
status.
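A minimal sketch of the pass-rate calculation described above appears below, assuming hypothetical counts of students who attempted and completed the relevant course sequence, disaggregated by institution type; the categories and counts are illustrative only and do not come from this report.

# Hypothetical sketch: completion (pass) rate for developmental-through-
# introductory coursework, computed as completers / attempters and
# disaggregated by institution type. Counts are illustrative only.

attempted = {"public 2-year": 1200, "public 4-year": 800, "private nonprofit 4-year": 400}
completed = {"public 2-year": 720, "public 4-year": 600, "private nonprofit 4-year": 330}

for institution_type, n_attempted in attempted.items():
    pass_rate = completed[institution_type] / n_attempted
    print(f"{institution_type}: {pass_rate:.1%} of attempters completed")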
In the future, as research provides better understanding of the compe-
tencies needed for students to succeed in STEM, this indicator could also be
expanded to include completion of key preparatory courses in the sciences.
Colleges increasingly require students to complete such preparatory courses
as introductory computer science, introductory chemistry, and introductory
biology, especially for students whose K–12 exposure to these subjects is
either limited or occurred many years before college entry. Florida Inter-
national University, for example, offers a 2-credit-hour fundamentals of
chemistry course designed to develop scientific and computational skills
in preparation for college-level chemistry. A recent analysis found that
students who completed this course in their first semester showed good to
better-than-average performance in chemistry I the following semester, de-
spite initially lower scores on the mathematics placement test (Association
of Public and Land-Grant Universities, 2017).

OBJECTIVE 3.2: SUCCESSFUL NAVIGATION INTO AND THROUGH STEM PROGRAMS OF STUDY

Importance of the Objective


Students take a variety of paths to completing a STEM program, often
transferring between institutions, stopping for a period, and switching into
or out of STEM majors (National Academies of Sciences, Engineering, and
Medicine, 2016). They pursue a range of different STEM credentials, in-
cluding degrees and certificates, at different types of 2-year and 4-year insti-
tutions (e.g., research university, liberal arts college, nonprofit or for-profit
2-year college). Given this variety of pathways, it is important to provide
students with clear guidance on program requirements and to remove as
many barriers as possible to continuing in STEM (for those already in the
field) or switching into a STEM program. When students are offered too
many choices without adequate guidance, they may enroll in a wide variety
of courses, accumulating credits without progressing toward a credential
(Scott-Clayton, 2011).
Studies of student pathways in 2-year institutions suggest that those
institutions can best facilitate student success by redesigning curriculum, in-
struction, and student supports around coherent programs of study (Bailey,
Jaggars, and Jenkins, 2015). A growing body of research suggests that this
approach, “guided pathways,” improves retention and completion of cre-
dentials (Grant and Dweck, 2003; Jenkins and Weiss, 2011). Based on these
findings, Bailey, Jaggars, and Jenkins (2015) suggest that 2-year institutions
proactively assign new students to a program of study, based on individual
counseling about student goals, interests, and aptitudes. Guided pathways
may be especially important in STEM programs, which typically require
specific course sequences for 2-year and 4-year degrees.
Successful navigation into and through STEM programs of study will
also require improvement in introductory courses in STEM disciplines.
Many students who intend to major in STEM later switch to a non-STEM
course of study or leave higher education (e.g., Chen, 2009), and this at-
trition happens most frequently during the time when students are taking
introductory STEM courses. Students may decide to switch to another ma-
jor because they discover that their intended STEM discipline is irrelevant
to their interests, as a natural part of early college exploration. However,
a growing body of research suggests that the way introductory courses are
taught is a significant factor that discourages students from continuing in
STEM (National Academies of Sciences, Engineering, and Medicine, 2016).
Traditionally, some faculty members view introductory courses as an oppor-
tunity to “weed out” students they perceive as not capable of completing
a STEM degree, and so design their courses for that function. Yet studies
have shown that many capable students left STEM majors because they
found those courses dull and unwelcoming (Seymour and Hewitt, 1997;
Tobias, 1990). In addition, some instructors and departments grade on a
curve rather than on students' actual content knowledge, which can discourage
students from continuing in STEM. Researchers have found that students
may be discouraged from continuing in STEM majors because they receive
higher grades in courses outside of STEM (Ost, 2010; Rask, 2010; Seymour
and Hewitt, 1997; Stinebrickner and Stinebrickner, 2013).
Many studies have shown that students’ negative experiences in intro-
ductory courses reduce the likelihood of completing a STEM major (Astin
and Astin, 1992; Barr, Gonzalez, and Wanat, 2008; Crisp, Nora, and
Taggart, 2009; Eagan et al., 2011; Mervis, 2010; Seymour, 2001; Seymour
and Hewitt, 1997; Thompson et al., 2007). For example, Barr, Gonzalez,
and Wanat (2008) found that negative experiences early in introductory
chemistry courses were a critical factor in minority students’ waning inter-
est in premedical studies.
In another example, the introductory calculus sequence that is gener-
ally required for a 4-year STEM degree can be a barrier to completing the
degree. A recent survey of more than 14,000 introductory calculus students across a representative sample of 2-year and 4-year institutions found that students' confidence in their mathematical abilities and enjoyment of
mathematics declined from the beginning to the end of the term (Bressoud,
Mesa, and Rasmussen, 2015). In a further analysis of the survey data, Ellis,
Fosdick, and Rasmussen (2016) found that women started and ended the
term with significantly lower mathematical confidence than men, and wom-
en’s likelihood of not continuing to calculus II was 1.5 times higher than
that for men. This finding suggests that lack of mathematical confidence,
rather than lack of mathematical ability, may be responsible for the high
departure rate of women. By choosing not to continue in calculus, women
are leaving the pathway to a 4-year STEM degree, adding to the workforce
gender gap in STEM fields, such as engineering and computer science. The
authors estimated that if women persisted in STEM at the same rate as men
starting in calculus I, the number of women entering the STEM workforce
would increase by 75 percent (Ellis, Fosdick, and Rasmussen, 2016).
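To make the arithmetic behind such an estimate concrete, the sketch below works through a purely hypothetical cohort. The leaving rates and cohort size are illustrative assumptions, not figures reported by Ellis, Fosdick, and Rasmussen (2016), and their published 75 percent estimate reflects cumulative attrition across the full pathway rather than the single stage shown here.

```python
# Illustrative arithmetic only: the cohort size and leaving rates below are
# assumptions, not values reported by Ellis, Fosdick, and Rasmussen (2016).
women_in_calc1 = 1000
leave_rate_men = 0.20                    # assumed share of men leaving STEM after calculus I
leave_rate_women = 1.5 * leave_rate_men  # women assumed 1.5 times as likely to leave

women_continuing = women_in_calc1 * (1 - leave_rate_women)
women_if_equal_persistence = women_in_calc1 * (1 - leave_rate_men)

increase = (women_if_equal_persistence - women_continuing) / women_continuing
print(f"{women_continuing:.0f} women continue past calculus I; "
      f"{women_if_equal_persistence:.0f} would continue under equal persistence "
      f"(a {increase:.0%} increase at this single stage)")
```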
The findings of Ellis, Fosdick, and Rasmussen (2016) echo Correll’s
(2001) seminal study of high school students, which found that cultural
beliefs about gender negatively bias women’s self-assessments of their math-
ematics competence: men were more likely than women with the same
mathematics grades and test scores to perceive that they were mathemati-
cally competent. For both genders, higher self-assessments of mathematical
competence were associated with a higher likelihood of enrolling in high
school calculus and selecting a quantitative college major (e.g., STEM).
Women’s lower perceptions of their competence relative to men were associ-
ated with a lower likelihood of enrolling in calculus and a lower likelihood
of selecting a quantitative college major.
As discussed in Chapter 3, research has begun to illuminate new,
evidence-based approaches that are being applied to redesign and improve
these gateway courses. To address the problems in introductory calculus,
for example, Bressoud, Mesa, and Rasmussen (2015) recommend strategies
to improve calculus teaching and learning, including “ambitious” teaching,
new curricula, student supports, and training of graduate instructors. At
the same time, five undergraduate mathematics associations recently re-
leased a common vision for improving courses and programs that calls for
scaling up evidence-based teaching approaches (Saxe and Braddy, 2016).
These developments reinforce the importance of engaging more students in
evidence-based STEM educational practices (see Chapter 3) as a critical step
toward increasing the numbers of students who earn STEM credentials.
Helping more students successfully navigate into and through STEM
programs also requires establishing articulation programs to smooth trans-
fer pathways between 2-year and 4-year STEM programs (National Acad-
emies of Sciences, Engineering, and Medicine, 2016). When faculty and
administrators at 4-year institutions do not clearly communicate with their counterparts at 2-year institutions about the expectations and requirements for a 4-year STEM degree, 2-year students may not select the courses neces-
sary to transfer. Guided pathways may be especially important for student
success in STEM fields, which typically require college-level mathematics
and specific course sequences. Completion of STEM coursework in the first
2 years of college is related to persistence in STEM (Bettinger, 2010) and
may also ensure that students who transfer from 2-year to 4-year institu-
tions or between 4-year institutions can still complete their degrees in a
timely fashion.
Reducing the barriers posed by rigid course sequences could also help
more students successfully navigate into and through STEM majors. For
example, students are less likely to migrate into an engineering field than to
other STEM fields. Measured at the eighth semester, only 7 percent of engi-
neering students had migrated into this field, compared with 30–60 percent
in other STEM fields (Eagan et al., 2014; Ohland et al., 2008). The engi-
neering accrediting agency specifies that engineering programs must provide
1 year of a combination of mathematics and basic sciences (Accreditation
Board for Engineering and Technology, Inc., 2016), and students typically
spend most of their first year in these prerequisite courses. A second-year
or third-year student who is interested in engineering may be discouraged
from migrating into the field by the prospect of having to complete these
prerequisite courses. Although efforts to redesign the entry-level curriculum
to increase student interest and attract more students to engineering majors
have been under way for more than two decades (Director et al., 1995),
they have not yet encouraged many students to switch into engineering
from another field.

Proposed Indicators

Indicator 3.2.1: Retention in STEM Degree or Certificate Programs, Course to Course and Year to Year
To measure progress toward successful navigation into and through
STEM programs, this proposed indicator would measure the extent to
which students are making timely progress toward completing STEM cre-
dentials. This indicator is designed to follow the progression of traditional
and nontraditional students through the many pathways they can take to
pursue STEM credentials at 2-year and 4-year institutions, including trans-
ferring across institutions and taking courses from multiple institutions at
the same time. It would be disaggregated by students’ demographic char-
acteristics (gender, race and ethnicity, socioeconomic status, and disability
status).
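As a concrete illustration, the sketch below computes one slice of such an indicator (year-to-year retention, disaggregated by a single demographic field) from hypothetical student unit records. The record layout and field names are assumptions made for illustration only; as discussed later in this report, no single national data source currently provides records of this kind.

```python
from collections import defaultdict

# Hypothetical student unit records; the field names and values are
# illustrative assumptions, not drawn from any existing data system.
records = [
    {"id": 1, "gender": "female", "stem_year1": True, "stem_year2": True},
    {"id": 2, "gender": "male",   "stem_year1": True, "stem_year2": False},
    {"id": 3, "gender": "female", "stem_year1": True, "stem_year2": True},
]

def year_to_year_retention(records, group_field):
    """Share of first-year STEM students still in a STEM program in year 2,
    disaggregated by one demographic field."""
    totals, retained = defaultdict(int), defaultdict(int)
    for r in records:
        if r["stem_year1"]:
            totals[r[group_field]] += 1
            retained[r[group_field]] += int(r["stem_year2"])
    return {group: retained[group] / totals[group] for group in totals}

print(year_to_year_retention(records, "gender"))
```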

Indicator 3.2.2: Transfers from 2-Year to 4-Year STEM Programs in Comparison with Transfers to All 4-Year Programs
This proposed indicator would provide information on the proportion
of all transfer students who enter 4-year STEM programs of study, primar-
ily focusing on transfers between 2-year and 4-year degree programs. It
would measure the percentage of transfer students (disaggregated by institu-
tion type) who enter 4-year degree programs in STEM fields in comparison
with the percentage of all transfer students (disaggregated by institution
type) who enter 4-year degree programs, across all fields of study. This
indicator would need to be disaggregated by race and ethnicity, gender,
socioeconomic status, disability status, and first-generation status. With this
measure, it will also be important to compare the extent to which transfer
students enter at full third-year status (i.e., have the appropriate lower-division coursework) relative to the third-year status of students who began
their degrees at 4-year institutions.
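The core comparison behind this indicator can be sketched as follows; the transfer records and field names are hypothetical, and an actual implementation would also disaggregate by sending-institution type and the demographic categories listed above.

```python
# Hypothetical records of students who transferred into 4-year degree programs.
transfers = [
    {"id": 1, "from_institution": "public 2-year",  "major_field": "engineering"},
    {"id": 2, "from_institution": "public 2-year",  "major_field": "business"},
    {"id": 3, "from_institution": "private 2-year", "major_field": "biology"},
    {"id": 4, "from_institution": "public 2-year",  "major_field": "history"},
]
# Assumed (incomplete) list of STEM fields, for illustration only.
STEM_FIELDS = {"engineering", "biology", "chemistry", "mathematics", "computer science"}

stem_share = sum(t["major_field"] in STEM_FIELDS for t in transfers) / len(transfers)
print(f"Share of transfer students entering 4-year STEM programs: {stem_share:.0%}")
```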
Comparing the credit accumulation of transfer students with that of
“native” junior students is an important component of this indicator be-
cause research indicates that loss of credit is a major barrier to success
for transfer students (National Academies of Sciences, Engineering, and
Medicine, 2016). Overall, 25 percent of 2-year students transfer to 4-year
institutions, and, among these, 62 percent successfully complete a bach-
elor’s degree (Jenkins and Fink, 2015).
The importance of credit accumulation is illustrated by Monaghan and
Attewell’s (2015) analysis of data from a nationally representative sample
of students. The authors found that less than 60 percent of incoming trans-
fer students were able to transfer most of their credits, and about 15 percent
transferred almost no credits. Those who were able to transfer most of their
credits were 2.5 times more likely to earn a 4-year degree than those who
were able to transfer less than half of their credits. Comparing various fac-
tors that might deter transfer students from completing a 4-year degree,
the authors found that loss of credits when they transfer was strongly
related to not completing a degree. Competing explanations—including
lowered academic expectations based on attending a 2-year college, the
vocational focus of some 2-year college programs, and the potentially lower
rigor of 2-year colleges—were unrelated to failure to complete a 4-year
degree (Monaghan and Attewell, 2015).
Although some early studies found that transfer students experienced
“transfer shock” in the form of lower grade point averages after entering a
4-year institution (e.g., Hills, 1965), more recent and rigorous studies find
that this effect does not seem to persist (e.g., Carlan and Byxbe, 2000) and
that transfer students are as likely to graduate as those who are “native” to
the 4-year institution (e.g., Glass and Harrington, 2002; Melguizo, Kienzl, and Alfonso, 2011). For example, Bowen, Chingos, and McPherson (2009)
found that 2-year students who transferred to a public flagship university
were as likely to graduate as those who started there, and those who trans-
ferred to less selective public 4-year institutions had a greater chance of
graduating than native students.
Different types of 4-year institutions vary in their acceptance of transfer
credits, with public institutions accepting the most credits. Simone (2014)
found that transfer students who entered private nonprofit institutions
transferred 21 percent fewer credits than those who entered public institu-
tions, and those who entered private for-profit institutions transferred 52
percent fewer credits. This variation, in turn, influences completion rates:
transfer students’ completion of bachelor’s degrees is highest (65%) at
public institutions, followed by private nonprofit institutions (60%), and
lowest at private for-profit institutions (35%).

OBJECTIVE 3.3: STEM CREDENTIAL ATTAINMENT

Importance of the Objective


As discussed in Chapter 1, attainment of STEM credentials is impor-
tant to both individuals and the nation. Because scientists, engineers, and
other STEM professionals play a critical role in innovation and national
economic growth (Xie and Killewald, 2012), the President’s Council of
Advisors on Science and Technology (2012) recommended that institutions
work to retain more students in STEM to complete 4-year degrees. Attain-
ment of STEM credentials is also valuable for individuals. Relative to the
general U.S. workforce, people with a STEM credential at any level (cer-
tificate, 2-year degree, or 4-year degree) enjoy a wage premium (National
Science Foundation, 2015).

Proposed Indicator

Indicator 3.3.1: Percentage of Students Who Attain STEM Credentials over Time, Disaggregated by Institution Type, Transfer Status, and Demographic Characteristics
This proposed indicator would measure the completion of STEM cre-
dentials (degrees and certificates) overall in comparison with credentials
earned across all fields, disaggregated by institution type and demographic characteristics (including gender, race and ethnicity, socioeconomic status, and disability status). In addition, to provide information on the outcomes for
students who transfer into 4-year STEM programs, the indicator would be
disaggregated by students’ transfer status. It would follow the percentage of

Copyright National Academy of Sciences. All rights reserved.


Indicators for Monitoring Undergraduate STEM Education

122 INDICATORS FOR MONITORING UNDERGRADUATE STEM EDUCATION

STEM credentials relative to credentials across all fields, measured by time


to degree at 100 percent (2 or 4 years), 150 percent, and 200 percent. This
indicator would include credentials earned by students who pursue degrees
at 2-year and 4-year institutions and those who transfer among institutions
or take courses from multiple institutions to complete their degrees.
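A minimal sketch of the time-to-degree calculation follows. The completion times and the 4-year normal time are illustrative assumptions; a full implementation would also cover certificates and 2-year degrees and would disaggregate by the characteristics listed above.

```python
# Illustrative only: hypothetical years to a STEM bachelor's degree for an
# entering cohort; None means no STEM credential was earned in the window.
years_to_stem_degree = [4, 4, 5, 6, None, 7, 4, None, 5, 8]
NORMAL_TIME = 4  # assumed normal time, in years, for a bachelor's degree

def completion_rate(times, multiple):
    """Share of the cohort completing within `multiple` times normal time."""
    cutoff = NORMAL_TIME * multiple
    completed = sum(1 for t in times if t is not None and t <= cutoff)
    return completed / len(times)

for multiple in (1.0, 1.5, 2.0):
    print(f"{multiple * 100:.0f}% of normal time: "
          f"{completion_rate(years_to_stem_degree, multiple):.0%}")
```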

REFERENCES
Accreditation Board for Engineering and Technology, Inc. (2016). Criteria for Accrediting En-
gineering Programs. Baltimore, MD: Author. Available: http://www.abet.org/wp-content/
uploads/2015/10/E001-16-17-EAC-Criteria-10-20-15.pdf [July 2016].
Association of Public and Land-grant Universities. (2017). Getting FIU Students Chem-
istry Ready. Available: http://www.aplu.org/projects-and-initiatives/accountability-and-
transparency/using-data-to-increase-student-success/APLU_WhitePaper_Florida_C.pdf
[September 2017].
Astin, A.W., and Astin, H.S. (1992). Undergraduate Science Education: The Impact of Dif-
ferent College Environments on the Educational Pipeline in the Sciences. Final Report.
Washington, DC: National Science Foundation.
Bailey, A.L., and Carroll, P.E. (2015). Assessment of English language learners in era of new
academic content standards. Review of Research in Education, 39(1), 253–294.
Bailey, T. (2009). Rethinking developmental education in community college. Community
College Research Center Brief, 40. Available: http://ccrc.tc.columbia.edu/media/k2/
attachments/rethinking-developmental-education-in-community-college-brief.pdf [July
2017].
Bailey, T., Jeong, D.W., and Cho, S.-W. (2010). Referral, enrollment, and completion in devel-
opmental education sequences in community colleges. Economics of Education Review,
29(2), 255–270.
Bailey, T., Bashford, J., Boatman, A., Squires, J., Weiss, M., Doyle, W., Valentine, J.C., LaSota,
R., Polanin, J.R., Spinney, E., Wilson, W., Yeide, M., and Young, S.H. (2016). Strategies
for Postsecondary Students in Developmental Education: A Practice Guide for College
and University Administrators, Advisors, and Faculty. Washington, DC: Institute of
Education Sciences, What Works Clearinghouse.
Bailey, T.R., Jaggars, S.S., and Jenkins, D. (2015). Redesigning America’s Community Colleges:
A Clearer Path to Student Success. Cambridge, MA: Harvard University Press.
Barr, D.A., Gonzalez, M.E., and Wanat, S.F. (2008). The leaky pipeline: Factors associated
with early decline in interest in premedical studies among underrepresented minority
undergraduate students. Academic Medicine, 83(5), 503–511.
Bettinger, E. (2010). To be or not to be: Major choices in budding scientists. In C.T. Clotfelter
(Ed.), American Universities in a Global Market (pp. 69–98). Chicago, IL: University of
Chicago Press.
Bowen, W.G., Chingos, M.M., and McPherson, M.S. (2009). Crossing the Finish Line: Com-
pleting College at America’s Public Universities. Princeton, NJ: Princeton University
Press.
Bressoud, D., Mesa, V., and Rasmussen, C. (2015). Insights and Recommendations from
the MAA National Study of College Calculus. Washington, DC: Mathematical Associa-
tion of America Press. Available: http://www.maa.org/sites/default/files/pdf/cspcc/Insights
andRecommendations.pdf [April 2016].
Carlan, P.E., and Byxbe, F.R. (2000). Community colleges under the microscope: An analysis
of performance predictors for native and transfer students. Community College Review,
28(2), 27–42.

Chen, X. (2009). Students Who Study Science, Technology, Engineering, and Mathematics
(STEM) in Postsecondary Education. (NCES 2009-161). Washington, DC: U.S. Depart-
ment of Education, National Center for Education Statistics. Available: https://nces.
ed.gov/pubs2009/2009161.pdf [July 2017].
Correll, S.J. (2001). Gender and the career choice process: The role of biased self-assessments.
American Journal of Sociology, 106 (6), 1691–1730.
Crisp, G., Nora, A., and Taggart, A. (2009). Student characteristics, pre-college, college, and
environmental factors as predictors of majoring in and earning a STEM degree: An analy-
sis of students attending a Hispanic serving institution. American Educational Research
Journal, 46(4), 924–942.
Director, S.W., Khosla, P.K., Rohrer, R.A., and Rutenbar, R.A. (1995). Reengineering the cur-
riculum: Design and analysis of a new undergraduate electrical and computer engineer-
ing degree at Carnegie Mellon University. Proceedings of the Institute of Electrical and
Electronics Engineers, 83(9), 1246–1269.
Eagan, K., Herrera, F.A., Garibay, J.C., Hurtado, S., and Chang, M. (2011). Becoming STEM
Protégés: Factors Predicting the Access and Development of Meaningful Faculty–Student
Relationships. Presented at Association for Institutional Research Annual Forum, To-
ronto, Ontario, May 24.
Eagan, K., Hurtado, S, Figueroa, T., and Hughes, B. (2014). Examining STEM Pathways
among Students Who Begin College at Four-Year Institutions. Paper prepared for the
Committee on Barriers and Opportunities in Completing 2- and 4-Year STEM Degrees.
Available: http://sites.nationalacademies.org/cs/groups/dbassesite/documents/webpage/
dbasse_088834.pdf [July 2017].
Ellis, J., Fosdick, B.K., and Rasmussen, C. (2016). Women 1.5 times more likely to leave STEM
pipeline after calculus compared to men: Lack of mathematical confidence a potential cul-
prit. PLOS ONE, 11(7). Available: http://journals.plos.org/plosone/article?id=10.1371/
journal.pone.0157447 [May 2016].
Glass, J.C., and Harrington, A.R. (2002). Academic performance of community college trans-
fer students and “native” students at a large state university. Journal of Research and
Practice, 26, 415–430.
Grant, H., and Dweck, C.S. (2003). Clarifying achievement goals and their impact. Journal
of Personality and Social Psychology, 85(3), 541–553.
Hills, J. (1965). Transfer shock: The academic performance of the transfer student. The Journal
of Experimental Education, 33(3) (Spring, 1965). (ERIC Document Reproduction Service
No. ED 010 740).
Jenkins, D., and Fink, J. (2015). What We Know about Transfer. New York, NY: Columbia
University, Teachers College, Community College Research Center. Available: https://
ccrc.tc.columbia.edu/media/k2/attachments/what-we-know-about-transfer.pdf [October
2017].
Jenkins, D., and Weiss, M.J. (2011). Charting Pathways to Completion for Low-Income
Community College Students. (CCRC Working Paper No. 34). New York: Community
College Research Center.
Jenkins, D., Jaggars, S.S., Roksa, J., Zeidenberg, M., and Cho, S.-W. (2009). Strategies for
Promoting Gatekeeper Course Success Among Students Needing Remediation: Research
Report for the Virginia Community College System. New York: Columbia University,
Teachers College, Community College Research Center.
Logue, A.W., Watanabe-Rose, M., and Douglas, D. (2016). Should students assessed as need-
ing remedial mathematics take college-level quantitative courses instead? A randomized
controlled trial. Educational Evaluation and Policy Analysis, 38(3), 1–21.

Melguizo, T., Kienzl, G.S., and Alfonso, M. (2011). Comparing the educational attainment of
community college transfer students and four-year college rising juniors using propensity
score matching methods. Journal of Higher Education, 82(3), 265–291.
Mervis, J. (2010). Better intro courses seen as key to reducing attrition of STEM majors. Sci-
ence, 330(6002), 306.
Miller, C., and Settle, A. (2011). When practice doesn't make perfect: Effects of task goals on
learning computing concepts. Association for Computing Machinery Transactions on
Computing Education, 11(4).
Monaghan, D.B., and Attewell, P. (2015). The community college route to the bachelor's degree.
Educational Evaluation and Policy Analysis, 37(1), 70–91. Available: http://journals.
sagepub.com/doi/pdf/10.3102/0162373714521865 [September 2017].
National Academies of Sciences, Engineering, and Medicine. (2016). Barriers and Opportuni-
ties for 2-Year and 4-Year STEM Degrees: Systemic Change to Support Diverse Student
Pathways. Washington, DC: The National Academies Press. doi:10.17226/21739.
National Research Council. (1999). Being Fluent with Information Technology. Washington,
DC: National Academy Press. Available: http://www.nap.edu/openbook.php?record_
id=6482 [July 2017].
National Science Foundation. (2015). Revisiting the STEM Workforce: A Companion to Sci-
ence and Engineering Indicators 2014. Arlington, VA: Author. Available: http://www.nsf.
gov/pubs/2015/nsb201510/nsb201510.pdf [March 2016].
November, N., and Day, K. (2012). Using undergraduates’ digital literacy skills to improve
their discipline-specific writing: A dialog. International Journal for the Scholarship of
Teaching and Learning, 6(2), Article 5. Available: http://digitalcommons.georgiasouthern.
edu/ij-sotl/vol6/iss2/5 [July 2017].
OECD. (2013). Skilled for Life? Key Findings from the Survey of Adult Skills. Paris: Author.
Available: http://www.oecd.org/site/piaac/SkillsOutlook_2013_ebook.pdf [May 2016].
Ohland, M.W., Sheppard, S.D., Lichtenstein, G., Eris, O., and Chachra, D. (2008). Persistence,
engagement, and migration in engineering programs. Journal of Engineering Education,
97(3), 259–278.
Ost, B. (2010). The role of peers and grades in determining major persistence in the sciences.
Economics of Education Review, 29(6), 923–934.
Parker, L.C., Adedokun, O., and Weaver, G.C. (2016). Culture, policy and resources: Barriers
reported by faculty implementing course reform. In G.C. Weaver, W.D. Burgess, A.L.
Childress, and L. Slakey, (eds.), Transforming Institutions: Undergraduate STEM Educa-
tion for the 21st Century. West Lafayette, IN: Purdue University Press.
President’s Council of Advisors on Science and Technology. (2012). Engage to Excel: Producing
One Million Additional College Graduates with Degrees in Science, Technology, Engi-
neering and Mathematics. Washington, DC: Author. Available: https://obamawhitehouse.
archives.gov/sites/default/files/microsites/ostp/pcast-engage-to-excel-final_2-25-12.pdf [July
2017].
Pyburn, D.T., Pazicni, S., Benassi, V.A., and Tappin, E.E. (2013). Assessing the relationship
between language comprehension and performance in general chemistry. Chemistry
Education Research and Practice, 14, 524–541.
Radford, A.W., and Horn, L. (2012). An Overview of Classes Taken and Credits Earned
by Beginning Postsecondary Students (Web Tables, NCES 2013–151rev). Washington,
DC: U.S. Department of Education, National Center for Education Statistics. Available:
https://nces.ed.gov/pubs2013/2013151rev.pdf [July 2017].
Rask, K. (2010). Attrition in STEM fields at a liberal arts college: The importance of grades
and pre-collegiate preferences. Economics of Education Review, 29(6), 892−900.

Sardone, N.B. (2011). Developing information technology (IT) fluency in college students: An
investigation of learning environments and learner characteristics. Journal of Information
Technology Education, 10, 101–122.
Saxe, K., and Braddy, L. (2016). A Common Vision for Undergraduate Mathematical Sciences
Programs in 2025. Washington, DC: Mathematical Association of America. Available:
http://www.maa.org/sites/default/files/pdf/CommonVisionFinal.pdf [May 2017].
Scott-Clayton, J. (2011). The Shapeless River: Does a Lack of Structure Inhibit Students’
Progress at Community Colleges? (CCRC Working Paper No. 25). New York: Columbia
University, Teachers College, Community College Research Center.
Seymour, E. (2001). Tracking the processes of change in U.S. undergraduate education in
science, mathematics, engineering, and technology. Science Education, 86(1), 79–105.
Seymour, E., and Hewitt, N. (1997). Talking About Leaving: Why Undergraduates Leave the
Sciences. Boulder, CO: Westview Press.
Simone, S.A. (2014). Transferability of Postsecondary Credit Following Student Transfer or
Coenrollment. (NCES 2014-163). Washington, DC: U.S. Department of Education, In-
stitute of Education Sciences, National Center for Education Statistics. Available: https://
nces.ed.gov/pubs2014/2014163.pdf [September 2017].
Sparks, D., and Malkus, N. (2013). First-year undergraduate remedial coursetaking:
1999−2000, 2003−04, 2007−08. Statistics in Brief. Washington, DC: Institute of Educa-
tion Statistics, U.S. Department of Education, National Center for Education Statistics.
Available: http://nces.ed.gov/pubs2013/2013013.pdf [June 2016].
Stinebrickner, R., and Stinebrickner, T.R. (2013). A Major in Science? Initial Beliefs and Final
Outcomes for College Major and Dropout. (Centre for Human Capital and Productiv-
ity Working Papers, 2013–2014). London, ON: Department of Economics, University
of Western Ontario. Available: http://ir.lib.uwo.ca/cgi/viewcontent.cgi?article=1093&context=economicscibc [July 2016].
Thompson, P.W., Castillo-Chavez, C., Culbertson, R.J., Flores, A., Greeley, R., Haag, S., Rose,
S.D., and Rutowski, R.L. (2007). Failing the Future: Problems of Persistence and Reten-
tion in Science, Technology, Engineering, and Mathematics (STEM) Majors at Arizona
State University. Phoenix: Arizona State University.
Tobias, S. (1990). They’re Not Dumb, They’re Different: Stalking the Second Tier. Tucson,
AZ: Research Corporation.
U.S. Department of Education. (2014). Percentage of First-Year Undergraduate Students Who
Reported Taking Remedial Education Courses, by Selected Student and Institution Char-
acteristics: 2003–04, 2007–08, and 2011–12 [Data file]. Institute of Education Sciences,
National Center for Education Statistics. Available: https://nces.ed.gov/programs/digest/
d15/tables/dt15_311.40.asp [October 2017].
Valentine, J.C., Konstatopoulos, S., and Goldrick-Rab, S. (2017). What happens to students
placed in developmental education? A meta-analysis of regression discontinuity studies.
Review of Educational Research, 87(4), 806–833. Available: http://journals.sagepub.com/
doi/pdf/10.3102/0034654317709237 [October 2017].
Vaz, R. (2004). The promise of computer literacy. Liberal Education, 90(4). Available: https://
www.aacu.org/publications-research/periodicals/promise-computer-literacy [July 2017].
Xie, Y., and Killewald, A.A. (2012). Is American Science in Decline? Cambridge, MA: Harvard
University Press.
Yamada, H., Bohannen, A., and Grunow, A. (2016). Assessing the Effectiveness of Quant-
way: A Multilevel Model with Propensity Score Matching. Carnegie Math Pathways
Technical Report. Stanford, CA: Carnegie Foundation for the Advancement of Teaching.
Available: https://www.carnegiefoundation.org/wp-content/uploads/2016/10/Quantway_
propensity_score_matching_10-2016.pdf [February 2018].

Existing Data Sources and Monitoring Systems

This chapter addresses the committee's charge to review existing sys-
tems for monitoring undergraduate STEM education. The first sec-
tion provides an overview of currently available data on higher
education in STEM fields. The next two sections review public and propri-
etary data sources, respectively. The fourth section discusses existing moni-
toring systems that contain elements related to the committee’s proposed
indicators. The final section focuses directly on the committee’s indicators,
summarizing for each indicator current data sources, potential new data
sources, and the research and data development that would be required
to tap those potential sources for the purpose of ongoing monitoring of
undergraduate STEM education.

OVERVIEW
Although many different postsecondary education data sources are
available, they are limited in their ability to track students’ progress into
and through STEM programs and monitor the status of the committee’s
goals for undergraduate STEM:

• Goal 1: Increase students' mastery of STEM concepts and skills by engaging them in evidence-based STEM educational practices and programs.
• Goal 2: Strive for equity, diversity, and inclusion of STEM students and instructors by providing equitable opportunities for access and success.

• Goal 3: Ensure adequate numbers of STEM professionals by increasing completion of STEM credentials as needed in the different STEM disciplines.

The various public and proprietary data sources currently available are
summarized in Table 6-1. These data sources rely primarily on three types
of data: (1) student and faculty unit record administrative data, (2) aggre-
gated institution-level data, and (3) surveys of individual students and
instructors (see Box 6-1).

TABLE 6-1  Major Sources of Data on Undergraduate STEM Education

Federal sources (data are publicly available)

IPEDS
   Frequency: Annual
   Coverage and representativeness: Nationally representative; mandatory, so virtually 100% coverage
   Feasibility of disaggregation: Strong for race and ethnicity, gender, institution type, and discipline; does not allow disaggregation for disability and Pell grant (socioeconomic) status

Beginning Postsecondary Students Longitudinal Study (BPS)
   Frequency: Every 6 to 8 years
   Coverage and representativeness: Nationally representative; 82% response rate in most recent (BPS 04/09)
   Feasibility of disaggregation: Limited for disaggregating by both demographic characteristics and field of study

National Study of Postsecondary Faculty
   Frequency: Discontinued in 2004
   Coverage and representativeness: Nationally representative of full-time, but not part-time, faculty
   Feasibility of disaggregation: Strong for individual and institutional characteristics

Proprietary sources (data may or may not be publicly available; access may require intellectual property negotiations and fees)

National Student Clearinghouse
   Frequency: Annual
   Coverage and representativeness: 98% of institutions represented, but institutions do not always provide students' demographic characteristics, disciplines, and degree programs
   Feasibility of disaggregation: Limited for student characteristics

HERI Freshman Survey
   Frequency: Annual
   Coverage and representativeness: Nationally representative of first-time, full-time, 4-year students
   Feasibility of disaggregation: Good for 4-year student characteristics, but limited for 2-year student characteristics

HERI Your First College Year/Senior Survey
   Frequency: Annual
   Coverage and representativeness: Limited coverage of 2-year institutions
   Feasibility of disaggregation: Strong for student characteristics; weak for institutional characteristics

HERI Faculty Survey
   Frequency: Every 3 years
   Coverage and representativeness: Strong coverage among 4-year nonprofit institutions; nationally representative of full-time faculty at 4-year institutions
   Feasibility of disaggregation: Strong for faculty at 4-year institutions

HERI Diverse Learning Environments Survey
   Frequency: Occasional
   Coverage and representativeness: Very limited coverage among 2-year and 4-year institutions; student response rates within institutions average 25%
   Feasibility of disaggregation: Strong for student characteristics; weak for institutional characteristics

National Survey of Student Engagement
   Frequency: Annual
   Coverage and representativeness: Broad coverage among 4-year institutions; student response rates within institutions average 30–35%
   Feasibility of disaggregation: Strong for student and institutional characteristics of 4-year institutions

Community College Survey of Student Engagement
   Frequency: Annual
   Coverage and representativeness: Moderate coverage of 2-year institutions; poor student response rates
   Feasibility of disaggregation: Limited for student characteristics due to small sample sizes

Faculty Survey of Student Engagement
   Frequency: Annual
   Coverage and representativeness: Limited coverage of 2-year institutions; faculty responses average 20–25%
   Feasibility of disaggregation: Strong for individual characteristics of faculty at 4-year institutions

NOTE: HERI, Higher Education Research Institute.

BOX 6-1
Types of Data on Postsecondary Education

Higher education policy makers, researchers, and members of the public looking for information on higher education currently rely on three primary types
of data: institutional-level data, surveys, and student unit record data. Each has
its strengths and weaknesses.

Institutional-Level Data. College and university institutional research staff use internal administrative data to calculate aggregated, institution-level mea-
sures for reporting to the IPEDS. The usefulness of these data for understanding
undergraduate STEM education is limited by the statistical and analytic chal-
lenges of using aggregated data as the unit of analysis.
Surveys. Surveys of students and faculty capture aspects of student and
faculty experiences, attitudes, and characteristics not collected in unit record or
institution-level data. To date, federal and proprietary survey organizations have
been able to maintain relatively high response rates in such surveys, but the
decades-long decline in response rates to all types of surveys (Brick and Williams,
2013) potentially limits the quality and usefulness of these kinds of data.
Student Unit Record Data. Colleges and universities have long collected
data on individual students for internal administrative purposes, such as registra-
tion, enrollment, and determination of financial aid eligibility. More recently, some
institutions, states, and higher education reform consortia have begun to use
these data for improvement and accountability. The data are stripped of individual
student identifiers, linked, and analyzed to help institutions and policy makers
understand trends in student progression, completion, and outcomes. Some
student unit record data systems also capture information on the demographic
characteristics of students, grades, major areas of study, graduation rates, need-
based aid eligibility, and student earnings after graduation.

The major federal system, the Integrated Postsecondary Education Data System (IPEDS), focuses primarily on credential attainment by full-
time students at the institution at which they began their studies. This focus
does not always match students’ trajectories through undergraduate STEM
education. For example, many undergraduate STEM students enroll part
time: a recent analysis of data on first-time students entering higher educa-
tion in 2003–2004 with plans to major in STEM found that, on average,
only 33 percent of those at 2-year institutions and 68 percent of those at
4-year institutions were enrolled full time, over the course of their studies
(Van Noy and Zeidenberg, 2014).
In terms of student trajectories, IPEDS and other datasets are not always aligned with time to degree and student mobility. First, students
are taking more time than expected to attain STEM credentials: Eagan and colleagues (2014a) found that only 22 percent of first-time, full-time STEM aspirants entering 4-year institutions in fall 2004 completed a STEM degree
within 4 years, while 52 percent completed within 6 years. Among all full-
time students (not only in STEM) who entered 2-year institutions in 2010,
only 29 percent had completed a degree or certificate within 150 percent of
the expected time (i.e., 3 years) (National Center for Education Statistics,
2015). However, an analysis of data on students entering 2-year institutions
in 2007 found that one-third transferred to 4-year institutions (either before
or after completing a degree or certificate), and 42 percent of these transfers
(or 14% of the original cohort entering in 2007) completed a bachelor’s
degree within 6 years (Jenkins and Fink, 2016). In their analysis of students
entering 2-year STEM degree programs, Van Noy and Zeidenberg (2014)
found that, after 6 years, 30 percent had attained a credential or were still
enrolled in STEM, 33 percent had attained a credential or were still enrolled
in a non-STEM field, and 37 percent were neither enrolled nor had attained
a credential.
Another aspect of STEM student trajectories that is not always reflected
in current federal data sources is mobility.1 Students often transfer among
institutions and some enroll at more than one institution at the same time.
Many students take a semester or more off, rather than maintaining contin-
uous enrollment. For example, in their analysis of 4-year entrants to STEM,
Eagan and colleagues (2014a) found that about 15 percent transferred to
2-year institutions, 13 percent transferred laterally from one 4-year institu-
tion to another, and 9 percent were simultaneously enrolled in more than
one institution. The frequency of “swirling,” or movement between mul-
tiple institutions, was similar for 2-year college STEM students (National
Academies of Sciences, Engineering, and Medicine, 2016).
In addition to their limitations in measuring actual student trajectories,
existing data collection systems (national, state, and institutional) are often
not structured to gather the information needed to understand the qual-
ity of undergraduate STEM education (National Academies of Sciences,
Engineering, and Medicine, 2016). Overall, there are several reasons that
measuring students’ progress through STEM programs is difficult:

• Representative data are available only for full-time, first-time students.
• Information on intended major when students first enrolled is only
available for 4-year students.
• Data on the quality of students’ educational experiences are very
limited.

1 IPEDS has recently expanded its data collections to include part-time students and transfer students, as discussed further below.

• Data on the training and qualifications of undergraduate instructors are no longer collected.
• Degree completion data only cover up to 6 years.
• Data on subgroups among Hispanics and Asian Americans are not
available.
• The sample sizes are sometimes too small for meaningful analysis
of groups, such as Native Americans, first-generation students,
veterans, and students with disabilities.

The lack of nationally representative data on student trajectories through undergraduate STEM education results partly from policy deci-
sions. In 2005, the U.S. Department of Education’s National Center for
Education Statistics (NCES) proposed to address this data gap by expand-
ing the IPEDS database to include an individual student unit record data
system (Cunningham and Milam, 2005). With privacy and confidential pro-
tections, the system would have used administrative records of individual
students’ progress over time (enrollment status, grades, field of study, etc.).
However, Congress effectively banned the creation of any national unit-
record database in the 2008 reauthorization of the Higher Education Act
(P.L. 110-315).

PUBLIC DATA SOURCES


The committee reviewed several public data sources, considering the
frequency of data collection and release, their coverage and representative-
ness of 2-year and 4-year institutions, and the feasibility of disaggregating
the data. Disaggregation is especially important for indicators of equity, diversity, and inclusion. For many data sources, disaggregation by multiple
dimensions of student and institutional characteristics leads to small sample
sizes that lose statistical significance.
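The committee's concern can be illustrated with the standard margin-of-error calculation for an estimated proportion; the estimate and subgroup sizes below are hypothetical and simply show how quickly precision degrades as a survey sample is split along several dimensions at once.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95 percent margin of error for a proportion p estimated from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical: an estimate of 40 percent applied to progressively smaller subgroups.
for n in (2000, 200, 40, 10):
    print(f"n = {n:>4}: 40% +/- {margin_of_error(0.40, n):.1%}")
```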
Federal and state sources have two major strengths for use in the pro-
posed indicator systems: their data are publicly available, and the federal
sources are generally of high quality, providing nationally representative
data that covers all types of institutions (2-year and 4-year, public, private
for profit, and private nonprofit) and all student groups. As discussed be-
low, the coverage of institution types within state systems is uneven.

The Integrated Postsecondary Education Data System


The National Center for Education Statistics (NCES) operates IPEDS
as its core postsecondary education data collection program. IPEDS is a
series of interrelated surveys that are conducted annually. Every college,
university, technical, and vocational institution that participates in the federal student financial aid programs is required under Title IV of the Higher
Education Act as amended in 1992 (P.L. 102-325) to provide data annually.
Because of this requirement, response rates are very high. For example, in
the spring 2010 data collection, the response rate for each of the survey
components was more than 99 percent (Knapp, Kelly-Reid, and Ginder,
2012). According to the NCES Handbook of Survey Methods (Burns,
Wang, and Henning, 2011), IPEDS includes the universe of postsecondary
institutions participating in federal student financial aid programs.
In 2014, about 7,300 institutions complied with the mandate to re-
spond, and an additional 200 institutions that did not participate in federal
financial aid programs voluntarily provided data (National Center for Edu-
cation Statistics, 2014). Individual institutions, or in some cases, the state
higher education systems responding on behalf of multiple institutions, pro-
vide data describing their institutional characteristics, enrollments, comple-
tions and completers, graduation rates and other outcome measures, faculty
and staff, finances, institutional prices, student financial aid, admissions,
and academic libraries. To do so, institutional research staff or administra-
tors aggregate internal administrative records (i.e., student unit record data)
to create institution-level data files and submit them to IPEDS.
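A minimal sketch of that aggregation step is shown below, using hypothetical unit records and made-up field names; actual IPEDS submissions follow detailed reporting specifications that are not reproduced here.

```python
from collections import Counter

# Hypothetical internal student unit records for a single institution.
unit_records = [
    {"id": 1, "gender": "female", "race_ethnicity": "Hispanic", "completed_bachelors": True},
    {"id": 2, "gender": "male",   "race_ethnicity": "White",    "completed_bachelors": False},
    {"id": 3, "gender": "female", "race_ethnicity": "Asian",    "completed_bachelors": True},
]

# Aggregate to institution-level completion counts by gender and race/ethnicity,
# the kind of cell that an institution reports in an IPEDS-style data file.
completions = Counter(
    (r["gender"], r["race_ethnicity"]) for r in unit_records if r["completed_bachelors"]
)
for (gender, race_ethnicity), count in sorted(completions.items()):
    print(gender, race_ethnicity, count)
```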
IPEDS data are collected and released three times each year and are
made publicly accessible in two online platforms—the College Navigator, which can be used by students, families, educational policy makers, and others,2 and the IPEDS Data Center.3 To ensure data quality, the NCES
Statistical Standards Program publishes statistical standards and provides
methodological and statistical support to assist NCES staff and contractors
in meeting the standards, with the goal of providing high-quality, reliable,
and useful statistical information to policy makers and the public (National
Center for Education Statistics, 2012). Several data elements in IPEDS (see
National Center for Education Statistics, 2014) are relevant to the committee's
proposed indicators.

12-Month Enrollment
Data on 12-month enrollment for undergraduate and graduate students
are collected in the fall. The data include unduplicated headcounts and
instructional activity in contact or credit hours. Instructional activity is
used to compute a standardized, 12-month, full-time-equivalent enrollment.
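A rough sketch of that computation appears below. The divisor of 30 credit hours per full-time-equivalent undergraduate year is a common convention for semester-calendar institutions, stated here as an assumption rather than as the exact IPEDS specification, which varies by calendar system and student level.

```python
# Hypothetical credit-hour total; the divisor is an assumed convention
# (30 semester credit hours per full-time-equivalent undergraduate year).
total_undergraduate_credit_hours = 45_000
CREDIT_HOURS_PER_FTE = 30

fte_enrollment = total_undergraduate_credit_hours / CREDIT_HOURS_PER_FTE
print(f"12-month full-time-equivalent enrollment: {fte_enrollment:,.0f}")
```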

2 See http://nces.ed.gov/collegenavigator [August 2017].


3 See http://nces.ed.gov/ipeds/datacenter [August 2017].

Completions
Completion data covering all degrees (associate's, bachelor's, master's,
and doctorate) and sub-baccalaureate awards are collected in the fall. These
data are disaggregated by race and ethnicity, gender, and field of study.
They include all STEM degrees and awards received by students, both those
who began at the reporting institution and those who transferred to that
institution.

Graduation Rates
The graduation data cover the initial cohort of full-time, first-time,
degree- and certificate-seeking undergraduate students at 2-year and 4-year
institutions; the number of those students who complete their degrees or
certificates within 150 percent of the normal time (i.e., 3 years or 6 years);
and the number of those students who transferred to other institutions.
Data are reported by race and ethnicity, gender, and field of study. The data
also include 100 percent graduation rates: 4-year bachelor’s degree rates
have been reported since 1997; 2-year certificate and degree rates have been
reported since 2008–2009.
It is important to note that these data do not include part-time students,
students who transfer to the reporting institution, and students who transfer
out and later graduate from another institution. Given the high rates of
student “swirl” in STEM fields, these data do not accurately capture STEM
graduation rates.

200 Percent Graduation Rates


In 2009, IPEDS added a new survey component, called Graduation
Rate 200, the graduation rates at 200 percent of normal time. It is collected
in the winter, separately from the graduation rate component so as not to
confuse the two different cohorts that are being reported on. Graduation
rates at 200 percent of normal time are calculated for all full-time, first-time
bachelor degree-seeking students at 4-year institutions and for all full-time,
first-time degree- and certificate-seeking undergraduate students at 2-year
institutions.
Although this survey component reflects the current reality of extended
time to degree, it also excludes part-time students and transfers.

Outcome Measures Survey


To track completion of part-time and transfer students, IPEDS began to
implement a new outcome measures survey in 2015–2016. The new survey is designed to help policy makers track the progress of low-income students who receive federal aid (Pell grants and Stafford loans), asking institutions
to report separately on students who do and do not receive federal aid. In
a second, broader change, the survey asks institutions to report on four
cohorts of entering students: (1) first-time, full-time students; (2) first-time,
part-time students; (3) transferring full-time students; and (4) transferring
part-time students. For each entering cohort, institutions report the num-
ber and proportion of students who completed their intended credential;
were still enrolled; had enrolled in another institution; or had unknown
whereabouts. All these outcomes are to be reported at 4, 6, and 8 years
after entry. NCES has released preliminary data from this new survey com-
ponent (Ginder et al., 2017) and plans to release final data for 2015–2016
in early 2018.
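The four-cohort, three-time-point reporting layout described above can be summarized in a small sketch; the cohort labels follow the text, while the example count is made up.

```python
# Sketch of the Outcome Measures reporting layout: four entering cohorts,
# each reported at 4, 6, and 8 years after entry. Counts here are hypothetical.
cohorts = ["first-time full-time", "first-time part-time",
           "transfer full-time", "transfer part-time"]
outcomes = ["completed credential", "still enrolled", "enrolled elsewhere", "unknown"]

report = {
    cohort: {year: {outcome: 0 for outcome in outcomes} for year in (4, 6, 8)}
    for cohort in cohorts
}

# Made-up example entry: 480 first-time, full-time entrants had completed
# their intended credential 6 years after entry.
report["first-time full-time"][6]["completed credential"] = 480
print(report["first-time full-time"][6])
```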

The Beginning Postsecondary Students Longitudinal Study


In contrast to IPEDS, which focuses on enrollment and completion
(inputs and outcomes), the NCES Beginning Postsecondary Students Lon-
gitudinal Study (BPS) provides more detailed information on students’
progress (process). Unlike IPEDS data, which are collected annually and
released three times each year, BPS data are collected and published less
frequently, about every 3 to 4 years. However, they provide richer, more
detailed data than those in IPEDS. To date, the BPS has followed cohorts
of students who entered postsecondary education in 1990, 1996, 2004, and
2012. In each cycle, BPS tracks a cohort of students as they enter 2-year
or 4-year institutions and collects data on their course-taking, persistence,
completion, transition to employment and demographic characteristics,
among other data elements. Students included in the nationally representa-
tive sample complete three surveys: one at the end of their first academic
year, one at 3 years, and then another at 6 years after they began post­
secondary education.
Data are currently available from the 2004/09 BPS (BPS 04/09). The
BPS 04/09 study followed a sample of more than 18,000 students who
began higher education in academic year 2003−2004 for a total of 6 years,
through 2009, and it merged data from the 2009 Postsecondary Education
Transcript Study to supplement information collected from student and
institutional surveys. NCES designed the BPS survey to sample institutions
and students within institutions. The agency used multiple methods to ob-
tain responses, yielding strong response rates and nationally representative
data. For example, the BPS 04/09 sample comprised all 18,640 students
determined to be eligible during the previous cycle of data collection (BPS
04/06). NCES obtained data from 16,680 respondents, either from the stu-
dent interview or administrative sources, for a response rate of 89 percent (Wine, Janson, and Wheeless, 2011). The full BPS 04/09 dataset provides
rich information on students’ course histories, enrollment and matricula-
tion pathways, college experiences and perceptions, and retention and
graduation outcomes. Data are disaggregated by race and ethnicity, gender,
socioeconomic status, enrollment status, disability status, field of study, and
institution type. However, the sample sizes do not allow simultaneous disaggregation by
demographic characteristics, field of study, and institution type.
The current cohort, BPS 12/17, began college in 2012, was followed
up in 2014, and was followed up again in 2017; the data are not yet avail-
able. Continuing these regular cycles will be critical for informing some of
the proposed indicators. In addition, more frequent data collection would
allow the indicators to be updated annually, rather than only once every
3 years.

The National Study of Postsecondary Faculty


Nationally representative data from 4-year institutions on undergradu-
ate STEM instructors were formerly available from the NCES National
Study of Postsecondary Faculty (Cataldi, Fahimi, and Bradburn, 2005). The
study was based on faculty surveys conducted in 1988, 1993, 1999, and
2004. All four survey cycles included part-time as well as full-time faculty,
and the 1993, 1999, and 2004 surveys included non-faculty personnel with
teaching responsibilities. Topics included sociodemographic characteristics;
academic and professional background; field of instruction; employment
history; current employment status, including rank and tenure; workload;
courses taught; publications; job satisfaction and attitudes; career and re-
tirement plans; and benefits and compensation.
NCES ended this survey following the academic year 2003–2004. The
National Science Foundation (NSF) has expressed interest in working with
NCES to revive the survey and expand it to include evidence-based teach-
ing practices. NSF requested funding for fiscal 2017 to work with NCES to
reinstitute the survey and expand it to provide data on teaching practices,
the evolving role of technology in education, and the changing nature of
faculty work. To date, however, the committee is not aware of any steps
taken to revive the survey.

National Student Loan Data System


The Department of Education's Office of Federal Student Aid operates
the National Student Loan Data System (NSLDS) as the central database
for monitoring student financial aid. The office uses the database primarily
for operational purposes, such as tracking federal grant and loan disburse-
ments, the enrollment and repayment status of aid recipients, payments and remaining balances on federal loans, and borrower status (Executive Office of the President of the United States, 2015). NSLDS receives data from
institutions of higher education, agencies that service federal loans, the Pell
grant program, and other federal financial aid programs.4 Relevant to the
committee’s goals and objectives, NSLDS includes students’ enrollment by
academic term and program (including STEM programs).
Because about 70 percent of all graduating students have received Pell
grants or other federal aid, the NSLDS covers the majority of the nation’s
students. In addition, the characteristics of federal aid recipients at an
individual institution are generally similar to those of the overall student
population at that institution, in terms of admissions test scores, race and
ethnicity, age, and marital status (Executive Office of the President of the
United States, 2015). However, students who receive federal aid at an
institution have lower incomes relative to the general population at that
institution. In addition, the percentage of students receiving financial aid
varies across different types of institutions (public and private, for-profit
and nonprofit, 2-year and 4-year).
Although the NSLDS data on students’ enrollment in STEM programs
are not nationally representative of all students and institutions, the data
could provide a rough approximation for the committee’s proposed indica-
tors. Perhaps more importantly, the creation of this database has helped the
Department of Education develop the technical capacity to obtain student
unit record information from institutions and maintain that information in
a central database. The department could potentially apply this technical
capacity and expertise if the current legal ban on creation of a national
student unit record database were overturned. For example, a new national
student unit record system could be created by expanding the NSLDS
(Miller, 2016; see Chapter 7 for further discussion).
The NSLDS has a unique relationship with the National Student Clear-
inghouse (NSC), a proprietary data source, which is described below. Many
institutions voluntarily provide student unit record data to NSC, relying on
it to prepare the enrollment reports they are required to send to NSLDS.
Approximately once a month, these institutions submit to NSC a roster of
detailed administrative information on all their students; NSC then matches
those data to rosters of students sent to it by NSLDS and, based on its
matching, provides reports on behalf of the institutions to NSLDS.
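A highly simplified sketch of that matching step is shown below. The identifiers and field names are hypothetical; the actual exchange uses formal enrollment-reporting formats and privacy safeguards that are not represented here.

```python
# Hypothetical rosters; real NSC/NSLDS exchanges use formal reporting formats.
institution_roster = {            # submitted by the institution to the Clearinghouse
    "A123": {"enrolled": True,  "status": "full-time"},
    "B456": {"enrolled": False, "status": "withdrawn"},
}
nslds_inquiry = ["A123", "C789"]  # federal aid recipients NSLDS asks about

# Match the NSLDS list against the institution roster and report each student's status.
for student_id in nslds_inquiry:
    record = institution_roster.get(student_id)
    if record is None:
        print(student_id, "-> not found on the institution's roster")
    else:
        print(student_id, "->", record["status"])
```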

4 As defined in Title IV of the Higher Education Act, such financial aid includes loans under

the Federal Family Education Loan Program or William D. Ford Federal Direct Loan (Direct
Loan) Program, as well as Perkins Loans, Pell Grants, Teacher Education Assistance for Col-
lege and Higher Education Grants, Academic Competitiveness Grants or Science and Math
Access to Retain Talent Grants, and Parent PLUS loans.

State Unit Record Data Systems


Partly in response to the federal legislation prohibiting the creation of
a federal unit record data system and partly to address their own informa-
tion needs, many states have constructed their own student unit record
systems. Since 2007, NCES has supported these efforts, providing technical
assistance and awarding federal funds to 47 states through the State Lon-
gitudinal Data System Grant Program (grants were last awarded in 2015).
NCES has encouraged state K–12 education agencies to collaborate with
the respective state higher education and workforce development agencies
to develop linked state datasets on education and employment. State higher
education governing boards often manage these unit record data systems,
using them to respond to questions from state policy makers that are not
easily answered by other datasets. For example, these data systems can
provide state-level information about the effect of policies (e.g., remedial
and developmental education reforms, transfer policies) on student success
(Armstrong and Zaback, 2016).
Although they are well-established, data systems in some states are
challenged by gaps in data coverage, concerns about privacy, and a lack
of resources (Armstrong and Zaback, 2016). Created by state higher edu-
cation systems to provide information on state-supported public higher
education, these systems have limited coverage of private institutions and
thus are not representative of all students and institutions in each state.
A recent survey of 47 state data systems by the State Higher Education
Executive Officers Association (Whitfield and Armstrong, 2016) found
that they all included 4-year public institutions, and most of them (42)
included public 2-year institutions, but only 27 included private for-
profit institutions, and less than one-half (18) collected data from private
nonprofit institutions. Among the states collecting data from private
nonprofit institutions, most reported that they collected data only from
those institutions that participated in state financial aid programs or that
volunteered to provide data.
In some states, policy makers have adopted or are considering
legislation—stemming from concerns about student privacy—that prevents
linking of K–12, postsecondary, and employment databases. Although the
federal Family Educational Rights and Privacy Act (FERPA) provides strict
guidelines for when and how personally identifiable student information
can be shared, such legislation typically prevents agencies from using per-
sonally identifiable information to link datasets.
Some respondents to the recent survey (Whitfield and Armstrong,
2016) noted a lack of funding and an inability to retain quality data ana-
lysts on staff as barriers to effective maintenance and use of these systems
(Armstrong and Zaback, 2016). Federal funding of state data systems has
been essential, and not all states have provided state funding to maintain
these systems after federal grants expired.

PROPRIETARY DATA SOURCES


In reviewing private proprietary data sources, the committee considered
the alignment of data elements with the committee’s goals and objectives,
the frequency of data collection and release, their coverage and representa-
tiveness of 2-year and 4-year institutions, and the feasibility of disaggregat-
ing the data.
Private proprietary data sources have three primary weaknesses as po-
tential sources for the committee’s proposed indicator system. First, because
the data are proprietary, federal officials might be required to pay fees and
negotiate intellectual property rights to access and use the data, or they
might not be able to access the data at all. Second, most of the data from
these sources are not disaggregated by discipline, limiting their usefulness
for STEM indicators. Third, the coverage and representativeness of these
data are uneven.

National Student Clearinghouse


The NSC is a private nonprofit organization launched in 1993 to
streamline student loan administration; it now partners with 2-year
and 4-year institutions to track student enrollment and verify educational
achievements. Although institutional participation is voluntary, the orga-
nization states that more than 3,600 institutions enrolling 98 percent of
students in U.S. public and private institutions share enrollment and de-
gree records (National Student Clearinghouse, 2016b). Because institutions
voluntarily submit their enrollment data, the quality of NSC data depends
partly on how each postsecondary institution maintains its data and the
processes used to extract that data for NSC. Responding to two researchers’
open letter about the quality of the data, the National Student Clearing-
house (2016a) posted the following statement on its Website: “The accu-
racy and completeness of the data can realistically be measured only by the
institutions themselves.” In a review of NSC data, Dynarski, Hemelt, and
Hyman (2013) drew on several other national data sources and conducted
a case study of Michigan students enrolled in higher education, concluding
that coverage was highest among public institutions and lowest (but growing)
among for-profit colleges. In addition, they found that enrollment coverage
was lower for minorities than for other students but similar for males
and females, and there was substantial variation in coverage across states,
institutional sectors, and over time.

Institutional variation in reporting to NSC affects the data’s relevance
to undergraduate STEM education. Some institutions do not report students’
academic field for every term, limiting the data’s ability to inform indicators
of progress or retention specifically in STEM. In addition, the uneven
reporting of all data elements poses a challenge to disaggregation of the
data by demographic characteristics, such as racial and ethnic group, for
the proposed indicators of equity, diversity, and inclusion.

Higher Education Research Institute Surveys


The Higher Education Research Institute (HERI) at the University of
California, Los Angeles, conducts four surveys: of entering freshmen, of first-year
college students and college seniors, of faculty, and of diverse learning environments.

Freshman Survey
The HERI freshman survey gathers data from incoming first-time full-
time college students on their educational attitudes and aspirations. More
than 1,900 four-year institutions have participated in the survey since
1966. In 2015, HERI identified 1,574 institutions included in IPEDS that offer
baccalaureate degrees and invited them to participate.
This national population of institutions was divided into 26 stratification
groups, based on institutional race, type (e.g., university, 4-year college,
2-year college), control (e.g., public, private nonsectarian, Roman Catholic,
other religious), and selectivity. Of the 1,574 institutions, 308 institu-
tions responded, a 19 percent response rate. Generally, the response rate
among students in the institutions that participated has been high, averag-
ing 75 percent. Since 2011, data from institutions have been included in
the “national norms sample” only if there was a response rate of at least
65 percent among incoming full-time first-year students. Data from in-
stitutions just below this cutoff are included if the survey administration
methods showed no systematic biases in freshman class coverage. In 2015,
data from 199 institutions, representing 141,189 student responses, met
these criteria and were included in the national norms sample (Eagan et
al., 2016).
In 2015, the survey data were weighted by a two-step procedure. The
first weight was designed to adjust for response bias within institutions,
and the second weight was designed to compensate for nonresponding
institutions within each stratification group by gender. The weighted data
are nationally representative of first-time, full-time freshmen in nonprofit
4-year colleges and universities in the United States (Eagan et al., 2016;
National Academies of Sciences, Engineering, and Medicine, 2016). Reflect-
ing the high quality of these data, NSF relies on them for the undergradu-
ate education section of the National Science Board’s biennial Science and
Engineering Indicators report (National Science Foundation, 2016).
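
As a rough illustration of how such a two-step weighting scheme operates, the sketch below multiplies a within-institution nonresponse adjustment by a between-institution adjustment for one hypothetical stratification cell. All counts are invented, and the calculation is only a simplified sketch of the general logic, not HERI’s actual procedure.

# Step 1: weight respondents up to the institution's freshman class size.
respondents = 800            # survey respondents at one hypothetical institution
freshman_class = 1000        # first-time, full-time freshmen at that institution
w1 = freshman_class / respondents

# Step 2: weight participating institutions up to the national population count
# within one hypothetical stratification-group-by-gender cell.
responding_cell_total = 50_000   # weighted freshmen covered by responding institutions
national_cell_total = 200_000    # national count of freshmen in the same cell
w2 = national_cell_total / responding_cell_total

final_weight = w1 * w2
print(f"w1 = {w1:.2f}, w2 = {w2:.2f}, final weight = {final_weight:.2f}")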
Because it does not include 2-year institutions and undersamples part-time
students, the HERI Freshman Survey does not provide data nationally
representative of the U.S. population of 2-year and 4-year students. In terms
of disaggregation, although the data include the proportion of entering stu-
dents who stated an intention to major in a STEM field and later completed
a degree in that field, they do not measure students’ actual selections of
major field (e.g., switching into STEM majors). The data are disaggregated
by demographic characteristics.

First-Year College/College Senior Survey


The HERI first-year college/college senior survey primarily includes
private 4-year institutions, with no sampling of 2-year institutions. Within
participating institutions, student response rates range from 25 to 75 per-
cent. The resulting data are not nationally representative of the universe of
2-year and 4-year institutions and students. Although the data are disaggre-
gated by demographic characteristics, increasing their relevance for equity
indicators, they do not allow disaggregation by institutional type, and, as
noted above, do not cover 2-year institutions.

Faculty Survey
With the suspension of the NCES Survey of Postsecondary Faculty in
2004 (see above), researchers and policy makers have increasingly relied
on the HERI Faculty Survey. Although this survey is designed to include
full- and part-time faculty members, most participating institutions choose
to sample only full-time faculty. In addition, although both 2-year and
4-year institutions are invited to participate, 4-year nonprofit institutions
predominate in the survey. The survey includes questions about working
conditions and activities and teaching approaches.
In 2014, HERI identified a national population of 1,505 institutions
that grant baccalaureate degrees that had responded to the IPEDS 2012–
2013 human resources survey and invited them to participate in the HERI
Faculty Survey. The national population was divided into 20 stratifica-
tion groups based on type, control, and selectivity. Of those invited, 148
institutions participated, a 9 percent response rate. HERI also developed
a supplemental sample of 67 institutions to enhance the number of respon-
dents from types of institutions that participated at a lower rate than others
to create a normative national sample of institutions (Eagan et al., 2014b).
To be included in the normative national sample, colleges were required
to have responses from at least 35 percent of full-time undergraduate
faculty, and universities were required to have responses from at least 20
percent of full-time undergraduate faculty. In 2014, data from 133 par-
ticipating institutions and 63 supplemental sample institutions met these
criteria and were included in the normative national sample. Among these
institutions, faculty response rates have averaged 40 to 50 percent. In
2015, the sample data were weighted using a three-step procedure. The
first weight was designed to adjust for response bias either within the par-
ticipating institutions or the supplemental sample. The second weight was
designed to correct for between-stratification cell differences in institutional
participation. The third weight was the product of the first and second
weights. Weighting each response in the norms sample brought the counts
of full-time undergraduate faculty up to the national population number
within each stratification cell, so that the data are representative of the na-
tional population of full-time undergraduate faculty at 4-year institutions
(Eagan et al., 2014b).
Because this survey provides data on faculty members’ discipline and
teaching practices, it has the potential to provide data on evidence-based
STEM educational practices to monitor progress toward Goal #1. However,
the teaching practices it includes do not necessarily reflect the evidence-
based STEM educational practices identified in the relevant research. In
addition, it is based primarily on full-time faculty at nonprofit 4-year in-
stitutions and thus is not nationally representative of all types of faculty at
both 2-year and 4-year institutions.

Diverse Learning Environments Survey


Students’ perceptions of their learning environments become their lived
realities, and data for indicators of equity, diversity, and inclusion would
likely need to be collected from surveys of STEM students and faculty. The
HERI Diverse Learning Environments Survey includes a validated sense
of belonging measure based on four items (I feel a sense of belonging to
this college; I see myself as a part of the campus community; I feel I am a
member of this college; If asked, I would recommend this college to others).
The same instrument asks students the extent to which they agree that “fac-
ulty members are approachable” in their academic program and that the
respondent has “a peer support network among students” in their major.
This survey does not provide nationally representative data related
to students’ perceptions of equity, diversity, and inclusion. Only about 30
institutions have administered the survey in each of the past 2 years, and
response rates have been low, averaging 25 percent of students at the par-
ticipating institutions.

National Survey of Student Engagement


In 1998, the National Center for Higher Education Management Sys-
tems launched the development of a new survey focused on student engage-
ment, college outcomes, and institutional quality. The survey designers
drew on research and theory linking student engagement with persistence
and success. That survey is now the National Survey of Student Engage-
ment (NSSE) and is administered by the Center for Postsecondary Research
at Indiana University. It asks students about their own learning activities,
instructors’ behavior, and their perceptions of the college experience, in-
cluding self-reported learning gains in areas such as acquiring job-related
knowledge and skills, writing clearly and effectively, and contributing to the
welfare of their community. NSSE also includes questions about students’
engagement in “high-impact practices” (see Chapter 3), which overlap to
some degree with evidence-based STEM educational practices, and the data
are broken down by race and ethnicity and gender.
Since 2000, more than 1,600 colleges and universities have adminis-
tered the survey to first-year and fourth-year students (Center for Postsec-
ondary Research, 2017b). Although coverage of different types of 4-year
institutions is good, the survey does not cover 2-year institutions, so the
data are not nationally representative. According to the NSSE Website,
in 2016, student response rates within participating institutions ranged
from 5 to 77 percent, with an average of 29 percent.5 In a recent study,
Fosnacht and colleagues (2017) used data from NSSE administrations be-
tween 2010 and 2012 to simulate the effects of low response rates and low
respondent counts. They found institution-level estimates for several mea-
sures of college student engagement to be reliable under low response rate
conditions (ranging from 5% to 25%) and with as few as 25 to 75 respondents,
based on a conservative reliability criterion (r ≥ .90), albeit with greater
sampling error and less ability to detect statistically significant differences
with comparison institutions.
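
The flavor of such a subsampling analysis can be conveyed with a small simulation on synthetic data. The sketch below generates engagement scores for hypothetical institutions, draws random subsamples at several response rates, and correlates the subsample means with the full-sample means; it assumes purely random nonresponse and is not the authors’ actual method.

import numpy as np

rng = np.random.default_rng(0)
n_institutions, n_students = 200, 1000

# Synthetic engagement scores: each institution has its own mean.
institution_means = rng.normal(50, 5, n_institutions)
scores = institution_means[:, None] + rng.normal(0, 15, (n_institutions, n_students))
full_means = scores.mean(axis=1)

for rate in (0.05, 0.15, 0.25):
    k = int(n_students * rate)
    sample_means = np.array(
        [rng.choice(s, size=k, replace=False).mean() for s in scores]
    )
    # Correlation between subsample-based and full-sample institution means.
    r = np.corrcoef(full_means, sample_means)[0, 1]
    print(f"response rate {rate:.0%}: correlation with full-sample means = {r:.3f}")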
Some authors have raised questions about the validity of NSSE data.
For example, Porter (2013) examined students’ self-reported learning gains
by developing and testing a theory of college student survey response. He
found little evidence of the construct and criterion validity of self-reported
learning gains. Campbell and Cabrera (2014) analyzed NSSE data from
student responses regarding their participation in three “deep approaches
to learning” at a single large public research university. Using confirmatory
factor analyses and structural equation modeling, the authors found that
the three scales were internally consistent, but participation in deep learning
was not related to students’ cumulative grade point averages.

5 See http://nsse.indiana.edu [August 2017].

Community College Survey of Student Engagement


The Community College Survey of Student Engagement (CCSSE), de-
veloped in partnership with NSSE, was established in 2001 as a project
of the Community College Leadership Program at the University of Texas
at Austin. Students are asked about their engagement in five groups of
benchmark practices thought to enhance learning: active and collaborative
learning, student effort, academic challenge, student-faculty interaction,
and support for learners. This survey has moderate coverage of 2-year insti-
tutions, but the data are not nationally representative. The questions about
the validity of student self-reported learning gains noted above (Porter,
2013) also apply to the data from this survey.

Faculty Survey of Student Engagement


The Center for Postsecondary Research at Indiana University conducts
the Faculty Survey of Student Engagement (FSSE) as a complement to
NSSE. The web-based survey, which is designed for all instructors (faculty
members, other instructors, graduate student instructors), asks questions
about instructors’ perceptions of:

• how often students engage in different activities;
• the importance of various areas of learning and development;
• the nature and frequency of their interactions with students; and
• how they organize their time, in and outside the classroom.

According to the Center for Postsecondary Research (2017a), more
than 250,000 instructors from more than 800 institutions have responded
to FSSE since 2003. However, only about 20–25 percent of faculty members
at the participating institutions have responded, and few 2-year institutions
participate. The resulting data are not nationally representative.

MONITORING SYSTEMS
The committee was not able to locate any existing systems that are
designed specifically for monitoring the status and quality of undergradu-
ate STEM education. However, it did identify existing monitoring systems
that include elements relevant to undergraduate STEM, as discussed below.

Science and Engineering Indicators


Science and Engineering Indicators (SEI) is a congressionally mandated
biennial report on U.S. and international science and engineering prepared
by NSF’s National Center for Science and Engineering Statistics (NCSES)
under the guidance of the National Science Board (National Science Foun-
dation, 2016; Khan, 2016). The report presents indicators, defined as
“quantitative representations that might reasonably be thought to provide
summary information bearing on the scope, quality, and vitality of the
science and engineering enterprise” (National Science Foundation, 2016,
p. F-2). The indicators are designed to enhance understanding of the current
environment and to inform policy development.
Chapter 2 of SEI presents indicators on human capital, including STEM
education at the K–12, undergraduate, and graduate levels, along with sta-
tistics on STEM graduates who are in the workforce. These indicators aim
to inform stakeholders about inputs, processes, outputs, and outcomes of
the STEM education system. Key indicators for undergraduate education
include: enrollment by type of institution, field, and demographic character-
istics; intentions to major in STEM fields; and recent trends in the number
of earned STEM degrees.

Enrollment 
The levels and flows of enrollment in STEM show how the different
STEM fields are changing over time and so can inform decision makers
charged with directing resources to undergraduate education. For post-
secondary education, the enrollment data include the number enrolled
in STEM relative to other degrees; change over time in the number of
undergraduate degrees conferred; demographic characteristics of students
enrolled in STEM fields, including citizenship status; and the number of stu-
dents enrolled in 2-year institutions by demographic characteristics. These
statistics are tabulated from the IPEDS fall enrollment survey.

Intentions and Attrition


In response to policy makers’ interest in retaining students in STEM
to ensure an adequate supply of STEM professionals, SEI reports inten-
tions of students to major in STEM fields by ethnicity, race, and gender.
Since 1971, the data source for this indicator has been the HERI Freshman
Survey (described above). SEI also presents statistics on attrition in STEM
fields, mainly citing studies by Eagan and colleagues (2014a) and Chen and
Soldner (2013).

Earned STEM Degrees


The health of a field of study is often represented by growth rates and
other statistics that show dynamics of the system. The most recent SEI
(National Science Foundation, 2016) presents the number and growth rates
of associate and baccalaureate degrees awarded in STEM, by demographic
characteristics, drawing on the IPEDS completion survey.

Data Gaps
A recent review of SEI (National Research Council, 2014) noted that,
although it provides a bevy of statistics on 4-year and postgraduate enroll-
ments and degrees, it needs improved information on 2-year students who
later earn higher degrees in STEM. Noting an increase in students who at-
tend 2-year institutions as part of their 4-year STEM education, the review
recommended that NCSES track graduates of 2-year institutions in STEM
fields and publish data on these students’ persistence at different levels of
education (National Research Council, 2014, p. 18).

Proprietary Monitoring Systems


In response to growing accountability pressures in higher education,
many multi-institution consortia have been formed to promote reform and
improvement. These groups often emphasize the collection and analysis of
data on student progress as a way to inform internal improvement efforts
and create consortium-wide benchmarks of progress. Building on the work
of these consortia, other groups have emerged to focus specifically on ap-
proaches to data collection and analysis and development of new measures
of higher education quality. Examples of these groups include Access to
Success, Achieving the Dream, Completion by Design, Complete College
America, National Community College Benchmarking Project, and Volun-
tary Institutional Metrics Project.
These groups often gather data from institutions in the form of specific
measures of student progress and outcomes. Like the indicators proposed
by the committee, these measures compile data on key aspects of higher
education quality into an easily understandable form that educational
policy makers and practitioners can use to monitor quality over time. For
example, Complete College America (2014) and the National Governors’
Association developed measures for state higher education systems to use in
voluntary data gathering: see Box 6-2. The Voluntary Institutional Metrics
Project expanded this work by using similar measures to gather data on
public, private nonprofit, and private for-profit institutions, including data
on students who enroll anytime during the academic year, not only in the
fall (HCM Strategists, 2013).
Measures similar to these have been used by groups such as Complete College
America and Achieving the Dream to gather institutional survey data and student
unit record data from many public 2-year and 4-year institutions. The institutions
sampled are not nationally representative of the universe of public and private
2-year and 4-year institutions. And because the resulting data are proprietary,
they might not be available for the committee’s proposed indicator system.
However, because the content of some measures used by higher education reform
consortia overlaps with the content of some of the proposed indicators, they
could be incorporated in expanded IPEDS institutional surveys (see Chapter 7).

BOX 6-2
Example Measure of Completion of Foundational Courses

In its common college completion metrics technical guide, Complete College
America (2014) invites states with unit-record systems to use those systems to
construct measures for reporting progress, and it invites states that lack such
systems (or with incomplete systems) to request the data from individual insti-
tutions in a way that would allow aggregation at the state level. The technical
guide provides specific measures to monitor various dimensions of 2- and 4-year
undergraduate education. One of these measures overlaps to some degree with
the committee’s Indicator 3.1.1: Completion of foundational courses including
developmental education courses to ensure STEM program readiness.

Progress Metric 3: Success in Gateway (first-year) College Courses

Purpose: To determine the proportion of undergraduate students completing entry
college-level mathematics courses, English courses, and both mathematics and
English courses within the first 2 academic years at public institutions of higher
education.

Definition: Annual number and percentage of entering first-time degree or
certificate-seeking undergraduate students who complete entry college-level math
and English courses within the first 2 consecutive academic years; by institution
type (2-year; 4-year research, very high activity; all other 4-year), race and
ethnicity, gender, age groups, Pell status (at time of entry), and remedial status
(at time of entry).

Numerator(s):
A. Number of students from cohort (denominator) who complete at least one
   entry college-level (nonremedial or developmental course) math course
   but not an entry-level English course within the first 2 consecutive aca-
   demic years. OR
B. Number of students from cohort (denominator) who complete at least one
   entry college-level (nonremedial or developmental course) English course
   but not an entry-level math course within the first 2 consecutive academic
   years. OR
C. Number of students from cohort (denominator) who complete at least one
   entry college-level (nonremedial or developmental course) English course
   and at least one entry-level math course within the first 2 consecutive
   academic years.

Denominator: For each of the above numerators, the number of first-time degree
or certificate-seeking undergraduate students enrolling in the fall semester of a
specified year.

SOURCE: Complete College America (2014, p. 14).
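
As an illustration of how a measure like Progress Metric 3 in Box 6-2 could be computed from student unit records, the sketch below tallies the three numerators and the cohort denominator for a toy dataset. The field names and records are invented; actual unit-record systems use their own schemas and definitions.

import pandas as pd

# Invented unit records for a handful of students in a fall entering cohort.
records = pd.DataFrame({
    "student_id": [1, 2, 3, 4, 5],
    "first_time_fall_cohort": [2015, 2015, 2015, 2015, 2016],
    "completed_entry_math": [True, False, True, False, True],      # within first 2 years
    "completed_entry_english": [True, True, False, False, True],   # within first 2 years
})

cohort = records[records["first_time_fall_cohort"] == 2015]
denominator = len(cohort)

math_only = (cohort["completed_entry_math"] & ~cohort["completed_entry_english"]).sum()
english_only = (~cohort["completed_entry_math"] & cohort["completed_entry_english"]).sum()
both = (cohort["completed_entry_math"] & cohort["completed_entry_english"]).sum()

for label, numerator in [("A: math only", math_only),
                         ("B: English only", english_only),
                         ("C: both", both)]:
    print(f"{label}: {numerator} of {denominator} ({numerator / denominator:.0%})")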

DATA FOR EACH INDICATOR


Drawing on the above review of data sources and monitoring systems,
the committee identified potential data sources and research needs for each
of the 21 indicators it proposes. For each indicator, the committee identified
potential data sources and considered how those sources would need to be
revised to support that indicator, such as by revising the data content to
align with the content of the indicator or by expanding a survey to ensure
national coverage of all groups of students and institutions. For some indi-
cators, the committee determined that research would be needed to more
clearly define them and develop the best measurement approaches, prior to
data collection. Through this process, the committee found that the avail-
ability of data was limited. For some indicators, nationally representative
datasets are available, but when these data are disaggregated, first to focus
on STEM students and then to focus on specific groups of STEM students,
the sample sizes become too small for statistical significance. For other
indicators, no data are available from either public or proprietary sources.
The committee’s analysis is presented below and summarized in Table 6-2.
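
A simple margin-of-error calculation illustrates why disaggregation strains these datasets. The sketch below uses the standard formula for a proportion with invented sample sizes; the specific counts are hypothetical and only meant to show how precision degrades as the sample is narrowed.

import math

def margin_of_error(p, n):
    # Approximate 95 percent margin of error for an estimated proportion p from n cases.
    return 1.96 * math.sqrt(p * (1 - p) / n)

p = 0.5  # worst case for the margin of error
for label, n in [("all sampled students", 20_000),
                 ("STEM students", 3_000),
                 ("one demographic group of STEM students", 150)]:
    print(f"{label}: n = {n}, margin of error = +/- {margin_of_error(p, n):.1%}")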

Indicator 1.1.1:
Use of Evidence-Based STEM Educational Practices
in Course Development and Delivery

Data Available and Potentially Available


Currently, few data are available on the extent to which educators
(namely, faculty members, graduate student instructors, adjunct instructors,
or others) throughout the nation use evidence-based practices in course de-
velopment and delivery. The limited data currently available come primarily
from self-report surveys, often targeted to instructors who participated in
professional development programs (National Research Council, 2012, Ch. 8;
Manduca et al., 2017). Because those who participate in professional develop-
ment may be more motivated than other instructors to learn about and adopt
evidence-based practices, the survey responses are unlikely to be representa-
tive of the teaching practices among all instructors in the discipline, nationally.

TABLE 6-2  Research Needs and Potential Data Sources for the Proposed Indicators

For each proposed indicator, the entries below list the potential data source
(see Table 6-1), the needs for research and for modification of data collection
instruments and/or systems, and the needs for coverage improvements.

Indicator 1.1.1 Use of Evidence-Based Educational Practices in Course Development and Delivery
• HERI Faculty Survey. Research/instrument needs: research to more clearly
  define evidence-based STEM educational practices; develop and add items.
  Coverage needs: include 2-year institutions; more systematic inclusion of
  graduate teaching assistants.
• NSOPF. Research/instrument needs: research to more clearly define
  evidence-based STEM educational practices; renew survey and develop and add
  items. Coverage needs: expand number of institutions sampled to allow more
  granular disaggregation.
• Faculty Survey of Student Engagement. Research/instrument needs: research to
  more clearly define evidence-based STEM educational practices; develop and
  add items. Coverage needs: include more 2-year institutions and increase
  response rates at participating institutions.

Indicator 1.1.2 Use of Evidence-Based STEM Educational Practices Outside the Classroom
• No existing source identified. Research/instrument needs: research to more
  clearly define evidence-based STEM educational practices outside the
  classroom.

Indicator 1.2.1 Extent of Instructors’ Involvement in Professional Development
• HERI Faculty Survey. Research/instrument needs: none. Coverage needs:
  include 2-year institutions; more systematic inclusion of graduate student
  instructors.
• NSOPF. Research/instrument needs: renew faculty survey and review
  professional development items from the 1988 department chairperson survey
  for possible inclusion. Coverage needs: expand number of institutions
  sampled to allow more granular disaggregation.

Indicator 1.2.2 Availability of Support or Incentives for Evidence-Based Course Development or Course Redesign
• No existing source identified. Research/instrument needs: research to
  identify and clearly define key supports or incentives.

Indicator 1.3.1 Use of Valid Measures of Teaching Effectiveness
• No existing source identified. Research/instrument needs: research to
  identify valid measures of teaching effectiveness.

Indicator 1.3.2 Consideration of Evidence-Based Teaching in Personnel Decisions by Departments and Institutions
• No existing source identified. Research/instrument needs: research on how to
  measure consideration of evidence-based teaching.

Indicator 2.1.1 Institutional Structures, Policies, and Practices That Strengthen STEM Readiness for Entering and Enrolled Students
• No existing source identified. Research/instrument needs: research to
  identify and define key structures, policies, and practices; components
  could be added to IPEDS on the basis of this research.

Indicator 2.1.2 Entrance to and Persistence in STEM Educational Programs
• BPS. Research/instrument needs: none. Coverage needs: expand number of
  institutions and students sampled to allow more granular disaggregation.
• HERI Freshman Survey and NSC. Research/instrument needs: none. Coverage
  needs: incorporate 2-year institutions in Freshman Survey; increase coverage
  of students’ academic programs in NSC data provided by institutions.

Indicator 2.1.3 Student Participation in Evidence-Based STEM Educational Practices
• HERI First College Year/Senior Survey. Research/instrument needs: research
  to more clearly define evidence-based educational practices; develop and add
  items. Coverage needs: include public and 2-year institutions and
  universities.
• NSSE. Research/instrument needs: same as above. Coverage needs: expand
  sample of 4-year institutions.
• CCSSE. Research/instrument needs: same as above. Coverage needs: increase
  coverage of 2-year institutions.

Indicator 2.2.1 Diversity of STEM Degree and Certificate Earners in Comparison with Diversity of Degree and Certificate Earners in All Fields
• IPEDS. Research/instrument needs: include items on students’ Pell
  (socioeconomic) status and disability status in data provided by
  institutions. Coverage needs: none.

Indicator 2.2.2 Diversity of Transfers from 2- to 4-year STEM Programs in Comparison with Diversity of Students in 2-year STEM Programs
• NSC. Research/instrument needs: add student attributes (gender, race and
  ethnicity, Pell status, disability status) to the data voluntarily submitted
  by institutions to NSC. Coverage needs: more comprehensive participation
  among and coverage of all types of postsecondary institutions.

Indicator 2.2.3 Time to Degree for Students in STEM Academic Programs
• NSC. Research/instrument needs: same as above. Coverage needs: increase
  coverage of students’ academic programs in the data voluntarily submitted by
  institutions to NSC.

Indicator 2.3.1 Diversity of STEM Faculty Members in Comparison with Diversity of STEM Graduate Degree Holders
• IPEDS. Research/instrument needs: add departmental discipline. Coverage
  needs: none.
• HERI Faculty Survey. Research/instrument needs: none. Coverage needs:
  include 2-year institutions.
• NSF Survey of Doctoral Recipients. Research/instrument needs: add
  departmental discipline for faculty appointments. Coverage needs: misses
  those with faculty appointments who lack doctorates.

Indicator 2.3.2 Diversity of STEM Graduate Student Instructors in Comparison with Diversity of STEM Graduate Students
• IPEDS. Research/instrument needs: add departmental discipline. Coverage
  needs: none.
• HERI Faculty Survey. Research/instrument needs: none. Coverage needs:
  include 2-year institutions; more systematic inclusion of graduate teaching
  assistants.

Indicator 2.4.1 Students Pursuing STEM Credentials Feel Included and Supported in their Academic Programs and Departments
• HERI Diverse Learning Environments Survey. Research/instrument needs: none.
  Coverage needs: expand coverage of 2-year and 4-year institutions.

Indicator 2.4.2 Faculty Teaching Courses in STEM Disciplines Feel Supported and Included in Their Departments
• HERI Faculty Survey with Campus Climate Module. Research/instrument needs:
  none. Coverage needs: include 2-year institutions; more systematic inclusion
  of graduate teaching assistants.

Indicator 2.4.3 Institutional Practices Are Culturally Responsive, Inclusive, and Consistent across the Institution
• No existing source identified.

Indicator 3.1.1 Completion of Foundational Courses, including Developmental Education Courses, to Ensure STEM Program Readiness
• BPS 04/09. Research/instrument needs: research to more clearly define
  developmental and foundational courses. Coverage needs: expand number of
  institutions and students sampled to allow more granular disaggregation.

Indicator 3.2.1 Retention in STEM Programs, Course to Course and Year to Year
• BPS 04/09. Research/instrument needs: none. Coverage needs: expand number of
  institutions and students sampled to allow more granular disaggregation.
• HERI Freshman Survey and NSC. Research/instrument needs: none. Coverage
  needs: incorporate 2-year institutions in Freshman Survey; increase coverage
  of students’ academic programs in NSC data provided by institutions.

Indicator 3.2.2 Transfers from 2-year to 4-year STEM Programs in Comparison with Transfers to All 4-year Programs
• NSC. Research/instrument needs: none. Coverage needs: increase coverage of
  students’ academic programs in data provided to NSC by institutions.

Indicator 3.3.1 Number of Students Who Attain STEM Credentials over Time (disaggregated by Institution Type, Transfer Status, and Demographic Characteristics)
• IPEDS. Research/instrument needs: include items on students’ Pell
  (socioeconomic) status and disability status in data provided by
  institutions. Coverage needs: none.

NOTES: BPS, Beginning Postsecondary Students Longitudinal Study; CCSSE,
Community College Survey of Student Engagement; HERI, Higher Education Research
Institute; IPEDS, Integrated Postsecondary Educational Data System; NSC,
National Student Clearinghouse; NSOPF, National Survey of Postsecondary Faculty;
NSSE, National Survey of Student Engagement. Data sources are described in
Table 6-1.

More generally, the validity of data from any self-report survey can be
threatened by faking, cheating, and motivation. Respondents may be mo-
tivated by social desirability (the tendency to think of and present oneself
in a favorable light) and therefore not respond accurately. For example,
Manduca and colleagues (2017) recently found that instructors reported
frequently using evidence-based teaching practices on self-report surveys,
but observational methods (discussed below) indicated that these instruc-
tors rarely did so. If self-report surveys are used for high-stakes purposes
(e.g., to inform decisions about promotion and tenure), this can create ad-
ditional incentives to tailor one’s responses to present oneself in the best
possible light (Sackett, 2012).
Measuring teaching is difficult, and different measurement methods
(e.g., self-report surveys, interviews, observations) have varying strengths,
weaknesses, and costs (William T. Grant Foundation, Spencer Foundation,
and Bill & Melinda Gates Foundation, 2014). Observational methods,
such as the Reformed Teaching Observational Protocol (Piburn and Daiyo,
2000) and the more recent Classroom Observation Protocol for Under-
graduate STEM (Smith et al., 2013) require trained experts who analyze
either videotapes or actual instruction using protocols that describe vari-
ous teaching practices. These methods provide high-quality data, but they
are time-consuming and expensive to implement even for small groups of
instructors. For practical reasons, and to capture information on teaching
practices among larger samples of instructors, development of self-report
surveys is continuing (e.g., Wieman and Gilbert, 2014). For these same
practical reasons, self-report surveys of instructors would be the most likely
source of national data for indicators to monitor the use of evidence-based
practices in and outside the classroom.
Since 2004, when NCES last administered the National Survey of
Postsecondary Faculty (NSOPF), the HERI has conducted the only com-
prehensive, nationally representative survey of faculty, and this survey
covers only faculty at 4-year colleges and universities. The HERI Faculty
Survey includes questions about instructors’ activities related to research,
teaching, and service, as well as their perceptions of students, campus ad-
ministration, and workplace stressors. It also invites respondents to report
on their participation in a range of professional development opportuni-
ties. The resulting (weighted) dataset represents the national population of
full-time faculty with responsibility for teaching undergraduates at 4-year
colleges and universities. However, the HERI Faculty Survey data have four
significant limitations as indicators of use of evidence-based educational
practices. First, the data are self-reports, which as noted above sometimes
do not correspond with more direct measures of instruction, such as ob-
servational protocols. Second, the HERI data are collected primarily from
full-time instructors at 4-year institutions, missing the growing numbers
of part-time instructors, as well as the large population of full-time and
part-time instructors at 2-year colleges. This gap is significant given the
committee’s charge to focus especially on indicators of the first 2 years of
undergraduate education. Third, it is not clear which instructional prac-
tices mentioned in the HERI Faculty Survey items are evidence based, al-
though some survey items refer to instructional practices that are commonly
cited in discipline-based educational research, such as cooperative learning,
group projects, and undergraduate research experiences (National Research
Council, 2012). Finally, reports of instructional behaviors—even those that
are evidence based (e.g., using classroom response systems and technology
to provide students with fast feedback)—do not include information about
whether the instructional behavior or approach was carried out appropri-
ately and effectively.
The FSSE (described above) also collects data on teaching practices.
Like the HERI Faculty Survey, data are collected primarily from full-time
instructors teaching at 4-year colleges and universities. Thus, neither FSSE
nor the HERI Faculty Survey provides sufficient information for this indicator
about full-time community college instructors or about part-time instructors
at 2-year and 4-year colleges and universities.
NSF requested funding for fiscal 2017 to re-institute the National
Survey of Postsecondary Faculty in partnership with NCES (National Sci-
ence Foundation, 2016, p. 52). Specifically, NSF planned to participate in
revising the survey to gather data on teaching practices, the evolving role
of technology in education, and the rapidly changing nature of faculty work,
which can inform approaches to professional development. Such an endeavor
would overcome the representation shortcomings of the HERI Faculty Survey
and the FSSE, but the committee can
offer no evaluation as to whether the content of the yet-to-be-developed
instrument would include items that sufficiently address the data needed to
measure Indicator 1.1.1.
Evidence on students’ exposure to evidence-based instructional practices
in mathematics may soon be available through the work of the National
Science and Technology Council Interagency Working Group on STEM
Education. In a quarterly progress report from the council, Handelsman
and Ferrini-Mundy (2016) indicated that the interagency working group
was on track to add an item on undergraduate mathematics instruction
to the second follow-up of the NCES High School Longitudinal Survey of
2009. Since that time, the survey data have been collected, including data
on undergraduate mathematics instruction. NCES is currently preparing the
data files and expects to release the new data in early 2018.6

6 See https://nces.ed.gov/surveys/hsls09 [November 2017].

Data and Research Needs


The first step in developing indicators of instructors’ use of evidence-
based practices in course development and delivery is additional research
to develop common definitions of evidence-based course development and
delivery and to identify its specific characteristics and components. That re-
search could lead to new elements for inclusion in existing federal surveys.
Alternatively, the federal government could contract with one or more of
the existing survey organizations to include new elements in their existing
surveys and administer the revised surveys to nationally representative
samples of instructors at 2-year and 4-year institutions.

Indicator 1.1.2:
Use of Evidence-Based Practices Outside the Classroom

Data Available and Potentially Available


No nationally representative data are currently available on the extent
to which students in 2-year and 4-year institutions participate in evidence-
based educational practices outside the classroom. There are research and
measurement challenges to defining such evidence-based practices out-
side the classroom and in documenting the extent to which they are used
(National Academies of Sciences, Engineering, and Medicine, 2017). These
measurement challenges are very similar to those for Indicator 1.1.1.

Data and Research Needs


Further research is needed to clearly define a specific set of evidence-
based practices outside the classroom that show evidence of effectiveness
for improving mastery of STEM concepts and skills and persistence in
undergraduate STEM courses. A possible starting point for this research
would be to review the recent report on undergraduate research experiences
(National Academies of Sciences, Engineering, and Medicine, 2017) and
Estrada’s (2014) review and synthesis of research and evaluation studies on
co-curricular programs. This research would provide a foundation for de-
veloping new survey questions that could potentially be included in the re-
design of the National Survey of Postsecondary Faculty. However, because
advising, mentoring, summer bridge programs, and other experiences are
often provided by student support staff or others, the survey sample might
have to be broadened to include such non-faculty groups. As an alternative,
new survey questions could be included in the HERI Faculty Survey, but it,
too, would have to be broadened to include student support staff, and all
types of instructors at both 2-year and 4-year institutions.

Indicator 1.2.1:
Extent of Instructors’ Involvement in Professional Development

Data Available and Potentially Available


National-level data on the use of professional development by instruc-
tors are not currently available. Surveys of instructors would be the most
likely source to fill this gap. Complementary surveys of academic deans,
department chairs, and central administrators would help to develop a
picture of what kinds of professional development programs are offered
at an institution. (The limitation of current surveys of faculty and other
instructors are discussed above, under Indicator 1.1.1.)

Data and Research Needs


Data and research needs in the area of faculty surveys are discussed
above, under Indicator 1.1.1.

Indicator 1.2.2:
Availability of Support or Incentives for Evidence-
Based Course Development or Course Redesign

Data Available and Potentially Available


There are no national data currently available on the extent of support
available to faculty for evidence-based course development or redesign.
Several dimensions of support have the potential to be measured in future
surveys. For example, the Partnership for Undergraduate Life Sciences
Education (PULSE) rubrics developed to measure change in life sciences
departments include rubrics to measure departmental support for course
redesign by instructors at 2-year and 4-year institutions (Brancaccio-Taras
et al., 2016).7 Future surveys might use similar rubrics to provide a measure
of the extent of support at institutional levels. This approach would need to
consider potential variations in scores across different STEM areas within an
institution and, ideally, scores would be weighted by instructional faculty numbers.
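
One simple way to combine such rubric results, sketched below with invented numbers, is a faculty-weighted average of departmental scores, so that larger STEM departments count proportionally more in the institution-level measure. This is only an illustration of the weighting idea, not a PULSE procedure.

# Invented departmental rubric scores (e.g., support for course redesign) and
# counts of instructional faculty.
departments = {
    "Biology": (3.2, 40),
    "Chemistry": (2.1, 25),
    "Mathematics": (2.8, 60),
}

total_faculty = sum(n_faculty for _, n_faculty in departments.values())
weighted_score = sum(score * n_faculty for score, n_faculty in departments.values()) / total_faculty
print(f"Faculty-weighted institutional score: {weighted_score:.2f}")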

Data and Research Needs


Research is needed to determine which of the dimensions of support for
instructors identified in previous research are most critical to successful
evidence-based course development and redesign and what additional dimensions
may be needed. For example, the PULSE rubrics address several dimensions of
support, including support for teaching/learning needs in STEM and faculty
mentoring for the teaching role (PULSE Fellows, 2016). Such research is the
first step toward developing new survey questions that might be included in
existing surveys of institutions (e.g., IPEDS) or in a revived National Survey
of Postsecondary Faculty.

7 See http://www.lifescied.org/content/12/4/579.full [July 2017].

Indicator 1.3.1:
Use of Valid Measures of Teaching Effectiveness

Data Available and Potentially Available


The committee was not able to find any existing national data on
the extent to which 2-year and 4-year institutions use valid measures of
teaching effectiveness to measure instructors’ performance. New instru-
ments have been developed to measure instructors’ teaching practices (e.g.,
Wieman and Gilbert, 2014; Walter et al., 2016), but to date they have been
administered primarily to small groups of instructors for the purpose of
measuring the instruments’ reliability and validity. Drinkwater, Matthews,
and Seiler (2017) report on one larger-scale use: The authors administered
the Teaching Practices Inventory (Wieman and Gilbert, 2014) to measure
the use of evidence-based teaching approaches in 129 courses across 13
departments and compared the results with those from a Canadian institu-
tion to identify areas in need of improvement.

Data and Research Needs


Research is needed to identify instruments designed to measure teach-
ing practices in the classroom, determine which ones are valid and reliable,
and consider whether and how the detailed questions they contain might
be translated for inclusion in large-scale national surveys. A useful start-
ing point would be a description of 12 strategies for measuring teaching
effectiveness in Berk (2005): student ratings, peer ratings, self-evaluation,
videos, student interviews, alumni ratings, employer ratings, administra-
tor ratings, teaching scholarship, teaching awards, learning outcomes, and
teaching portfolio. Researchers will need to take care when evaluating the
various assessment instruments that have been developed using each of
these strategies; the data resulting from the different instruments can be
interpreted and used in very different ways. For example, typical student
evaluations ask students what they liked about a particular aspect of a class,
but better information is gained by evaluation instruments that ask students
what they learned from particular aspects of a class (Seymour et al., 2000).

Indicator 1.3.2:
Consideration of Evidence-Based Teaching in Personnel
Decisions by Departments and Institutions

Available and Potentially Available Data


No national data are currently available on the extent to which de-
partments or institutions explicitly consider use of evidence-based teaching
practices when making personnel decisions.

Data and Research Needs


Further research is needed to identify and analyze potential sources
of data for this proposed indicator. Possible data sources could be based
on (1) an evaluation of institutional policies (e.g., institutions could be
asked to submit policy statements that could be analyzed with a rubric);
(2) self-reported data from institutions about their hiring and promotion
practices (e.g., institutional leaders could be asked whether departmental or
institutional policy statements explicitly consider evidence-based teaching
in personnel decisions); and (3) the perceptions of administrators, faculty,
and others about organizational features (e.g., instructors could be asked
about their perceptions about the extent to which their institution explicitly
considers evidence-based teaching in personnel decisions). For (1) and (2),
it is important to note that official policies can often differ substantially
from actual practices.

Indicator 2.1.1:
Institutional Structures, Policies, and Practices That Strengthen
STEM Readiness for Entering and Enrolled College Students

Data Available and Potentially Available


Currently, no existing data source provides information about the
prevalence of programs and practices that can strengthen STEM readiness
for U.S. college students. NCES does collect data on institutions’ develop-
mental education offerings (e.g., presence, broad academic area, number of
students enrolled) through IPEDS and on student enrollment in such offer-
ings through occasional longitudinal surveys such as BPS. However, there
is no survey that measures institutional practices and programs beyond
developmental education that bolster students’ STEM readiness.

Data and Research Needs


Research is needed to more clearly define the specific types of programs
and practices that support diverse students’ entrance to and progression
through STEM degree programs. Based on this research and conceptualiza-
tion, counts of the presence of such programs (e.g., use of multiple measures
to determine appropriate mathematics and writing placement, availability
of supplemental instruction for introductory STEM courses) could poten-
tially be included as new elements in IPEDS.

Indicator 2.1.2:
Entrance to and Persistence in STEM Academic Programs

Data Available and Potentially Available


Currently, only limited national data are available for this indicator.
Longitudinal surveys developed and administered by the NCES (e.g., BPS
and Baccalaureate and Beyond) are the primary means of tracking student
persistence in STEM and other specific academic programs. These in-depth
surveys provide academic data submitted by institutions, which may be
more reliable than students’ self-reported major fields. In addition, several
of the HERI surveys—Freshman Survey, Your First College Year Survey, and
College Senior Survey—allow researchers to examine STEM persistence us-
ing matched samples. However, although the data indicate whether students
who stated an intention to major in a STEM field completed a degree in that
field, they do not provide a measure of actual student major field selection
between college entrance and graduation, such as switching into STEM
majors. In addition, these data are self-reported by student respondents,
which are less reliable than data provided directly by institutions.

Data and Research Needs


The nationally representative longitudinal surveys administered by
NCES face challenges with respect to frequency of data collection and
coverage of data. Without more frequent data collection for new cohorts,
policy makers may wait years for an updated indicator of the status of
diverse students’ entrance to and persistence in STEM academic programs.
Connecting the HERI Freshman Survey with data from the NSC on term-
to-term enrollment and completion data supplemented with academic ma-
jor or field of study would provide substantial flexibility in understanding
the STEM persistence and completion patterns of diverse students. How-
ever, the data would be limited to students who attend nonprofit 4-year
colleges and universities.

HERI’s Freshman Survey collects a wealth of demographic information
on entering first-year students at 4-year institutions, which enables greater
flexibility in how data are disaggregated across student characteristics. The
survey also provides respondents with the opportunity to indicate their in-
tended major, though such self-reported data may provide some challenges
in terms of its reliability. The NSC data offer the possibility of examining
points of departure from STEM fields measured across academic terms or
years of study. However, as discussed in Chapter 3, HERI’s Freshman Sur-
vey does not include 2-year institutions, and part-time students attending
4-year institutions are often undersampled. In addition, neither HERI nor
NSC provides full coverage of all 4-year institutions. Finally, data from
both HERI and NSC are proprietary, which may delay or impede federal
agencies from obtaining efficient and timely access to the data necessary
for this indicator.

Indicator 2.1.3:
Equitable Student Participation in Evidence-Based
STEM Educational Programs and Experiences

Data Available and Potentially Available


Currently, no nationally representative data are available on students’
participation in evidence-based STEM educational programs and experi-
ences (National Academies of Sciences, Engineering, and Medicine, 2017).
Several national surveys collect data on students’ participation in “high-
impact practices” (see Chapter 3), including the National Survey of Stu-
dent Engagement, the Community College Survey of Student Engagement,
and HERI’s Your First College Year and College Senior Survey. Although
these surveys comprehensively capture the extent of students’ engagement
with those practices, they do not focus on evidence-based STEM
educational practices and programs as defined by the committee. In addi-
tion, these surveys have a number of other limitations: in particular, most of
them include only a small number of participating institutions so the result-
ing data are not necessarily representative of the national universe of either
2-year or 4-year institutions and their students (see Chapter 6). Moreover,
as noted above, these surveys and the resulting data are proprietary, which
may delay or restrict access to them for use in the proposed indicator, and
students self-report their participation in these activities, which may intro-
duce substantial measurement error. This concern is underscored by the
likelihood that students completing any of these instruments may not share
a common definition of high-impact practices, and the respective surveys
do not offer respondents the opportunity to associate participation in a
practice with a particular discipline or field of study (e.g., STEM).


Data and Research Needs


As noted in the previous chapter, additional research is needed to de-
velop common definitions of evidence-based STEM educational practices
and programs and to identify the specific characteristics and components
of those experiences and programs that are critical to advancing student
outcomes in STEM. That research could lead to new elements for inclu-
sion in existing federal surveys. Alternatively, the federal government could
contract with one or more of the existing survey organizations to include
new elements in their existing surveys and administer the revised surveys to
nationally representative groups of 2-year and 4-year institutions.

Indicator 2.2.1:
Diversity of STEM Degree and Certificate Earners in Comparison
with Diversity of Degree and Certificate Earners in All Fields

Data Available and Potentially Available


The IPEDS Completion Survey can provide data to measure this indica-
tor. Completions can be disaggregated by type of credential, field of creden-
tial, institutional type, gender, and race and ethnicity. These opportunities
for disaggregation represent an important starting point for measuring this
indicator. However, the IPEDS Completion Survey does not allow disaggre-
gation by all dimensions in the committee’s broad definition of diversity, in-
cluding persons with disabilities, socioeconomic status, and first-generation
status. In addition, the level of granularity allows disaggregation only by
broad categories of race and ethnicity (e.g., Asian, Hispanic). More refined
categories of race and ethnicity (e.g., Southeast Asian American, Chinese
American, Mexican American, Cuban American) would offer important in-
sight into the nation’s progress toward achieving representational diversity
in earners of STEM credentials.

Indicator 2.2.2:
Diversity of Transfers from 2-Year to 4-Year STEM Programs in
Comparison with Diversity of Students in 2-Year STEM Programs

Data Available and Potentially Available


The data in IPEDS do not offer the possibility of exploring the char-
acteristics of students who transfer to 4-year colleges and universities from
2-year institutions. However, data submitted to and reported by the NSC
may prove useful in measuring this indicator. Since more than 3,600 2-year
and 4-year colleges and universities participate, NSC data cover the vast
majority of postsecondary students in the United States. Term-to-term en-
rollment data include information about students’ degree program, their
fields of study, and the institution(s) in which they enrolled. Therefore,
measures of the proportion of students who transfer from 2-year institu-
tions to STEM-focused 4-year programs could be derived from NSC data.
Although NSC data appear promising, they have significant limitations
with respect to the availability of demographic characteristics; for example,
they have limited coverage of students’ gender, race and ethnicity, and Pell
status.8 Thus, new data or changes in the data voluntarily submitted by
institutions to NSC would be required before the data could be used to
inform this indicator.

Indicator 2.2.3:
Time-to-Degree for Students in STEM Academic Programs

Data Available and Potentially Available


Only limited data are currently available for this indicator. Data from
IPEDS do not offer the possibility of isolating 2-, 3-, 4-, or 6-year gradua-
tion rates for specific disciplines, and graduation rates in IPEDS are cohort
based and restricted to first-time, full-time students. NSC data could inform
the measurement of this indicator, since they identify the first term a student
is enrolled in a degree program and track the number of terms in which the
student designated a STEM major/focus for that particular credential. Since
there is broad institutional participation in NSC, its data may be able to
account for students who earn a STEM credential after enrolling at multiple
postsecondary institutions.
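As a rough illustration of this calculation, the sketch below (in Python, using the pandas library) counts the terms from a student’s first enrollment in a STEM program to the term in which the credential was earned; the records and field names are hypothetical stand-ins, not actual NSC data elements.

    import pandas as pd

    # Hypothetical term-by-term records for one student; field names are
    # illustrative stand-ins, not actual NSC data elements.
    terms = pd.DataFrame([
        {"student_id": 1, "term_index": 1, "stem_major": True, "credential_earned": False},
        {"student_id": 1, "term_index": 2, "stem_major": True, "credential_earned": False},
        {"student_id": 1, "term_index": 3, "stem_major": True, "credential_earned": False},
        {"student_id": 1, "term_index": 4, "stem_major": True, "credential_earned": True},
    ])

    # Count terms from first enrollment to the term in which the STEM credential
    # was earned, regardless of how many institutions appear in the record.
    first_term = terms["term_index"].min()
    degree_term = terms.loc[terms["credential_earned"], "term_index"].min()
    print(degree_term - first_term + 1)  # 4 terms elapsed for this student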
As discussed above, NSC data have several limitations, including lim-
ited opportunities for disaggregation across student demographic charac-
teristics, incomplete data coverage among participating institutions, and their
proprietary nature. These limitations could lead to inaccurate or imperfect
measures for this indicator. Further research and development would be
needed to analyze the nature and scope of these limitations and identify
and implement strategies to address them before NSC data could provide
accurate measures for this indicator.

8 Because Pell grants are given to low-income students, data on their recipients can be used
to roughly measure low-income students.


Indicator 2.3.1:
Diversity of STEM Instructors in Comparison with the
Diversity of STEM Graduate Degree Holders

Indicator 2.3.2:
Diversity of STEM Graduate Student Instructors in Comparison
with the Diversity of STEM Graduate Students

Data Available and Potentially Available


Currently, no nationally representative data are available to support
these two indicators. Although data on the diversity of postsecondary faculty
and enrolled graduate students are available in IPEDS’ Human Resources
Survey and Enrollment Survey, respectively, they only allow for the disaggre-
gation of faculty by classification (e.g., instructional faculty, research faculty),
race and ethnicity, gender, and employment status (i.e., full time, part time).
These IPEDS data do not provide any information regarding instructors’
departmental unit, field of study, or highest degree earned, so they cannot
currently be used to measure these indicators. (As noted above, NCES has
discontinued the National Study of Postsecondary Faculty.)
As discussed above, the HERI Faculty Survey represents the most com-
prehensive and representative data source on postsecondary faculty, but
those data have several limitations. The survey’s sample typically includes
few, if any, 2-year institutions, and part-time faculty at both 2-year and
4-year institutions are not well represented. In addition, the data are propri-
etary, which may delay or impede efficient, timely access. With these limita-
tions in mind, however, the Faculty Survey data do provide opportunities
to examine the diversity of faculty teaching at 4-year institutions in STEM
or STEM-related departments. The data can be analyzed across academic
rank, tenure status, employment status, race and ethnicity, and gender.

Indicator 2.4.1:
Students Pursuing STEM Credentials Feel Included and
Supported in Their Academic Programs and Departments

Indicator 2.4.2:
Instructors Teaching Courses in STEM Disciplines Feel
Included and Supported in Their Departments

Data Available and Potentially Available


No nationally representative data are currently available to inform
these two indicators, although there are several surveys that include ques-
tions on institutional climate. One, the Diverse Learning Environments
Survey,9 includes a validated sense-of-belonging measure based on four
survey items: “I feel a sense of belonging to this college.” “I see myself as
a part of the campus community.” “I feel I am a member of this college.”
“If asked, I would recommend this college to others.” The survey also asks
students the extent to which they agree that “faculty are approachable”
in their academic program and that the respondent has “a peer support
network among students” in their major. These data can be disaggregated
by demographic characteristics and major, allowing for the examination of
racial and ethnic or disciplinary differences.
The second indicator above can be measured with institutional climate
data collected through surveys of faculty. Several national surveys (e.g.,
the HERI Faculty Survey and the Faculty Survey of Student Engagement
administered by Indiana University) contain measures of teaching practices,
satisfaction, and, in the case of HERI’s survey, sources of stress. HERI’s Fac-
ulty Survey, for example, includes items related to a faculty member’s sense
of satisfaction with the collegiality of faculty in her or his department and
a set of items about various sources of stress, including one that relates to
stress due to “discrimination (e.g., prejudice, racism, sexism, homophobia,
transphobia).”
Leveraging any measure that relies on individuals’ responses to surveys
has several limitations. Generalizability and nonresponse bias will be of
concern if effective sampling strategies are not used or response rates to
the survey are low. In addition, surveys represent a snapshot in time, so re-
sponses may be affected by recent events (in the world, institution, depart-
ment, or one’s personal life). Surveys on institutional climate generally have
had limited participation, and institutions often conduct such assessments
in reaction to an incident of bias or discrimination on campus.

Data and Research Needs


Extensive survey research and development would be required to fully
develop these two indicators, including design of nationally representative
sampling frames and development of effective sampling strategies to address
the problem of limited participation in institutional climate surveys.

9 This survey is administered by the HERI and the Culturally Engaging Campus Environ-
ments project at Indiana University.


Indicator 2.4.3:
Institutional Practices Are Culturally Responsive,
Inclusive, and Consistent across the Institution

Data Available and Potentially Available


Currently, there is no data source that catalogs institutions’ and STEM
departments’ use of culturally responsive practices on a nationwide basis.
Most of the research in this area is qualitative in nature and seeks to de-
scribe these practices and the mechanisms by which they effectively support
STEM learning and persistence of diverse student populations. Similarly,
there is no source with comprehensive data on the type of training and
professional development available to faculty search committees, nor the
extent to which search committees engage in this training or development.

Data and Research Needs


Additional research is needed to identify and establish common defi-
nitions of which instructional and educational practices qualify as
culturally responsive. The ADVANCE program at the National Science
Foundation10 can serve as a valuable source of information about practices
that are effective in building and retaining diverse STEM faculty; additional
surveys and databases would need to be developed in order to provide a
systematic, nationwide measure of these practices.
10 ADVANCE: Increasing the Participation and Advancement of Women in Academic Sci-
ence and Engineering Careers. See https://www.nsf.gov/funding/pgm_summ.jsp?pims_id=5383
[August 2017].

Indicator 3.1.1:
Completion of Foundational Courses, Including Developmental
Education Courses, to Ensure STEM Program Readiness

Data Available and Potentially Available


As described above, the BPS can provide information for this indicator.
Data currently available from the full BPS 04/09 dataset can support the
measurement of this indicator through the associated transcript records.
To measure retention and completion of students in developmental educa-
tion courses designed to ensure STEM readiness, analysts can divide the
number of completed developmental STEM courses by the total number of
such courses attempted in order to calculate the percentage of attempted
courses that students finished. Given the diversity of students and institu-
tions represented in this dataset, this proportion can be disaggregated by
institutional type, as well as by students’ demographic characteristics. A
similar proportion can be calculated to measure the proportion of students
who complete foundational courses (college-level mathematics, English,
science, and technology) that are designed to prepare students for STEM
program readiness.
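To make the calculation concrete, the following sketch (in Python, using the pandas library) computes such a proportion from course-level records and disaggregates it by institutional type; the data frame and its column names are hypothetical stand-ins rather than actual BPS variable names, and the sketch omits the study’s population weights.

    import pandas as pd

    # Hypothetical course-level transcript records; columns are illustrative only.
    courses = pd.DataFrame([
        {"student_id": 1, "institution_type": "2-year", "dev_stem": True,  "completed": True},
        {"student_id": 1, "institution_type": "2-year", "dev_stem": True,  "completed": False},
        {"student_id": 2, "institution_type": "4-year", "dev_stem": True,  "completed": True},
        {"student_id": 3, "institution_type": "4-year", "dev_stem": False, "completed": True},
    ])

    # Completed developmental STEM courses divided by all such courses attempted.
    dev = courses[courses["dev_stem"]]
    print(dev["completed"].mean())                               # overall proportion
    print(dev.groupby("institution_type")["completed"].mean())   # by institutional type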

Data and Research Needs


Several challenges would need to be addressed when considering the use
of these data for this indicator. First, the BPS is conducted only periodically,
so data for national indicators would not always be current. Second, the
study design includes sampling of both institutions and students in institu-
tions. In order to develop a national indicator with this design, it would be
necessary to apply statistical population weights to the dataset, but not all
institutions or all students would be represented. Third, one would have
to determine which courses across such diverse institutions count as either
developmental or foundational STEM preparation courses. Chen (2016)
began to address the third challenge, using the BPS 04/09 data to track the
progress of students who enrolled in at least one developmental education
course through their credential completion. The author identified devel-
opmental courses on the basis of course titles and content descriptions. In
general, courses described with terms like developmental, remedial, pre-
collegiate, and basic skills were considered developmental (Chen, 2016).
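If the indicator were built up from the sampled records, the second challenge noted above would be addressed by weighting each observation by its BPS population weight before forming the ratio. A minimal sketch of that step follows, with invented weights rather than actual BPS weight variables.

    import numpy as np

    # One developmental STEM course outcome per sampled student (1 = completed).
    completed = np.array([1, 0, 1])
    # Hypothetical population weights for the three sampled students.
    weights = np.array([120.0, 80.0, 200.0])

    # Weighted completion proportion for the population the sample represents.
    print(np.average(completed, weights=weights))  # (120 + 200) / 400 = 0.8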
A national student unit record data system could address the challenge
that BPS is administered only periodically. Such a system would compile
course histories for all postsecondary students nationwide to provide uni-
versal coverage for this indicator (see Chapter 7).

Indicator 3.2.1:
Retention in STEM Degree or Certificate Programs,
Course to Course and Year to Year

Data Available and Potentially Available


As described above for Indicator 3.1.1, the BPS 04/09 dataset can
provide a solid foundation to measure average course completion rates in
STEM. Dividing the number of STEM-related courses that students com-
pleted or passed by the number of STEM-related courses they attempted
during a given period provides overall statistics, which can be disaggregated
by institutional characteristics, as well as by student demographic and en-
rollment characteristics.
Given its focus on a single cohort of students, the BPS 04/09 data can
also provide information on year-to-year retention rates in STEM degree
programs. Such a measure can be derived by taking the number of students
intending to pursue a credential with a STEM focus as the baseline of
students with an initial interest in STEM and then using this total as the
denominator for calculating retention rates in subsequent terms. The STEM
retention rate for any specified period would be calculated by subtracting
from the baseline total the number of baseline students who had switched
to non-STEM credential programs by the end of that period and dividing
the difference by the original baseline total. This proportion can then be
disaggregated by specific credentials
(e.g., associate’s degrees, bachelor’s degrees), institutional characteristics,
or student demographic or enrollment characteristics.
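A minimal sketch of the year-to-year retention calculation described above, written in Python with the pandas library, follows; the student records and field names are invented for illustration and do not correspond to actual BPS 04/09 variables.

    import pandas as pd

    # Hypothetical student-level records for one entering cohort.
    students = pd.DataFrame([
        {"student_id": 1, "intended_stem": True,  "in_stem_after_year1": True},
        {"student_id": 2, "intended_stem": True,  "in_stem_after_year1": False},
        {"student_id": 3, "intended_stem": True,  "in_stem_after_year1": True},
        {"student_id": 4, "intended_stem": False, "in_stem_after_year1": False},
    ])

    # Baseline: students who initially intended to pursue a STEM credential.
    baseline = students[students["intended_stem"]]

    # Year-one retention: baseline students still in a STEM program, divided by
    # the original baseline total.
    print(baseline["in_stem_after_year1"].sum() / len(baseline))  # 2 of 3, about 0.67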

Data and Research Needs


The same BPS 04/09 challenges identified for Indicator 3.1.1 apply
to this indicator with respect to the relative infrequency of data collection
and selective coverage. More frequent data collection for new cohorts is
needed to provide an updated indicator of progress toward the objective
of successful navigation into and through STEM programs of study. One
alternative with different challenges would be data from HERI’s Freshman
Survey, matched with NSC’s enrollment and completion data. The Fresh-
man Survey asks students to report their intended major and is administered
annually, and its coverage of 4-year institutions matches or is more com-
plete than that of BPS. However, it oversamples first-time, full-time students
and does not provide adequate coverage of students in 2-year institutions.
Expanding the survey to include representative samples of 2-year students
would address this problem.
The NSC provides term-to-term enrollment information and increas-
ingly includes students’ academic degree programs, as well as each creden-
tial and its associated field or discipline. The NSC covers 2-year and 4-year
institutions but it is voluntary: institutions elect to submit their data. If NSC
added student-level attributes (e.g., race and ethnicity, Pell grant status as
a proxy for socioeconomic status, full coverage of field or discipline associ-
ated with degree program) and achieved fuller, more comprehensive partici-
pation among and coverage of postsecondary institutions, the data could be
merged with IPEDS data to provide opportunities for disaggregation by
institutional characteristics. Such a combination could provide data suf-
ficient to calculate year-to-year program retention and completion rates in
STEM-related fields.


Indicator 3.2.2:
Transfers from 2-Year to 4-Year STEM Programs in
Comparison with Transfers to All 4-Year Programs

Data Available and Potentially Available


The NSC’s term-to-term enrollment data, which increasingly include infor-
mation about the academic focus of students’ degree programs, may be
sufficient for this indicator. To calculate this figure, analysts might
take the number of students from 2-year institutions who transfer to 4-year
institutions and who declare a STEM-related major each term and divide
that number by the total number of students who transferred from a 2-year
to a 4-year institution in any given academic term or time period.
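A sketch of this ratio, again in Python with pandas and with invented records standing in for NSC data elements, might look like the following.

    import pandas as pd

    # Hypothetical records for students who transferred from a 2-year to a
    # 4-year institution in a given term; field names are illustrative only.
    transfers = pd.DataFrame([
        {"student_id": 1, "term": "Fall 2016", "declared_stem_major": True},
        {"student_id": 2, "term": "Fall 2016", "declared_stem_major": False},
        {"student_id": 3, "term": "Fall 2016", "declared_stem_major": True},
        {"student_id": 4, "term": "Fall 2016", "declared_stem_major": False},
    ])

    # STEM transfers divided by all 2-year to 4-year transfers in the same term.
    print(transfers.groupby("term")["declared_stem_major"].mean())  # Fall 2016: 0.5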

Data and Research Needs


As described above, the NSC does not have complete data from all
participating institutions (e.g., academic field is not available for every
term for all institutions), and it does not have complete coverage of all
post­secondary institutions. Additional coverage of data elements (e.g., aca-
demic field) and broader coverage of 2-year and 4-year institutions would
be needed for an accurate indicator.

Indicator 3.3.1:
Percentage of Students Who Attain STEM Credentials
over Time, Disaggregated by Institution Type, Transfer
Status, and Demographic Characteristics

Data Available and Potentially Available


The IPEDS Completion Survey provides comprehensive coverage of the
credentials that students earn and the discipline or field of study associated
with those credentials for all postsecondary institutions that receive Title IV
funding.11 Data are collected at 100 percent, 150 percent, and 200 percent
of expected completion time and are broken down by race and gender. The
system provides information on the number and type of credentials (certifi-
cates, associate’s degrees, and bachelor’s degrees) by discipline or field, as
measured by classification of instructional program codes.12 This level of
granularity can provide analysts and policy makers with ample flexibility
in determining which credentials “count” and which disciplines and fields
of study represent STEM.
Importantly, the IPEDS Completion Survey collects data on all cre-
dentials earned in a given year, which helpfully avoids the issues related to
cohorts and enrollment status (i.e., full time or part time) in IPEDS data
on graduation rates. In addition, the credential counts reported by IPEDS
can be disaggregated by gender, race and ethnicity, and institutional type.
11 As noted above, almost all U.S. institutions receive Title IV funds.
12 See https://nces.ed.gov/ipeds/cipcode/Default.aspx?y=55 [August 2017].

Data and Research Needs


Currently, IPEDS does not allow disaggregation of credentials by stu-
dents’ disability status and socioeconomic status; however, since institutions
do report information related to students’ receipt of Pell grants, institu-
tions could be asked to submit data regarding credentials awarded to Pell
recipients, which would further support measurement of this indicator.
Given current data coverage, flexibility in defining the fields and disciplines
constituting STEM, the ability to measure specific types of credentials, and
possibilities of disaggregation, the IPEDS Completion Survey represents the
best source of data for this indicator.

SUMMARY AND CONCLUSIONS


The committee considered whether, and to what extent, various federal
and proprietary data sources are representative of the national universe of
2-year and 4-year institutions and include data relevant to the goals and
objectives in the conceptual framework and, more specifically, to the com-
mittee’s proposed indicators.
Focusing first on federal data sources and systems, the committee re-
viewed the IPEDS, which includes high-quality, current data related to the
committee’s goals and objectives, including detailed annual data on comple-
tion of degrees and certificates in different fields of study. However, these
data focus on students who start at an institution and also graduate from
it, thus excluding students who transfer, attend multiple institutions, or
enroll part time.

CONCLUSION 2  To monitor the status and quality of undergradu-
ate STEM education, federal data systems will need additional data on
full-time and part-time students’ trajectories across, as well as within,
institutions.

Although they are conducted less frequently than IPEDS institutional
surveys, federal longitudinal surveys of student cohorts, such as the BPS
04/09 study, provide useful data related to the committee’s goals and objec-
tives. The survey samples are carefully designed to be nationally representa-
tive, and multiple methods are used to obtain strong response rates. The
resulting data can be used to track students’ trajectories across institutions
and fields of study, including STEM fields. Previously, the discontinued
National Study of Postsecondary Faculty provided data related to the
committee’s proposed indicators, including faculty members’ disciplinary
backgrounds, responsibilities, and attitudes.

CONCLUSION 3  To monitor the status and quality of undergraduate
STEM education, recurring longitudinal surveys of faculty and students
are needed.

The committee found that IPEDS and other federal data sources gen-
erally allow data to be disaggregated by students’ race and ethnicity and
gender. However, conceptions of diversity have broadened to include ad-
ditional student groups that bring unique strengths to undergraduate STEM
education and may also encounter unique challenges. To fully support the
indicators, federal data systems will need to include additional student
characteristics.

CONCLUSION 4  To monitor progress toward equity, diversity, and
inclusion of STEM students and faculty, national data systems will
need to include demographic characteristics beyond gender and race
and ethnicity, including at least disability status, first-generation student
status, and socioeconomic status.

The committee also reviewed the many new, proprietary data sources
that have been developed over the past two decades in response to growing
accountability pressures in higher education. Although not always nation-
ally representative of 2-year and 4-year public and private institutions,
some of these sources include large samples of institutions and address the
committee’s goals and objectives.
Based on its review of existing public and proprietary data sources,
the committee considered research needs and data availability for each of
the 21 proposed indicators. It found that, for some indicators, further re-
search is needed to develop clear definitions and measurement approaches,
and overall, the availability of data for the indicators is limited. For some
indicators, nationally representative datasets are available, but when these
data are disaggregated, first to focus on STEM students and then to focus
on specific groups of STEM students, the sample sizes become too small
to yield statistically reliable estimates. For other indicators, no data are available from
either public or proprietary sources.


CONCLUSION 5  The availability of data for the indicator system is
limited, and new data collection is needed for many of them:

• No data sources are currently available for most of the indicators
of engaging students in evidence-based STEM educational practices
(Goal 1).
• Various data sources are available for most of the indicators of
equity, diversity, and inclusion (Goal 2). However, these sources
would need enhanced coverage of institutions and students to be
nationally representative, along with additional data elements on
students’ fields of study.
• Federal data sources are available for some of the indicators of
ensuring adequate numbers of STEM professionals (Goal 3).
However, federal surveys would need larger institutional and stu-
dent samples to allow finer disaggregation of the data by field of
study and demographic characteristics.

REFERENCES
Armstrong, J., and Zaback, K. (2016). Assessing and Improving State Postsecondary Data
Systems. Washington, DC: Institute for Higher Education Policy. Available: http://www.
ihep.org/sites/default/files/uploads/postsecdata/docs/resources/state_postsecondary_data_
systems-executive_summary.pdf [June 2016].
Berk, R.A. (2005). Survey of 12 strategies for measuring teaching effectiveness. International
Journal on Teaching and Learning in Higher Education, 17(1), 48–62.
Brancaccio-Taras, L., Pape-Lindstrom, P., Peteroy-Kelly, M., Aguirre, K., Awong-Taylor, J.,
Balser, R., Cahill, M.J., Frey, R.G., Jack, R., Kelrick, M., Marley, K., Miller, K.G.,
­Osgood, M., Romano, S., Uzman, J.A., and Zhao, J. (2016). The PULSE vision & change
rubrics, version 1.0: A valid and equitable tool to measure transformation of life sciences
departments at all institution types. CBE-Life Sciences Education, 15(4), art 60. Avail-
able: http://www.lifescied.org/content/15/4/ar60.full [March 2017].
Brick, J.M., and Williams, D. (2013). Explaining rising nonresponse rates in cross-sectional
surveys. The ANNALS of the American Academy of Political and Social Science, 645(1),
36–59.
Burns, S., Wang, X., and Henning, A. (Eds.). NCES Handbook of Survey Methods. (NCES
2011-609). Washington, DC: U.S. Department of Education, National Center for Edu-
cation Statistics. Available: https://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2011609
[July 2017].
Campbell, C.M., and Cabrera, A.F. (2014). Making the mark: Are grades and deep learning
related? Research in Higher Education, 55(5), 494–507.
Cataldi, E.F., Fahimi, M., and Bradburn, E.M. (2005). 2004 National Study of Postsecond-
ary Faculty (NSOPF:04) Report on Faculty and Instructional Staff in Fall 2003. (NCES
2005-172). Washington, DC: U.S. Department of Education, National Center for Educa-
tion Statistics. Available: http://nces.ed.gov/pubs2005/2005172.pdf [July 2016].
Center for Postsecondary Research. (2017a). About: FSSE. Bloomington: Indiana University
Center for Postsecondary Research. Available: http://fsse.indiana.edu/html/about.cfm
[June 2017].


Center for Postsecondary Research. (2017b). About: NSSE. Bloomington: Indiana University
Center for Postsecondary Research. Available: http://nsse.indiana.edu/html/about.cfm
[June 2017].
Chen, X. (2016). Remedial Coursetaking at U.S. Public 2- and 4-Year Institutions: Scope,
Experiences, and Outcomes. (NCES 2016-405). Washington, DC: U.S. Department
of Education, National Center for Education Statistics. Available: https://nces.ed.gov/
pubs2016/2016405.pdf [July 2017].
Chen, X., and Soldner, M. (2013). STEM Attrition: College Students’ Paths Into and Out of
STEM Fields. Washington, DC: U.S. Department of Education.
Complete College America. (2014). Four-Year Myth: Make College More Affordable, Restore
the Promise of Graduating on Time. Indianapolis, IN: Author.
Cunningham, A.F., and Milam, J. (2005). Feasibility of a Student Unit Record System Within
the Integrated Postsecondary Education Data System. (NCES 2005-160). Washington,
DC: U.S. Department of Education, National Center for Education Statistics.
Drinkwater, M.J., Matthews, K.E., and Seiler, J. (2017). How is science being taught? Mea-
suring evidence-based teaching practices across undergraduate science departments.
CBE Life Sciences Education, 16(1), ar18, 1-11. Available: http://www.lifescied.org/
content/16/1/ar18.full.pdf [October 2017].
Dynarski, S.M., Hemelt, S.W., and Hyman, J.M. (2013). The Missing Manual: Using National
Student Clearinghouse Data to Track Postsecondary Outcomes. (Working Paper No.
19552). Cambridge, MA: National Bureau of Economic Research. Available: http://
www.nber.org/papers/w19552 [September 2017].
Eagan, K., Hurtado, S., Figueroa, T., and Hughes, B. (2014a). Examining STEM Pathways
Among Students Who Begin College at Four-Year Institutions. Paper prepared for the
Committee on Barriers and Opportunities in Completing 2- and 4-Year STEM De-
grees. Washington, DC. Available: http://sites.nationalacademies.org/cs/groups/dbassesite/
documents/webpage/dbasse_088834.pdf [April 2015].
Eagan, M.K., Stolzenberg, E.B., Berdan Lozano, J., Aragon, M.C., Suchard, M.R., and
Hurtado, S. (2014b). Undergraduate Teaching Faculty: The 2013–2014 HERI Faculty
Survey. Los Angeles: Higher Education Research Institute, University of California,
Los Angeles. Available: http://heri.ucla.edu/monographs/HERI-FAC2014-monograph.
pdf [August 2016].
Eagan, M.K., Stolzenberg, E.B., Ramirez, J.J., Aragon, M.C., Suchard, M.R., and Rios-Aguilar,
C. (2016). The American Freshman: Fifty-Year Trends, 1966–2015. Los Angeles: Higher
Education Research Institute, University of California, Los Angeles. Available: http://
www.heri.ucla.edu/monographs/50YearTrendsMonograph2016.pdf [August 2016].
Estrada, M. (2014). Ingredients for Improving the Culture of STEM Degree Attainment with
Co-curricular Supports for Underrepresented Minority Students. Paper prepared for the
Committee on Barriers and Opportunities in Completing 2- and 4-Year STEM Degrees.
Available: http://sites.nationalacademies.org/cs/groups/dbassesite/documents/webpage/
dbasse_088832.pdf [July 2017].
Executive Office of the President. (2015). Using Federal Data to Measure and Improve the
Performance of U.S. Institutions of Higher Education. Washington, DC: Executive
­Office of the President. Available: https://collegescorecard.ed.gov/assets/UsingFederalData
ToMeasureAndImprovePerformance.pdf [February 2018].
Fosnacht, K., Sarraf, S., Howe, E., and Peck, L. (2017). How important are high response rates
for college surveys? Review of Higher Education, 40(2), 245–265.


Ginder, S., Kelly-Reid, J.E., and Mann, F.B. (2017). Graduation Rates for Selected Cohorts,
2008-2013; Outcome Measures for Cohort Year 2008; Student Financial Aid, Academic
Year 2015-2016; and Admissions in Postsecondary Institutions, Fall 2016: First Look
(Preliminary Data). (NCES 2017-150). Washington, DC: U.S. Department of Education,
National Center for Education Statistics. Available: https://nces.ed.gov/pubsearch/
pubsinfo.asp?pubid=2017150 [November 2017].
Handelsman, J., and Ferrini-Mundy, J. (2016). STEM Education: Cross-Agency Priority Goal
Quarterly Progress Update, FY2016 Quarter 1. Washington, DC: Office of Science and
Technology Policy.
HCM Strategists. (2013). A Better Higher Education Data and Information for Informing
Policy: The Voluntary Institutional Metrics Project. Washington, DC: HCM Strate-
gists. Available: http://hcmstrategists.com/wp-content/themes/hcmstrategists/docs/gates_
metrics_report_v9.pdf [January 2017].
Jenkins, D., and Fink, J. (2016). Tracking Transfer: New Measures of Institutional and State
Effectiveness in Helping Community College Students Attain Bachelor’s Degrees. New
York: Community College Research Center, Columbia University. Available: http://ccrc.
tc.columbia.edu/media/k2/attachments/tracking-transfer-institutional-state-effectiveness.
pdf [September 2017].
Khan, B. (2016). Overview of Science and Engineering Indicators 2016. Presentation to the
Committee on Developing Indicators for Undergraduate STEM Education, February 22.
Available: http://sites.nationalacademies.org/cs/groups/dbassesite/documents/webpage/
dbasse_171321.pdf [June 2016].
Knapp, L.G., Kelly-Reid, J.E., and Ginder, S.A. (2012). Enrollment in Postsecondary Institu-
tions, Fall 2011; Financial Statistics, Fiscal Year 2011; and Graduation Rates, Selected
Cohorts, 2003–2008: First Look. (Provisional data, NCES 2012-174). Washington, DC:
U.S. Department of Education, National Center for Education Statistics.
Manduca, C.A., Iverson, E.R., Luxenberg, M., Macdonald, R.H., McConnell, D.A., Mogk,
D.W., and Tewksbury, B.J. (2017). Improving undergraduate STEM education: The ef-
ficacy of discipline-based professional development. Science Advances, 3(2), 1–15.
Miller, B. (2016). Building a Student-level Data System. Washington, DC: Institute for Higher
Education Policy. Available: http://www.ihep.org/sites/default/files/uploads/postsecdata/
docs/resources/building_a_student-level_data_system.pdf [June 2017].
National Academies of Sciences, Engineering, and Medicine. (2016). Barriers and Opportuni-
ties for 2-Year and 4-Year STEM Degrees: Systemic Change to Support Students’ Diverse
Pathways. Washington, DC: The National Academies Press.
National Academies of Sciences, Engineering, and Medicine. (2017). Undergraduate Research
Experiences for STEM Students: Successes, Challenges, and Opportunities. Washington,
DC: The National Academies Press.
National Center for Education Statistics. (2012). 2012 Revision of NCES Statistical Standards:
Final. Washington, DC: National Center for Education Statistics. Available: https://nces.
ed.gov/statprog/2012/ [September 2017].
National Center for Education Statistics. (2014). Integrated Postsecondary Education Data
System (IPEDS). Available: http://nces.ed.gov/statprog/handbook/pdf/ipeds.pdf [June
2016].
National Center for Education Statistics. (2015). Table 326.20. Digest of Educational Sta-
tistics. Available: http://nces.ed.gov/programs/digest/d14/tables/dt14_326.20.asp [June
2016].
National Research Council. (2012). Discipline-Based Education Research: Understanding and
Improving Learning in Undergraduate Science and Engineering. Washington, DC: The
National Academies Press.


National Research Council. (2014). Capturing Change in Science, Technology, and Innovation:
Improving Indicators to Inform Policy. Washington, DC: The National Academies Press.
Available: http://www.nap.edu/catalog/18606/capturing-change-in-science-­technology-
and-innovation-improving-indicators-to [June 2016].
National Science Foundation. (2016). Science and Engineering Indicators 2016. Arlington,
Virginia: National Science Foundation. Available: https://www.nsf.gov/statistics/2016/
nsb20161/#/ [February 2018].
National Student Clearinghouse. (2016a). Notes from the Field #3: Research Center Notes on
Letter of Sara Goldrick-Rab and Douglas N. Harris, University of Wisconsin-Madison.
Herndon, VA: Author. Available: https://nscresearchcenter.org/workingwithourdata/
notesfromthefield-3 [July 2016].
National Student Clearinghouse. (2016b). Who We Are. Herndon, VA: Author. Available:
http://www.studentclearinghouse.org/about [June 2016].
Piburn, M., and Sawada, D. (2000). Reformed Teaching Observational Protocol (RTOP) Refer-
ence Manual. Available: http://files.eric.ed.gov/fulltext/ED447205.pdf [September 2017].
Porter, S.R. (2013). Self-reported learning gains: A theory and a test of college student survey
response. Research in Higher Education, 54(2), 201–226.
PULSE Fellows. (2016). The PULSE Vision and Change Rubrics Version 2.0. Available:
http://api.ning.com/files/Kfu*MfW7V8MYZfU7LNGdOnG4MnryzUgUpC2IxdtUmucn
B4QNCdLaOwWGoMoULSeKw8hF9jiFdh75tlzuv1nqtfCuM11hNPp3/PULSERubrics-
Packetv2_0_FINALVERSION.pdf [May 2017].
Sackett, P. (2012). Faking in personality assessments: Where do we stand? In M. Zieger, C.
MacCann, and R.D. Roberts (Eds.), New Perspectives in Faking in Personality Assessment
(pp. 330-344). New York: Oxford University Press.
Seymour, E., Wiese, D., Hunter, A., and Daffinrud, S.M. (2000). Creating a Better Mousetrap:
On-line Student Assessment of Their Learning Gains. Paper presentation at the National
Meeting of the American Chemical Society, San Francisco, CA, March 27.
Smith, M.K., Jones, F.H.M., Gilbert, S.L., and Wieman, C.E. (2013). The classroom obser-
vation protocol for undergraduate STEM (COPUS): A new instrument to characterize
university STEM classroom practices. CBE-Life Sciences Education, 12(4). Available: http://www.lifescied.
org/content/12/4/618.full [June 2016].
Van Noy, M., and Zeidenberg, M. (2014). Hidden STEM Knowledge Producers: Community
Colleges’ Multiple Contributions to STEM Education and Workforce Development.
Paper Prepared for the Committee on Barriers and Opportunities in Completing 2- and
4-Year STEM Degrees. Available: http://sites.nationalacademies.org/cs/groups/dbassesite/
documents/webpage/dbasse_088831.pdf [June 2017].
Walter, E.M., Beach, A.L., Henderson, C., and Williams, C.T. (2016). Describing instruc-
tional practice and climate: Two new instruments. In G.C. Weaver, W.D. Burgess, A.L.
­Childress, and L. Slakey (Eds.), Transforming Institutions: Undergraduate STEM Educa-
tion for the 21st Century. West Lafayette, IN: Purdue University Press.
Whitfield, C., and Armstrong, J. (2016). The State of State Postsecondary Data Systems:
Strong Foundations 2016. Boulder, CO: State Higher Education Executive Officers. Avail-
able: http://www.sheeo.org/sites/default/files/publications/SHEEO_StrongFoundations
2016_FINAL.pdf [June 2016].
Wieman, C., and Gilbert, S. (2014). The teaching practices inventory: A new tool for charac-
terizing college and university teaching in mathematics and science. CBE-Life Sciences
Education, 13(3), 552-569.
William T. Grant Foundation, Spencer Foundation, and Bill & Melinda Gates Foundation
(2014). Measuring Instruction in Higher Education: Summary of a Convening. New
York: William T. Grant Foundation. Available: http://wtgrantfoundation.org/library/
uploads/2015/11/Measuring-Instruction-in-Higher-Education.pdf [October 2017].


Wine, J., Janson, N., and Wheeless, S. (2011). 2004/09 Beginning Postsecondary Students Lon-
gitudinal Study (BPS:04/09) Full-scale Methodology Report. (NCES 2012-246). Wash-
ington, DC: National Center for Education Statistics, Institute of Education Sciences,
U.S. Department of Education. Available: https://nces.ed.gov/pubs2012/2012246_1.pdf
[June 2017].


Implementing the Indicator System

As detailed in Chapter 6, nationally representative data are not cur-
rently available from public or proprietary sources for most of the
committee’s proposed indicators. This limits policy makers’ ability
to track progress toward the committee’s goals of (1) increasing students’
mastery of STEM concepts and skills; (2) striving for equity, diversity, and
inclusion; and (3) ensuring adequate numbers of STEM professionals. That
chapter outlines steps toward developing each of the 21 indicators by revis-
ing various public and proprietary data sources to provide the data needed
for each one.
This chapter aims to reduce the complexity of implementing the indica-
tor system by presenting three options for obtaining the data required for
all of the indicators: (a) creating a national student unit record data system;
(b) expanding National Center for Education Statistics (NCES) data col-
lections; and (c) combining existing data from nonfederal sources. It also
discusses new data collection and analysis systems that could potentially be
used in the future to support the proposed indicator system. The chapter
ends with the committee’s conclusion about moving forward, including a
caution about the intended use of the proposed indicator system.

OPTION 1: CREATE A NATIONAL STUDENT UNIT RECORD DATA SYSTEM

A national student unit record data system incorporating administra-
tive data on individual students would provide reliable and usable data for
many of the proposed indicators focused on students’ progress through


STEM programs. Relative to the current Integrated Postsecondary Educa-
tion Data System (IPEDS), such a system would incorporate more accurate
and complete data. As noted in Chapter 6, IPEDS data do not include
students who transfer, those who attend multiple institutions, and those
who enroll part time; a student unit record data system would include
all of these groups.1 In addition, because IPEDS consists of aggregated,
institution-level data on cohorts of students, it cannot be used to track the
academic progress of individual students, or different groups of students,
over time. In contrast, a student unit record data system is well suited for
such longitudinal monitoring. Data from such a system could be aggregated
to monitor the progress and outcomes of intended STEM majors among
students of different genders, racial and ethnic groups, disability statuses,
and socioeconomic backgrounds (e.g., Pell grant eligibility).
1 The National Center for Education Statistics has added new survey components to begin
capturing information on part-time and transfer students: see Chapter 6.
A student unit record data system would allow longitudinal analyses of
trends over time for 8 of the 21 proposed indicators:

• Indicator 2.1.2: Entrance to and persistence in STEM academic
programs
• Indicator 2.2.1: Diversity of STEM degree and certificate earners
in comparison with diversity of degree and certificate earners in all
fields
• Indicator 2.2.2: Diversity of students who transfer from 2-year to
4-year STEM programs in comparison with diversity of students
in 2-year STEM programs
• Indicator 2.2.3: Time to degree for students in STEM academic
programs
• Indicator 3.1.1: Completion of foundational courses, including de-
velopmental education courses, to ensure STEM program readiness
• Indicator 3.2.1: Retention in STEM programs, course to course and
year to year
• Indicator 3.2.2: Transfers from 2-year to 4-year STEM programs
in comparison with transfers to all 4-year programs
• Indicator 3.3.1: Number of students who attain STEM credentials
over time, disaggregated by institution type, transfer status, and de-
mographic characteristics

In this option, the federal government would require institutions to col-
lect and provide to the national system standardized unit record data on
student educational experiences and activities. Currently, the creation of
such a system is prohibited by the 2008 amendments to the Higher Educa-
tion Act (see Chapter 6). At this time, however, there are bipartisan bills
in Congress (H.R. 2434 and S. 1121, the College Transparency Act) that
would amend the Higher Education Act to repeal the current ban on a
­national student unit record data system and direct the NCES to create such
a system. If the bills became law, NCES, when creating the new system,
could take advantage of the lessons learned from the many state higher
education systems and multi-institution education reform consortia that
have successfully collected and used unit record student data to monitor
undergraduate education.
Creating a national database of student unit records appears to be both
technically and financially feasible and could reduce institutions’ current
burden of reporting IPEDS data, as shown in two feasibility studies.
In 2005, NCES commissioned the first study to examine the feasibility
of creating a student unit record data system that could replace the ag-
gregated institution-level data included in IPEDS. The study (Cunningham
and Milam, 2005) presented three findings. First, the authors found that
NCES had at the time most of the computing hardware and software neces-
sary to implement such a system, including equipment for web-based data
collection and servers for storing large amounts of student data. However,
to ensure the security and confidentiality of the data, NCES would have to
create a new, permanent database storage system, protected by physical and
software firewalls; the authors did not estimate how much these modifica-
tions would have cost.
Second, the authors found that implementing the new system at that
time would present colleges and universities with technical challenges,
requiring expenditures for new technology, training in the use of the new
reporting system, and personnel. Cunningham and Milam (2005) gathered
estimates of implementation costs from hundreds of people from a variety
of individual institutions, state higher education agencies, and higher educa-
tion associations. The cost estimates varied widely, depending on whether
an institution was currently participating in a state-level student unit record
data system (see Chapter 6) and its information technology and institu-
tional research capabilities. Another key factor was whether an institution
was already uploading student data to the National Student Loan Data
System (NSLDS; see Chapter 6); at the time, nearly all institutions were
doing so. Given these complex factors influencing costs, Cunningham and
Milam (2005) did not estimate an average per-institution cost, but noted
the possibility of providing federal support to defray these costs.
Third, the authors found that institutional costs would eventually de-
cline, partly because some IPEDS reporting would be eliminated.
Cunningham and Milam (2005) concluded that it was technically fea-
sible for most institutions to report student data to a national student unit
record data system, given time for transition. They did not address, how-
ever, whether this new reporting would be financially feasible for participat-
ing colleges and universities.
More recently, Miller (2016) analyzed various approaches to develop-
ing a national student unit record data system to be overseen by the Depart-
ment of Education. Like Cunningham and Milam (2005), the author noted
that nearly all institutions were already reporting data to NSLDS, which
included much of the data needed to track the progress of students receiving
financial aid over time (e.g., enrollments, transfers, field of study, comple-
tions). Because, on average, 70 percent of all students receive financial aid
(Executive Office of the President of the United States, 2017), NSLDS al-
ready includes much of the data needed for a national system. Miller thus
proposed that a national data system could best be created by building on
the existing capability of NSLDS.
Expanding NSLDS to include all data from all students appears tech-
nically feasible, based on the system’s recent history of adding 17 percent
more student records between February 2010 and July 2013. Miller (2016)
estimated that the programming changes to accommodate this growth
would cost around $1 million. Miller cautioned, however, that NSLDS
already has significant technical limitations and a history of poor process-
ing speeds. Adding millions of additional records on students who do not
receive financial aid could slow the system’s ability to perform its core
function of ensuring students receive financial aid and repay their loans.
To address this problem, Miller (2016) proposed a complete modernization
of NSLDS, which would require additional funding; he did not provide a
cost estimate.
In this proposal, NCES would handle access to the student unit record
data system by policy makers, researchers, and the public (Miller, 2016).
At least once a year, a data extract would be transmitted to NCES, which
would be responsible for generating public reports on higher education and
populating IPEDS with data no longer being reported to it. NCES would
also establish and implement protocols for allowing access to the database
while maintaining the privacy and confidentiality of individual student
records.
Miller (2016) argues that moving to the system he proposes would not
burden most institutions with massive, costly changes in their reporting, for
two reasons. First, NSLDS requires institutions to report data on only those
students who receive federal financial aid. Yet many institutions submit data
on all of their students to the National Student Clearinghouse (NSC), which
in turn reports on financial aid recipients to NSLDS on behalf of these
institutions. For this large group of public and private institutions (more
than 3,600 according to NSC; see Chapter 6), moving to a student unit
record system would simply mean passing along data they are already as-
sembling. Second, the student unit record data system would replace seven
components of IPEDS entirely and most of an eighth component. Using
NCES estimates of the average number of hours required to complete each
component in 2015–2016, Miller (2016) estimated that moving to a student
unit record system would reduce reporting time by about 60 percent, saving
around 88.6 hours per institution per year.
The current lack of a student unit record data system makes it difficult
to develop a national picture of and to monitor trends over time among
the nation’s postsecondary students and institutions; this difficulty applies
to undergraduate STEM education.
However, even if the prohibition against such a system is removed, a
national database of student unit records would not provide information
on instructors,2 who play a key role in engaging students in evidence-based
educational experiences inside and outside the classroom, nor on programs
and perceptions related to equity, diversity, and inclusion. Therefore, stu-
dent and instructor surveys about educational experiences and activities
would still be necessary to support the proposed indicator system. Table 7-1
shows how each indicator would be supported under this option.

2 As noted above, the committee uses the term “instructor” to refer to all individuals who
teach undergraduates, including tenured and tenure-track faculty, part-time and adjunct in-
structors, and graduate student instructors.

TABLE 7-1  Data for Indicators in Option 1

Objective | Indicator | Proposed Data Source
1.1 Use of evidence-based STEM educational practices both in and outside of classrooms | 1.1.1 Use of evidence-based STEM educational practices in course development and delivery | Renewed and expanded NSOPF
 | 1.1.2 Use of evidence-based STEM educational practices outside the classroom | Renewed and expanded NSOPF
1.2 Existence and use of supports that help STEM instructors use evidence-based learning experiences | 1.2.1 Extent of instructors’ involvement in professional development | Renewed and expanded NSOPF
 | 1.2.2 Availability of support or incentives for evidence-based course development or course redesign | Renewed and expanded NSOPF


1.3 An institutional climate that values undergraduate STEM instruction | 1.3.1 Use of valid measures of teaching effectiveness | Renewed and expanded NSOPF
 | 1.3.2 Consideration of evidence-based teaching in personnel decisions by departments and institutions | Renewed and expanded NSOPF
1.4 Continuous improvement in STEM teaching and learning | No indicators: see “Challenges of Measuring Continuous Improvement” in Chapter 3.
2.1 Equity of access to high-quality undergraduate STEM educational programs and experiences | 2.1.1 Institutional structures, policies, and practices that strengthen STEM readiness for entering and enrolled college students | Extended and expanded BPS
 | 2.1.2 Entrance to and persistence in STEM educational programs | Unit record data system
 | 2.1.3 Equitable student participation in evidence-based STEM educational practices | Extended and expanded BPS
2.2 Representational equity among STEM credential earners | 2.2.1 Diversity of STEM degree and certificate earners in comparison with diversity of degree and certificate earners in all fields | Unit record data system
 | 2.2.2 Diversity of students who transfer from 2-year to 4-year STEM programs in comparison with diversity in 2-year STEM programs | Unit record data system
 | 2.2.3 Time to degree for students in STEM academic programs | Unit record data system
2.3 Representational diversity among STEM instructors | 2.3.1 Diversity of STEM instructors in comparison with diversity of STEM graduate degree holders | Revised IPEDS Human Resources Survey

Copyright National Academy of Sciences. All rights reserved.


Indicators for Monitoring Undergraduate STEM Education

IMPLEMENTING THE INDICATOR SYSTEM 183

TABLE 7-1 Continued
Objective Indicator Proposed Data Source
2.3.2 Diversity of STEM Revised IPEDS Human
graduate student instructors Resources Survey
in comparison with diversity
of STEM graduate students

2.4 Inclusive environments 2.4.1 Students pursuing Extended and expanded BPS
in institutions and STEM STEM credentials feel
departments included and supported in
their academic programs and
departments

2.4.2 Instructors teaching Renewed and expanded


courses in STEM disciplines NSOPF
feel supported and included
in their departments

2.4.3 Institutional Renewed and expanded


practices that are culturally NSOPF
responsive, inclusive,
and consistent across the
institution

3.1 Foundational 3.1.1 Completion of Unit record data system


preparation for STEM for foundational courses,
all students including developmental
education courses, to ensure
STEM program readiness

3.2 Successful navigation 3.2.1 Retention in STEM Unit record data system
into and through STEM programs, course to course
programs of study and year to year

3.2.2 Transfers from 2-year Unit record data system


to 4-year STEM programs in
comparison with transfers to
all 4-year programs

3.3 STEM credential 3.3.1 Number of students Unit record data system
attainment who attain STEM credentials
over time, disaggregated
by institution type, transfer
status, and demographic
characteristics
NOTES: BPS, Beginning Postsecondary Students Longitudinal Study; IPEDS, Integrated Post-
secondary Education Data System; NSOPF, National Study of Postsecondary Faculty.


OPTION 2: EXPAND NCES DATA COLLECTIONS


A currently legal and more feasible approach would rely on IPEDS and
other NCES surveys to support the proposed indicators. As discussed in
Chapter 6, nearly all public and private, for-profit and nonprofit institutions
provide annual data to IPEDS, reporting on the vast majority of students
who are currently enrolled in STEM courses and majors. Using this option
to inform the proposed indicators would tap the capabilities of an estab-
lished and reliable data collection mechanism, although it would provide
less robust data than under Option 1. The usefulness of IPEDS data for the indicators is limited by the statistical and analytic challenges of working with institution-level aggregates rather than student-level records.
Currently, IPEDS data partly support only 2 of the 21 proposed
indicators:

•	Indicator 2.2.1: Diversity of STEM degree and certificate earners in comparison with diversity of degree and certificate earners in all fields
•	Indicator 3.3.1: Number of students who attain STEM credentials over time, disaggregated by institution type, transfer status, and demographic characteristics

The IPEDS data do not fully support these two indicators because they
do not permit the disaggregation of STEM credentials by disability status
and Pell status (a proxy for socioeconomic status).3 Fortunately, states and
voluntary multi-institution data initiatives have developed and implemented
an expanded range of measures that could fill some of the gaps in IPEDS
(see below).
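To make the comparison called for by Indicator 2.2.1 concrete, the short sketch below computes each group's share of STEM credential earners against its share of credential earners in all fields from completions counts of the kind IPEDS collects. The group labels and counts are hypothetical, and the committee does not prescribe this particular formula.

    # Illustrative only: compares the demographic composition of STEM credential
    # earners with that of credential earners in all fields for one hypothetical year.
    # The group labels and counts are invented; real data would come from IPEDS
    # completions files disaggregated by demographic group.

    stem_completions = {"group A": 1200, "group B": 300, "group C": 500}
    all_completions = {"group A": 5000, "group B": 2500, "group C": 2500}

    def shares(counts):
        total = sum(counts.values())
        return {group: n / total for group, n in counts.items()}

    stem_shares = shares(stem_completions)
    all_shares = shares(all_completions)

    for group in stem_shares:
        # A ratio near 1.0 means the group earns STEM credentials at about the same
        # rate as credentials overall; a ratio below 1.0 signals underrepresentation.
        parity = stem_shares[group] / all_shares[group]
        print(f"{group}: STEM share {stem_shares[group]:.1%}, "
              f"all-fields share {all_shares[group]:.1%}, ratio {parity:.2f}")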
Under this option, NCES (with support from the National Science
Foundation [NSF]) would expand IPEDS institutional surveys to include
institution-level measures of student progress toward degrees and certifi-
cates. In addition, NCES would extend existing student surveys and revive
a major faculty survey. Table 7-2 shows how each indicator would be sup-
ported under this option.
In comparison with the first option, this option would place a greater
burden on institutions of higher education. In Option 1, institutional re-
search staff would only have to upload student unit record data to the
national student unit record system. In this option, institutional research
staff would be required to collect additional data (beyond their current col-
lections) and use it to calculate additional measures for reporting to IPEDS.

3 Beginning in 2017–2018, NCES will gather information on students' Pell grant status as part of the new outcome measures survey within IPEDS.

TABLE 7-2  Data for Indicators in Option 2

Objective 1.1 Use of evidence-based STEM educational practices both in and outside of classrooms
  Indicator 1.1.1 Use of evidence-based STEM educational practices in course development and delivery. Proposed data source: Renewed and expanded NSOPF
  Indicator 1.1.2 Use of evidence-based STEM educational practices outside the classroom. Proposed data source: Renewed and expanded NSOPF

Objective 1.2 Existence and use of supports that help STEM instructors use evidence-based learning experiences
  Indicator 1.2.1 Extent of instructors' involvement in professional development. Proposed data source: Renewed and expanded NSOPF
  Indicator 1.2.2 Availability of support or incentives for evidence-based course development or course redesign. Proposed data source: Renewed and expanded NSOPF

Objective 1.3 An institutional climate that values undergraduate STEM instruction
  Indicator 1.3.1 Use of valid measures of teaching effectiveness. Proposed data source: Renewed and expanded NSOPF
  Indicator 1.3.2 Consideration of evidence-based teaching in personnel decisions by departments and institutions. Proposed data source: Renewed and expanded NSOPF

Objective 1.4 Continuous improvement in STEM teaching and learning
  No indicators: see "Challenges of Measuring Continuous Improvement" in Chapter 3.

Objective 2.1 Equity of access to high-quality undergraduate STEM educational programs and experiences
  Indicator 2.1.1 Institutional structures, policies, and practices that strengthen STEM readiness for entering and enrolled college students. Proposed data source: Extended and expanded BPS
  Indicator 2.1.2 Entrance to and persistence in STEM educational programs. Proposed data source: Extended and expanded BPS
  Indicator 2.1.3 Equitable student participation in evidence-based STEM educational practices. Proposed data source: Extended and expanded BPS

Objective 2.2 Representational equity among STEM credential earners
  Indicator 2.2.1 Diversity of STEM degree and certificate earners in comparison with diversity of degree and certificate earners in all fields. Proposed data source: Revised and expanded IPEDS
  Indicator 2.2.2 Diversity of students who transfer from 2-year to 4-year STEM programs in comparison with diversity in 2-year STEM programs. Proposed data source: Revised and expanded IPEDS
  Indicator 2.2.3 Time to degree for students in STEM academic programs. Proposed data source: Revised and expanded IPEDS

Objective 2.3 Representational diversity among STEM instructors
  Indicator 2.3.1 Diversity of STEM instructors in comparison with diversity of STEM graduate degree holders. Proposed data source: Revised IPEDS Human Resources Survey
  Indicator 2.3.2 Diversity of STEM graduate student instructors in comparison with diversity of STEM graduate students. Proposed data source: Revised IPEDS Human Resources Survey

Objective 2.4 Inclusive environments in institutions and STEM departments
  Indicator 2.4.1 Students pursuing STEM credentials feel included and supported in their academic programs and departments. Proposed data source: Extended and expanded BPS
  Indicator 2.4.2 Instructors teaching courses in STEM disciplines feel supported and included in their departments. Proposed data source: Renewed and expanded NSOPF
  Indicator 2.4.3 Institutional practices that are culturally responsive, inclusive, and consistent across the institution. Proposed data source: Renewed and expanded NSOPF

Objective 3.1 Foundational preparation for STEM for all students
  Indicator 3.1.1 Completion of foundational courses, including developmental education courses, to ensure STEM program readiness. Proposed data source: Revised and expanded IPEDS

Objective 3.2 Successful navigation into and through STEM programs of study
  Indicator 3.2.1 Retention in STEM programs, course to course and year to year. Proposed data source: Revised and expanded IPEDS
  Indicator 3.2.2 Transfers from 2-year to 4-year STEM programs in comparison with transfers to all 4-year programs. Proposed data source: Revised and expanded IPEDS

Objective 3.3 STEM credential attainment
  Indicator 3.3.1 Number of students who attain STEM credentials over time (disaggregated by institution type, transfer status, and students' demographic characteristics). Proposed data source: Revised and expanded IPEDS

NOTES: BPS, Beginning Postsecondary Students Longitudinal Study; IPEDS, Integrated Postsecondary Education Data System; NSOPF, National Study of Postsecondary Faculty.

Overall, this option would require NCES to change its data collection in
three ways: expanding IPEDS, expanding the Beginning Postsecondary Stu-
dents Longitudinal Study (BPS), and renewing and expanding the National
Study of Postsecondary Faculty.

Expanding IPEDS
Under this option, IPEDS surveys would be expanded to require institu-
tions to report on several measures that have been developed and tested in
voluntary data collections by states and higher education reform consortia:
see Box 7-1. These new measures, like the committee’s proposed indicators,
are designed to represent important dimensions of undergraduate education
in a readily understandable form.
Specifically, NCES would expand the IPEDS surveys to include the fol-
lowing measures, as defined by Janice and Voight (2016, p. iv):


•	Program of study selection: The percentage of students in a cohort who demonstrate a program of study selection by taking nine credits (or three courses) in a meta-major in the first year. Meta-majors include: education; arts and humanities; social and behavioral sciences and human services; science, technology, engineering, and math (STEM); business and communication; health; and trades. (An illustrative computational sketch of this measure follows Box 7-1 below.)
•	Enrollment: A 12-month headcount that includes all undergraduate students who enroll at any point during the calendar year, disaggregated by program of study selection.
•	Gateway course completion: The percentage of students completing college-level, introductory mathematics and English courses tracked separately in their first year, disaggregated by program of study selection.
•	Retention rate: The percentage of students in a cohort who are either enrolled at their initial institution or transfer to a longer duration program at the initial or a subsequent institution, calculated annually for up to 200 percent of program length, disaggregated by program of study selection.
•	Transfer rate: The percentage of students in a cohort who transfer into longer programs at the initial or subsequent institution(s), for up to 200 percent of program length, disaggregated by program of study selection.

BOX 7-1
New Measures in Higher Education

Seeking data to guide improvement in higher education, states and multi-institution reform consortia launched new surveys and developed new measures of student progress to fill gaps in existing state and federal datasets (Engle, 2016). But these new data collection efforts were rarely aligned with the older, existing federal and state data systems, yielding a patchwork of individual, unconnected data systems. In 2015, with support from the Bill & Melinda Gates Foundation, the Institute for Higher Education Policy (IHEP) convened a working group of data experts to discuss ways to improve the quality of higher education data systems in order to inform state and federal policy conversations. IHEP commissioned these experts to write a series of papers examining technical, resource, and policy considerations related to current data collection efforts and data systems, and offering recommendations for improvement.*
Building on this work, Janice and Voight (2016) recommended the use of approximately 40 specific performance measures organized around the three high-level dimensions of performance, efficiency, and equity. The authors proposed that these measures could frame a comprehensive data system to address important questions about characteristics of college students, student outcomes, and college costs. The proposed measures include 20 measures of students' progression and completion that are relevant to the committee's proposed indicators, and they have been used by institutions, states, and education reform consortia to collect and interpret data.

*The papers are available at: http://www.ihep.org/postsecdata/mapping-data-landscape/national-postsecondary-data-infrastructure [August 2017].
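The following is a minimal, illustrative sketch of how the program-of-study-selection measure defined in the list above might be computed from first-year course records. The course-to-meta-major mapping, the record format, and the example student are hypothetical rather than part of the Janice and Voight (2016) specification.

    # Illustrative sketch of the program-of-study-selection measure: a student
    # "selects" a meta-major by completing nine credits (or three courses) in it
    # during the first year. The catalog mapping, record format, and example
    # student below are hypothetical.

    from collections import defaultdict

    COURSE_META_MAJOR = {                      # hypothetical catalog mapping
        "MATH101": "STEM", "BIOL110": "STEM", "CHEM105": "STEM",
        "ENGL101": "arts and humanities", "PSYC100": "social and behavioral sciences",
    }

    def meta_major_selection(first_year_courses, credit_threshold=9, course_threshold=3):
        """Return the meta-major a student has demonstrably selected, or None.

        first_year_courses: list of (course_id, credit_hours) completed in year one.
        If a student meets the threshold in more than one meta-major, the first one
        found is returned; a real specification would need a tie-breaking rule.
        """
        credits = defaultdict(float)
        courses = defaultdict(int)
        for course_id, credit_hours in first_year_courses:
            meta = COURSE_META_MAJOR.get(course_id)
            if meta is None:
                continue                       # course not mapped to any meta-major
            credits[meta] += credit_hours
            courses[meta] += 1
        for meta in credits:
            if credits[meta] >= credit_threshold or courses[meta] >= course_threshold:
                return meta
        return None

    # This hypothetical student would be counted in the STEM meta-major.
    student = [("MATH101", 4), ("BIOL110", 4), ("CHEM105", 3), ("ENGL101", 3)]
    print(meta_major_selection(student))       # -> STEM

Applied to every student in an entering cohort, a classification of this kind yields the denominator by which enrollment, gateway course completion, retention, and transfer rates would be disaggregated.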

Two additional changes to IPEDS data collection would be necessary to obtain the data needed for the proposed indicators. First, all of the
measures of student progress and credentials outlined above would have to
be disaggregated by Pell grant status and disability status. As noted above,
institutions currently provide IPEDS data on completions disaggregated by
gender and race and ethnicity, so this change will require institutions to
make additional calculations using their administrative records. Second,
data collected from institutions through the IPEDS Human Resources Sur-
vey would have to be expanded to include the department in which faculty
and graduate students teach. The Human Resources Survey, administered
each spring, asks institutions to report data on: employee status (full or part
time, faculty status); full-time instructional staff (academic rank, gender,
and contract length or teaching period); and total salary outlay and num-
ber of months covered, by academic rank and gender. However, the survey
does not ask for disciplinary department, which is needed to identify STEM
faculty and staff.
The proposed additional measures would capture and follow a wider
range of students intending to major in STEM than are currently included
in IPEDS, including first-time full-time, transfer full-time, first-time part-
time, and transfer part-time students. These additional measures would
not only support the proposed indicator system, but would also improve
the capacity of IPEDS, allowing policy makers and higher education lead-
ers to track students’ progress and completion generally, across all fields
or majors.
Different combinations of the proposed additional student measures
have been used in voluntary data-sharing programs among institutions
participating in higher education reform consortia, such as Achieving the
Dream, Completion by Design, and Complete College America, and by
some state higher education data systems (see Chapter 6). Of particular
importance to the proposed indicator system is the “program of study selec-
tion” measure used by Achieving the Dream. As noted above, this measure
assigns students into one of seven meta-majors, one of which is STEM,
based on their course-taking patterns in their first year of higher education. This measure permits STEM students to be identified early in their college
careers, even before they officially declare their intended majors.
If it is not possible to add the program-of-study-selection measure to
IPEDS, NCES could instead require institutions to use the data on program
of study selection they already report to NSLDS to disaggregate enroll-
ment, gateway course completion, retention rate, and transfer rate data by
Classification of Instructional Program codes. Since 2014, postsecondary
institutions have been required to report to NSLDS student unit record data
on students’ program of study selection for those programs with enrolled
students who receive Title IV aid.4 These reports do not capture students’
intentions as early as the measure of program of study selection outlined
above, and they miss the 30 percent of students who do not receive or apply
for some type of federal aid (Miller, 2016). Nonetheless, this requirement
would allow institutions to report more detailed information to IPEDS on
the programs in which their students are enrolled.
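As an illustration of this NSLDS-based alternative, the sketch below flags reported programs as STEM from the two-digit series of their Classification of Instructional Programs (CIP) codes. The particular set of series shown is an example subset, not an authoritative STEM definition; an operational indicator would need an agreed-upon CIP-to-STEM crosswalk.

    # Illustrative sketch: flag reported programs as STEM from the two-digit series
    # of their Classification of Instructional Programs (CIP) codes. The set below
    # is an example subset, not an authoritative STEM definition; an operational
    # indicator would need an agreed-upon CIP-to-STEM crosswalk.

    STEM_CIP_SERIES = {"11", "14", "15", "26", "27", "40"}   # e.g., computing, engineering,
                                                             # biology, math, physical sciences

    def is_stem_program(cip_code: str) -> bool:
        """cip_code is a string such as '27.0101'; True if its series is in the STEM set."""
        return cip_code.split(".")[0].zfill(2) in STEM_CIP_SERIES

    reported_programs = ["27.0101", "42.0101", "14.0801"]    # hypothetical NSLDS records
    print([(code, is_stem_program(code)) for code in reported_programs])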

Expanding the Beginning Postsecondary Students Longitudinal Study


The second expansion necessary to support the proposed indicator
system under this option would be to extend and expand NCES's Beginning Postsecondary Students Longitudinal Study (BPS) to include
additional and larger cohorts. To date, BPS has followed cohorts of stu-
dents who entered postsecondary education in 1990, 1996, 2004, and
2012. Students in the BPS complete three surveys: one at the end of their
first academic year, and two others 3 years and 6 years after they entered
postsecondary education. The survey structure includes a section on charac-
teristics, which includes items on students’ educational experiences relevant
to the proposed indicators, and this section would easily allow addition of
a STEM-specific branch. Specifically, to support the proposed indicators in
this option, the NCES would need to: continue to add new BPS cohorts
every 6–8 years; expand the sample in each cohort to a size sufficient to
allow statistical analyses of gender, race and ethnicity, disability status, and
Pell grant (socioeconomic) status; and add STEM-specific questions to the
survey.

4 Title IV includes students with loans under the Federal Family Education Loan Program or William D. Ford Federal Direct Loan (Direct Loan) Program, as well as students who have Federal Perkins Loans, students who have received Federal Pell Grants, Teacher Education Assistance for College and Higher Education Grants, Academic Competitiveness Grants or Science and Math Access to Retain Talent Grants, and students on whose behalf parents borrowed Parent PLUS loans.


Renewing and Expanding the National Study of Postsecondary Faculty


The third expansion to current NCES data collection under this option
would be to renew and expand the National Study of Postsecondary Faculty
(NSOPF). The NSOPF was administered in 1992, 1998, and 2003 to 3,000
department chairs and 11,000 instructors at a wide range of postsecond-
ary institutions. This survey, which asked faculty about their employment
and activities, would have to be reinstated and modified in three ways to
support the proposed indicator system. First, it would be administered to a
sufficiently large sample of faculty to allow the meaningful disaggregation
of the sample into important dimensions of diversity, including race and
ethnicity, gender, part- or full-time status, and tenure- or non-tenure track
status. Second, it would be expanded to include graduate teaching assis-
tants, who provide a substantial amount of instruction for STEM students,
especially during the first 2 years of higher education. Third, it would be
modified to include questions on instructors’ use of evidence-based STEM
educational practices, including course redesign, evidence-based teaching
strategies, and involvement in professional development. Although the
survey was administered to faculty from all disciplines, it was designed to
measure teaching activities that are discipline specific. Hence, adding ques-
tions specifically about evidence-based STEM teaching strategies is possible
in the NSOPF framework.
Revising and expanding the NSOPF and IPEDS as proposed under this
option would likely increase the data collection burden on institutions. For
example, a group of 18 public and private 2-year and 4-year institutions
joined in the Voluntary Institutional Metrics Project to develop meaningful
measures of the quality and outcomes of higher education (HCM Strate-
gists, 2013). The participating institutions shared their student unit record
data, including all full-time, part-time, and transfer students, and analyzed
the data to report on measures similar to those that would be added to
IPEDS in this option.
The participating institutions reported that the burden of analyzing
student records to report on the measures was substantial. Following 4-year
students for 200 percent of expected time to completion (8 years) from
their initial enrollment sometimes required the analysis of millions of re-
cords over an extended period of time during which there were sometimes
changes in records systems and processes. In addition, institutions lacked
data on students who transferred to other institutions, especially if the
other institution was located in another state. However, if this option was
adopted and institutions had difficulty obtaining data on students who
transfer to another institution, they might be able to obtain the data from
NSC (see Chapter 6). Nevertheless, the additional burden on institutions
to report data on these measures is undeniable. This challenge points to a
benefit of the first option: if a national student unit record data system is
created, the burden of calculating the proposed indicators related to STEM
students’ progress and completion would fall on NCES rather than on
institutions.

OPTION 3: COMBINE EXISTING DATA FROM NONFEDERAL SOURCES

The third option, which could be undertaken by federal agencies or
other organizations (e.g., a higher education association), would take ad-
vantage of existing data sources and require little or no federal investment.
This option has the potential to provide limited support for the committee’s
proposed indicator system by combining data from states and voluntary
multi-institution education reform consortia to create a national picture
of STEM undergraduate education. As noted above, many 2-year and
4-year institutions voluntarily share unit record data on cohorts of students
through state data warehouses and/or with one or more education reform
consortia. They report on student progress using measures similar to those
identified at the beginning of this chapter (refer to Box 7-1). Institutions
participating in these consortia and state data sharing also report IPEDS
data to NCES and data on students receiving financial aid to NSLDS; these
data are relevant for the proposed indicators.
In this option, the federal government or a private foundation would
seek to tap these existing data resources by commissioning research on the
feasibility of creating a nationally representative sample of postsecondary
institutions that are already reporting the measures outlined in Option 2.
Specifically, the Department of Education, NSF, or a private foundation
would commission research to first identify institutions that are already
submitting student unit record data to a consortium or their state data sys-
tems. One starting point would be the map listing voluntary data collection
efforts in each state and territory.5
5 See the Institute for Higher Education Policy website: http://www.ihep.org/postsecdata/mapping-data-landscape [August 2017].

The research would then examine whether a representative national sample of student unit record data could be derived from these institutions. The committee's preliminary review of this information suggests that such data sharing is most frequent among public 4-year institutions, moderate among 2-year institutions, less frequent among private nonprofit institutions, and rare among private for-profit institutions. As noted in Chapter 6, state unit record data systems have limited or no coverage of private nonprofit institutions. However, statistical weighting methods could be used to correct for this gap and represent all institution types. With stratification and weighting, a small sample could provide nationally representative student unit record data. Because the sample would be comprised of in-
stitutions already engaged in voluntary data collection and sharing, these
institutions might welcome the opportunity to share and analyze their data
to support the proposed indicators. To encourage participation and reduce
the burden of additional data collection, federal agencies or foundations
might offer to reimburse institutions for the costs of additional staff time
required to share and analyze existing student unit record data and gather
additional data.
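A minimal sketch of the stratification-and-weighting idea follows, assuming hypothetical counts of institutions by sector in the population and in the voluntary sample; neither the sectors nor the numbers come from the committee's analysis.

    # Minimal sketch of the stratification-and-weighting idea described above.
    # Institutions in the voluntary sample are grouped into strata (here, by sector),
    # and each sampled institution is weighted by the number of institutions of that
    # type in the population divided by the number sampled. All counts are hypothetical.

    population_counts = {                      # hypothetical universe of institutions
        "public 4-year": 700, "public 2-year": 900,
        "private nonprofit 4-year": 1600, "private for-profit": 400,
    }
    sample_counts = {                          # hypothetical volunteers sharing unit records
        "public 4-year": 350, "public 2-year": 300,
        "private nonprofit 4-year": 80, "private for-profit": 10,
    }

    weights = {sector: population_counts[sector] / sample_counts[sector]
               for sector in population_counts}

    # Student-level measures from each sampled institution would be multiplied by the
    # sector weight when producing national estimates, so thinly covered sectors
    # (for example, private institutions) are not underrepresented in the totals.
    for sector, weight in weights.items():
        print(f"{sector}: weight {weight:.1f}")

In practice the strata would likely be finer than sector alone (for example, size and selectivity), but the basic weighting logic is the same.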
Assuming that a nationally representative sample could be assembled,
one potential limitation is that Completion by Design (CBD) is the only
reform consortium using the critical measure of program of study selec-
tion (Engle, 2016), which allows STEM students to be identified before
they officially declare their majors. Because CBD includes only about 200
2-year institutions in Ohio, North Carolina, and Florida, few institutions
in the proposed national sample are currently calculating this measure.
To address this limitation and identify students in STEM programs, the
Department of Education, NSF, or another entity (e.g., a higher education
association) might require or request institutions in the national sample to
either analyze their administrative records to calculate the program of study
selection measure to identify STEM students or rely on the Classification of
Instructional Program codes and level of credential data they already report
to NSLDS. As noted above, federal or foundation funding for institutions
might encourage compliance with these requirements or requests.
In this option, the survey data needed would be collected through a
combination of existing proprietary surveys of students and faculty. As
discussed in Chapter 6, several long-standing national surveys owned and
administered by universities or nonprofit organizations could be used to
collect data for the indicators. These include the National Survey of Student
Engagement (NSSE), the Community College Survey of Student Engage-
ment (CCSSE), and the Freshman Survey, College Senior Survey, and Fac-
ulty Survey of the Higher Education Research Institute.
In this option, the Department of Education, NSF, or other entity
would contract with these survey providers to extend their surveys and
administer them to nationally representative samples of institutions and
students. First, the federal government or other entity would commission
the survey organizations to develop short survey modules for both students
and faculty, designed to elicit detailed information on evidence-based STEM
educational practices and other elements of the committee’s proposed indi-
cators. Fortunately, most of these surveys are designed to include shorter,
customized modules of questions, increasing the feasibility of this approach.
Second, the survey organizations would be commissioned to administer
the extended surveys to a nationally representative sample of 2-year and
4-year institutions and of STEM students. Third, given the decline in survey
response rates, the survey organizations would be provided with support
for incentives or other mechanisms to boost response rates. Table 7-3 sum-
marizes how the indicators could be supported in this option.

TABLE 7-3  Data for Indicators in Option 3

Objective 1.1 Use of evidence-based STEM educational practices both in and outside of classrooms
  Indicator 1.1.1 Use of evidence-based educational practices in course development and delivery. Data source: Revised and expanded proprietary surveys to include a nationally representative sample of all types of 2-year and 4-year institutions
  Indicator 1.1.2 Use of evidence-based STEM educational practices outside the classroom. Data source: Same as above

Objective 1.2 Existence and use of supports that help STEM instructors use evidence-based learning experiences
  Indicator 1.2.1 Extent of instructors' involvement in professional development. Data source: Revised and expanded proprietary surveys to include a nationally representative sample of all types of 2-year and 4-year institutions
  Indicator 1.2.2 Availability of support or incentives for evidence-based course development or course redesign. Data source: Same as above

Objective 1.3 An institutional climate that values undergraduate STEM instruction
  Indicator 1.3.1 Use of valid measures of teaching effectiveness. Data source: Revised and expanded proprietary surveys to include a nationally representative sample of all types of 2-year and 4-year institutions
  Indicator 1.3.2 Consideration of evidence-based teaching in personnel decisions by departments and institutions. Data source: Same as above

Objective 1.4 Continuous improvement in STEM teaching and learning
  No indicators: see "Challenges of Measuring Continuous Improvement" in Chapter 3.

Objective 2.3 Representational diversity among STEM instructors
  Indicator 2.3.1 Diversity of STEM instructors in comparison with diversity of STEM graduate degree holders. Data source: Nationally representative sample of institutions drawn from appropriate voluntary reform initiatives
  Indicator 2.3.2 Diversity of STEM graduate student instructors in comparison with diversity of STEM graduate students. Data source: Same as above

Objective 2.4 Inclusive environments in institutions and STEM departments
  Indicator 2.4.1 Students pursuing STEM credentials feel included and supported in their academic programs and departments. Data source: Revised and expanded proprietary surveys to include a nationally representative sample of all types of 2-year and 4-year institutions
  Indicator 2.4.2 Faculty teaching courses in STEM disciplines feel supported and included in their departments. Data source: Same as above
  Indicator 2.4.3 Institutional practices that are culturally responsive, inclusive, and consistent across the institution. Data source: Same as above

Objective 3.1 Foundational preparation for STEM for all students
  Indicator 3.1.1 Completion of foundational courses, including developmental education courses, to ensure STEM program readiness. Data source: Nationally representative sample of institutions drawn from appropriate voluntary reform initiatives

Objective 3.2 Successful navigation into and through STEM programs of study
  Indicator 3.2.1 Retention in STEM programs, course to course and year to year. Data source: Nationally representative sample of institutions drawn from appropriate voluntary reform initiatives
  Indicator 3.2.2 Transfers from 2-year to 4-year STEM programs in comparison with transfers to all 4-year programs. Data source: Same as above

Objective 3.3 STEM credential attainment
  Indicator 3.3.1 Number of students who attain STEM credentials over time (disaggregated by institution type, transfer status, and demographic characteristics). Data source: Nationally representative sample of institutions drawn from appropriate voluntary reform initiatives


CONCLUSIONS
CONCLUSION 6  Three options would provide the data needed for
the proposed indicator system:

1. Create a national student unit record data system, supplemented with expanded surveys of students and instructors (Option 1).
2. Expand current federal institutional surveys, supplemented with
expanded surveys of students and instructors (Option 2).
3. Develop a nationally representative sample of student unit record
data, supplemented with student and instructor data from propri-
etary survey organizations (Option 3).

Option 1 would provide the most accurate, complete, and useful data
to implement the proposed indicators of students’ progress through STEM
education. As noted above, legislation has been introduced in Congress
to repeal the current ban on a student unit record data system and direct
NCES to create it. Although creating a national student unit record data
system would require investment of federal resources, the system would
provide valuable information to policy makers about the status and quality
of undergraduate education generally, not only in STEM fields. Institutions
would be required to share their student unit record data with the federal
government, but they would not be required to gather any additional data
or make any additional calculations beyond what they already provide
to IPEDS. This option for implementing the indicator system would also
require regular surveys of students and faculty for data not covered by a
student unit record data system.
Option 2 would take advantage of the well-developed system of insti-
tutional surveys that NCES uses to obtain IPEDS data annually from the
vast majority of 2-year and 4-year institutions. Under this option, NCES
would add to these surveys some of the new measures of student progress
developed by higher education reform consortia, which include part-time
and transfer students. Some of the measures are closely related to the
proposed indicators. Like the first option, this option would also require
investment of federal resources, but it would draw on the strengths of the
well-established system of institutional reporting for IPEDS. In comparison
with Option 1, this option would increase institutions’ burden for IPEDS
reporting, requiring them to calculate additional measures based on their
internal student unit record data. The additional measures would provide
much of the student data needed for the indicator system, but the system
would also require data from regular surveys of students and faculty.
Option 3 could be carried out by the federal government or another
entity (e.g., a higher education association). It would take advantage of
the rapid growth of higher education data collection and analysis by state
higher education systems and education reform consortia across the country
and require little or no federal investment. As noted above, some of these
new measures of student progress are similar to the committee’s indicators.
As in Options 1 and 2, additional data from surveys would be needed to
support the indicators.

Research, Evaluation, and Updating of the Proposed Indicator System


Many of the indicators proposed by the committee represent new con-
ceptions of key elements of undergraduate STEM education to be moni-
tored over time. Some indicators require research as the first step toward
developing clear definitions and identifying the best measurement methods,
prior to beginning data collection and implementing the indicator system
(refer to Table 6-2). Once the system has been implemented and the indica-
tors are in use, the committee suggests that the federal government conduct
or commission an evaluation study to ensure that the indicators measure
what they are intended to measure.
In addition, ongoing research may identify new factors related to the
quality of undergraduate STEM education beyond those included in the
proposed objectives and indicators. For example, there is promising evi-
dence that three psychological competencies are related to students’ persis-
tence and academic success (National Academies of Sciences, Engineering,
and Medicine, 2017): (1) a sense of belonging on campus; (2) utility value
(recognizing the value and relevance of academic subjects to one’s own life);
and (3) a growth mindset (the belief that one’s intelligence is not fixed but
can grow). Given the current lack of common definitions and high-quality
assessments of these and other competencies (e.g., interest), the committee
did not propose any objectives or indicators related to them. In the future,
however, as further research evidence emerges, it may be appropriate to add
objectives and indicators of these psychological competencies.
Furthermore, given that the structure of undergraduate education con-
tinues to evolve in response to changing student demographics, funding
sources, educational technologies, and the growth of new providers, it may
be a challenge to ensure that the proposed STEM indicators remain relevant
and informative over time.
A number of trends in this evolution are clear. First, entering students
will continue to come from increasingly diverse socioeconomic and ethnic
backgrounds, with different work experiences, and at different stages of
life. The committee’s proposed options concerning a national unit record
student data system, revising IPEDS surveys, or using data from states or
voluntary data initiatives are designed to measure the progress of these
more diverse student groups.


Second, an increasing number of students will earn STEM credentials by nontraditional educational pathways. Already, various credentials are provided by massive open online courses, companies (STEM certificates and badges), and competency-based educational programs. To date, most higher education institutions do not award credit toward STEM degrees for these courses and credentials, but this may change in the future. As new approaches emerge, it will be important to update the indicator system to capture students' changing trajectories toward these evolving STEM credentials.
Finally, with the spread of learning management systems across post­
secondary education, institutions will have access to a different kind of data
on student learning and faculty activities. These data include student scores
on online assessments, student work products, faculty assignments, syllabi,
and a host of behavioral information about how students and faculty are
working. Combining such information with more traditional information,
such as student grades and course-taking patterns, using increasingly sophis-
ticated data analytic techniques will allow new approaches to monitoring
student progress and achievement. The Signals Project at Purdue University
(Sclater and Peasgood, 2016), the use of analytics to study virtual learning
environments at the University of Maryland, Baltimore County (University
of Maryland, Baltimore County, 2017), and the development and distribu-
tion of predictive models for students’ success at Marist College (Marist
College, 2017) are part of an emerging international movement to use data
analytics to develop more accurate predictive models of student grades, re-
tention, persistence, and graduation. This work will expand and grow more
sophisticated over the next decade, and as it does, features may emerge
that will allow for more easily obtained and more accurate measures for
monitoring some of the proposed indicators.
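For illustration only, the sketch below trains a toy retention model on synthetic data with a standard statistical learning library. The institutional projects cited above draw on far richer learning management system and administrative data; nothing here reproduces their actual models.

    # For illustration only: a toy retention model trained on synthetic data with
    # scikit-learn. Real learning-analytics systems draw on much richer LMS,
    # assessment, and administrative data; this only shows the mechanics.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 500
    # Hypothetical predictors: first-term GPA, share of online assignments submitted,
    # and average weekly logins to the learning management system.
    X = np.column_stack([
        rng.uniform(0.0, 4.0, n),
        rng.uniform(0.0, 1.0, n),
        rng.poisson(5, n),
    ])
    # Synthetic "retained next year" outcome, loosely related to the predictors.
    log_odds = -3.0 + 0.8 * X[:, 0] + 1.5 * X[:, 1] + 0.1 * X[:, 2]
    y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-log_odds))).astype(int)

    model = LogisticRegression().fit(X, y)
    # Predicted retention probability for one hypothetical student.
    print(model.predict_proba([[2.5, 0.6, 4]])[0, 1])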
These and other developments in undergraduate education imply that
in the coming years, it will be important to review, and revise as necessary,
the committee’s proposed STEM indicators and the data and methods for
measuring them.

A Note of Caution
The proposed indicator system would create a picture of the cur-
rent status of undergraduate STEM education and allow policy makers to
monitor change over time, including movement toward the three goals that
underlie the indicator system. Although individual institutions or consortia
of institutions may wish to adopt some or all of these indicators to monitor
their own STEM educational programs, the indicator system is not intended
to support ranking systems or inter-institutional comparisons. Many of the
indicators are influenced by the socioeconomic status, parental education,
and high school preparation of potential STEM students, long before these students begin postsecondary education. Thus, an individual institution's
performance on these indicators could be more strongly influenced by the
background characteristics of its entering students than by factors that are
under the institution’s control. Moreover, the indicator system is designed
to capture the increasing number of students who pursue STEM credentials
while attending multiple postsecondary institutions. It would, therefore, be
impossible to fairly apportion the credit that a particular institution should
receive for such students on many of these indicators. For these reasons,
it would be inappropriate to use these indicators to rank or compare the
performance of different postsecondary institutions.

REFERENCES
Cunningham, A.F., and Milam, J. (2005). Feasibility of a Student Unit Record System Within the Integrated Postsecondary Education Data System. (NCES 2005–160). U.S. Department of Education, National Center for Education Statistics. Washington, DC: U.S. Government Printing Office.
Engle, J. (2016). Answering the Call: Institutions and States Lead the Way Toward Better Measures of Postsecondary Performance. Seattle, WA: Bill & Melinda Gates Foundation. Available: http://postsecondary.gatesfoundation.org/wp-content/uploads/2016/02/AnsweringtheCall.pdf [June 2017].
Executive Office of the President of the United States. (2017). Using Federal Data to Measure and Improve the Performance of Institutions of Higher Education. Washington, DC: Author. Available: https://collegescorecard.ed.gov/assets/UsingFederalDataToMeasureAndImprovePerformance.pdf [September 2017].
HCM Strategists. (2013). The Voluntary Institutional Metrics Project: A Better Higher Education Data and Information Framework for Informing Policy. Washington, DC: Author. Available: https://www.luminafoundation.org/resources/a-better-higher-education-data-and-information-framework-for-informing-policy [July 2017].
Janice, A., and Voight, M. (2016). Toward Convergence: A Technical Guide for the Postsecondary Metrics Framework. Washington, DC: The Institute for Higher Education Policy. Available: http://www.ihep.org/research/publications/toward-convergence-technical-guide-postsecondary-metrics-framework [July 2017].
Marist College. (2017). Learning Analytics Project Wins Innovation Award. Available: http://www.marist.edu/publicaffairs/eduventuresaward2015.html [July 2017].
Miller, B. (2016). Building a Student-level Data System. Washington, DC: Institute for Higher Education Policy. Available: http://www.ihep.org/sites/default/files/uploads/postsecdata/docs/resources/building_a_student-level_data_system.pdf [June 2017].
National Academies of Sciences, Engineering, and Medicine. (2017). Supporting Students' College Success: The Role of Assessment of Intrapersonal and Interpersonal Competencies. Washington, DC: The National Academies Press. Available: https://www.nap.edu/catalog/24697/supporting-students-college-success-the-role-of-assessment-of-intrapersonal [October 2017].
Sclater, N., and Peasgood, A. (2016). Learning Analytics in Higher Education: A Review of UK and International Practice. Available: https://www.jisc.ac.uk/reports/learning-analytics-in-higher-education [July 2017].
University of Maryland, Baltimore County. (2017). Division of Information Technology Analytics. Available: http://doit.umbc.edu/analytics [July 2017].


Appendix A

Public Comments on Draft Report and Committee Response

In order to obtain broad input into its work, the committee publicly
released a draft report for comment in August 2016 after completing
Phase I of the study. This draft report was intended to elicit feedback
from the interested public in order to ensure that the committee was com-
prehensively covering the relevant terrain and also proposing reasonable
goals and objectives that could be monitored over time without imposing
undue data collection burdens. The interim report was available on the
committee’s website, with a 7-week period for comment.
Public comments were sought to obtain perspectives and insights from
researchers and practitioners knowledgeable about undergraduate STEM
reform and education statistics.
The public comment draft included a conceptual framework for the
indicator system, identified goals and objectives for improving undergradu-
ate STEM education at both 2-year and 4-year institutions, and reviewed
existing systems for monitoring undergraduate STEM education: Table A-1
shows the draft goals and objectives on which the committee sought com-
ment. Based on the committee’s consideration of what information from the
public would be most useful for the second phase of the study, the report
included a series of questions for readers to respond to, as follows:

1. The committee proposes five goals to improve the quality of undergraduate STEM education (see Chapter 2). Is this the right set of
goals? Should any be deleted or other goals added? Why do you
suggest this change?


2. The committee identifies 14 objectives around which national indicators for undergraduate STEM education will be developed in
Phase II of the study (see Chapter 2). Is this the right set of objec-
tives? Should any be deleted or other objectives added? Why do
you suggest this change?
3. The committee discusses various data sources on undergraduate
STEM (see Chapter 3). Are these the right data sources? Should
any be deleted or other sources added? Why do you suggest this
change?
4. Are there larger issues related to measuring and improving qual-
ity in undergraduate STEM that are missing from the committee’s
proposed conceptual framework, goals, and objectives?
5. How and where, if at all, do you see the national indicators to be
developed in Phase II being used to improve undergraduate STEM?

Individuals and representatives of organizations were encouraged to submit their responses to these questions through an online questionnaire
that was posted with the public comment draft. The committee received
32 comments through the website questionnaire and 2 comments through
letters.
To supplement the input from the online questionnaire, the commit-
tee convened a day-long public meeting in October 2016, which included
responses from invited individuals and institutions, as well as open micro-
phone sessions for all meeting participants. The meeting drew just over 100
people, 62 in person and 40 by the webcast.
Following the public meeting, the committee reviewed all of the feed-
back and identified possible revisions. The committee used the Phase II of
the study to revise its work in response to the concerns and suggestions it
had received, resulting in this final report. The rest of this appendix summa-
rizes the feedback and describes the steps taken to revise the initial concep-
tual framework and the preliminary review of data sources and monitoring
systems. Those revisions are reflected in this final report.

OVERARCHING ISSUES
Several themes emerged across all comments received, through the
website, letters, and at the October meeting:

• Role of Discipline-Based Education Research (DBER). Several commenters asked that we consider the role of DBER in improving
the quality of undergraduate STEM. They asked the committee to
consider adding DBER to one of our objectives, and to consider it as a potential indicator of the use of evidence-based educational
practices.
• Meaning of Goal 1. Several commenters raised concerns about the
wording of Goal 1, which was “Increase numbers of STEM ma-
jors,” and offered suggestions for other ways of approaching the
goal.
• Evidence-Based Practices. Commenters asked for a more thorough
explanation of the meaning of “evidence-based practices” and sug-
gested broader, more expansive use of the term.
• Expanded Definitions of Equity. Commenters praised the com-
mittee’s attention to diverse learners, but several asked that the
committee broaden this group to include students with learning
disabilities, first-generation college students, and other populations,
along with discussion of the ethical dimensions of diversity and
inclusion.
• Unit of Analysis. Commenters raised questions and offered sug-
gestions about the most appropriate unit of analysis for measuring
improvement in STEM. For example, some called for indicators
at the department or institutional level, while others expressed
concern that individual institutions would be held accountable for
national-level indicators of equity, diversity, and inclusion.
• Defining STEM and Related Terms. Commenters asked for ex-
panded definitions of STEM, STEM literacy, and STEM learning
to emphasize the role of the social sciences, the social and civic ap-
plication of STEM knowledge, the development of ethics, positive
attitudes, and “21st century” skills and to more closely integrate
STEM with the humanities.
• Future Indicators. A few commenters noted that the framework,
goals, and objectives are relevant to current undergraduate STEM
but may need to be updated in the future, as student populations
and higher education institutions change.
• Use of Existing Rubrics. Many commenters were concerned that
the draft did not give more prominence to the PULSE vision and
change reform initiative, which has developed rubrics to measure
progress toward some of the proposed objectives.
• Data Sources. A few commenters noted that the Science and Engi-
neering Indicators report is not an original data source. In addition,
several pointed to the PULSE rubrics as a potential data source.

COMMITTEE RESPONSE
In response to these comments, the committee made several revisions
to the interim goals and objectives shown in Table A-1:


DBER  Although the committee decided not to identify DBER as a specific objective or indicator, discussion of DBER was added to the chapter
on use of evidence-based educational practices (see Chapter 3).
Goal 1  The committee revised the wording of Goal 1 from “Increase
numbers of STEM majors” to “Ensure adequate numbers of STEM pro-
fessionals,” and added language clarifying that demand varies across the
different STEM disciplines (see Chapters 1 and 5).
Evidence-Based Practices The committee defined “evidence-based
practices” (see Chapter 1) and described them with review of the literature
and detailed examples (see Chapter 3).
Equity  The committee expanded its focus on equity to include students
with disabilities and first-generation college students (see Chapter 4).
Unit of Analysis  The committee clarified its focus on national indica-
tors, and the nation as a whole as its primary unit of analysis, as called for
in the study charge (see Chapter 1).
Definition of STEM  Considering comments about the lack of clarity
around STEM literacy, the committee discussed the recent report on sci-
ence literacy (National Academies of Sciences, Engineering, and Medicine,
2016b). Given that report’s findings about the difficulty of defining science
literacy, as well as the challenge of specifying a level of STEM literacy that
all students should master, the committee decided to drop STEM literacy as
a formal goal. However, the committee explains that its vision for under-
graduate education includes all students developing a basic understanding
of STEM concepts and skills (see Chapter 1).
While recognizing the value of the social sciences and the humanities
and the benefits of an integrated liberal arts education, the committee con-
cluded it needed to maintain its focus on undergraduate STEM education,
as required by the study charge. Additionally, in response to calls to address
the future of STEM education, this report discusses the growth of online
education and assessment, noting that as these technologies advance, new
indicators of STEM learning may be needed (see Chapters 1 and 7). Finally,
to clarify and focus the overarching goals, the committee decided to drop
the goal of continuous improvement, but added continuous improvement
as an objective within the goal of increasing mastery of STEM concepts
and skills by increasing students’ engagement in evidence-based educational
practices (see Chapter 3).
Existing Rubrics In response to many comments about PULSE, the
committee added more discussion of PULSE and other existing reform
initiatives (see Chapters 1 and 3).
Data Sources  In response to comments about data sources, the com-
mittee distinguished between original data sources and compilations of
statistics and data (see Chapter 6), clarified its focus on national-level in-
dicators (see Chapter 1), and considered a few specific PULSE rubrics that have the potential to be used to gather information on a national basis for
the proposed indicators (see Chapter 3).

Copyright National Academy of Sciences. All rights reserved.


TABLE A-1  Goals and Objectives in the Draft for Public Comment
(For each objective, the framework component appears in parentheses; the
bulleted items are strategies to advance the objective and possible measures.)

Goal 1. Increase Numbers

  Objective 1.1 (Input): Multiple pathways into and through STEM programs
    • Variety of entry/exit points
    • Guided pathways (map of courses)
    • Inter-institution articulations
    • Preparation support
    • Developmental education approach
    • Bridge programs

  Objective 1.2 (Process): High retention of students in STEM disciplines
  beyond core foundational courses
    • Co-curricular supports for completion of core foundational courses
    • Core course/unit completion
    • Advising
    • Mentoring
    • Living/learning communities
    • Career development/advising
    • Evidence-based instructional practices

  Objective 1.3 (Process): Appropriate general education experiences for STEM
  students' foundational preparation
    • Core proficiency in math, language and communication, and digital
      fluency/computational thinking

  Objective 1.4 (Outcome): STEM credential attainment
    • Variety of credentials
    • Outcome data
    • Change in attainment numbers over time

Goal 2. Ensure Diversity

  Objective 2.1 (Process): Equity of access to high-quality undergraduate STEM
  education
    • Recruitment
    • Admissions processes and support
    • Bridge programs
    • Preparatory (developmental education) courses

  Objective 2.2 (Outcome): Representational equity in STEM credential attainment
    • Variety of credentials
    • Outcome data
    • Change in attainment numbers over time

Goal 3. Evidence-Based (EB) Education

  Objective 3.1 (Process): Use of evidence-based STEM educational practices
  both in and out of classrooms
    • Active learning instructional strategies
    • Formative assessment
    • Advising and mentoring
    • Co-curricular opportunities/experiences
    • Internships
    • Engage in relevant interdisciplinary big questions
    • Authentic practice
    • Backward design of courses and programs
    • Aligned assessments
    • Data-driven course and program improvements

  Objective 3.2 (Process): Equitable access to evidence-based STEM educational
  practices both in and out of classrooms
    • Mentoring and advising
    • Diversity of instructional staff
    • Numbers of students experiencing evidence-based practices

  Objective 3.3 (Environment): Support for instructors to use evidence-based
  teaching methods
    • Infrastructure
    • Professional development
    • Recognition
    • Adequate time

  Objective 3.4 (Environment): Institutional culture that values undergraduate
  STEM education
    • Happens at all institutional levels
    • Valid, robust evidence-based teaching evaluation system
    • Teaching and learning in mission and official documents
    • Reward system aligned with instruction

  Objective 3.5 (Outcome): STEM learning for students in STEM educational
  experiences
    • Happens at course and program levels
    • Ensure adequate depth in STEM disciplinary skills and knowledge
      (competencies)
    • All students will gain in the ability to be lifelong, independent, and
      resourceful learners
    • Adaptable to the demands of new projects, jobs, or careers

Goal 4. STEM Literacy

  Objective 4.1 (Outcome): Access to foundational STEM experiences for all
  students, to develop STEM literacy
    • Number of STEM courses completed by students
    • Degree of achievement of STEM literacy
    • Change in attainment numbers over time

  Objective 4.2 (Outcome): Representational equity in core STEM literacy
  outcomes
    • Number of STEM courses completed by students
    • Degree of achievement of STEM literacy
    • Change in attainment numbers over time

Goal 5. Ensure Continuous Improvement

  Objective 5.1 (Environment): Ongoing data-driven improvement
    • Data-driven institutional learning
    • Data systems that allow better tracking of students across multiple
      higher education institutions

Appendix B

Possible Formulas for Calculating Selected Indicators

This appendix presents measurement approaches and formulas that could
potentially be used to calculate some of the committee's proposed indicators.
Given the complexity of the phenomena the indicators are designed to measure
and the limited data available, the committee does not propose an approach or
formula for every indicator. Table B-1 lists only selected indicators.


TABLE B-1  Possible Formulas for Selected Indicators

1.1.1 Use of evidence-based practices in course development and delivery
  Possible formula: Percentage of faculty who report using instructional
  practices supported by research as likely to foster student learning

1.1.2 Use of evidence-based practices outside the classroom
  Possible formula: Percentage of students in mentored research experiences
  and percentage of students involved in service learning

1.2.1 Extent of instructors' involvement in professional development
  Possible formula: Percentage of instructors who annually report more than
  10 hours of formal teaching-related professional development of any kind;
  dollars spent annually by an institution on faculty development and
  instructional support, per instructor

1.3.1 Use of valid measures of teaching effectiveness
  Possible formula: Percentage of departments that use validated measures
  other than typical student evaluations to measure instructional quality
  (such as validated observation protocols, teaching portfolios, validated
  self-report tools)

1.3.2 Consideration of evidence-based teaching in personnel decisions by
departments and institutions
  Possible formula: Percentage of departments at an institution that
  explicitly consider use of evidence-based teaching in decisions for merit,
  retention, promotion

2.1.1 Institutional structures, policies, and practices that strengthen STEM
readiness for entering and enrolled college students
  Possible formula: Curricular practices that strengthen levels of STEM
  readiness for entering and enrolled students (e.g., accelerated
  developmental mathematics course sequences); assessment and placement
  practices that strengthen levels of STEM readiness for entering and
  enrolled students (e.g., multiple measures for mathematics placement);
  academic program structures that promote coherence in STEM course taking
  and timely degree completion (e.g., guided pathways); institutional
  structures that enhance access to STEM courses (e.g., dual enrollment)

2.1.2 Entrance to and persistence in STEM educational programs
  Possible formula: Percentage of entering college students that state an
  intention to major in STEM, disaggregated by race and ethnicity, gender,
  socioeconomic status, first-generation status, and ability status;
  persistence rates for STEM aspirants, disaggregated by race and ethnicity,
  gender, socioeconomic status, first-generation status, and ability status

2.1.3 Equitable student participation in evidence-based STEM educational
practices
  Possible formula: Proportion of students who participated in evidence-based
  educational experiences inside or outside the classroom, disaggregated by
  race and ethnicity, gender, socioeconomic status, first-generation status,
  and ability status

2.2.1 Diversity of STEM degree and certificate earners in comparison with the
diversity of degree and certificate earners in all fields
  Possible formula: Ratio of the share of STEM undergraduate degrees earned
  to the share of all undergraduate degrees earned, disaggregated by race and
  ethnicity, gender, socioeconomic status, first-generation status, and
  ability status

2.2.2 Diversity of students who transfer from 2-year- to 4-year STEM programs
in comparison with diversity of students in 2-year STEM programs
  Possible formula: Ratio of the share of 2-year college transfer students
  entering 4-year STEM degree programs to the share of all 2-year college
  students in STEM programs, disaggregated by race and ethnicity, gender,
  socioeconomic status, first-generation status, and ability status

2.2.3 Time to degree for students in STEM academic programs
  Possible formula: 3-year graduation rates for students in 2-year STEM
  programs, disaggregated by race and ethnicity, gender, socioeconomic
  status, first-generation status, and ability status; 4-year and 6-year
  graduation rates for students in 4-year STEM programs, disaggregated by
  race and ethnicity, gender, socioeconomic status, first-generation status,
  and ability status; average time to degree of students earning bachelor
  degrees; average time to degree of students earning associate degrees;
  average academic terms (semesters or quarters) to degree of students
  earning bachelor degrees; average academic terms (semesters or quarters)
  to degree of students earning associate degrees

2.3.1 Diversity of STEM instructors in comparison with diversity of STEM
graduate degree holders
  Possible formula: Ratio of the share of STEM instructors to the share of
  all STEM graduate degree holders, disaggregated by race and ethnicity,
  gender, socioeconomic status, first-generation status, and ability status
  and by STEM discipline and institutional type

2.3.2 Diversity of STEM graduate student instructors in comparison with
diversity of STEM graduate students
  Possible formula: Ratio of the share of STEM teaching assistants to the
  share of all STEM graduate students, disaggregated by race and ethnicity,
  gender, socioeconomic status, first-generation status, and ability status
  and by STEM discipline and institutional type

2.4.1 Students pursuing STEM credentials feel included and supported in their
academic programs and departments
  Possible formula: Average sense of belonging that STEM students feel toward
  their college or university, disaggregated by race and ethnicity, gender,
  socioeconomic status, first-generation status, and ability status and by
  STEM discipline and institutional type; overall sense among STEM students
  that faculty are approachable, disaggregated by race and ethnicity, gender,
  socioeconomic status, first-generation status, and ability status and by
  STEM discipline and institutional type; STEM students' overall sense of
  support from peers, disaggregated by race and ethnicity, gender,
  socioeconomic status, first-generation status, and ability status and by
  STEM discipline and institutional type; annual proportion of students
  unable to enroll in the courses they need to matriculate in their degree
  programs, disaggregated by race and ethnicity, gender, socioeconomic
  status, first-generation status, and ability status and by STEM discipline
  and institutional type

2.4.2 Instructors who teach courses in STEM disciplines feel supported and
included in their departments
  Possible formula: Proportion of STEM faculty expressing satisfaction with
  the collegiality among faculty in their departments, disaggregated by race
  and ethnicity, gender, rank, and employment status and by STEM discipline
  and institutional type; proportion of STEM faculty experiencing stress due
  to discrimination, disaggregated by race and ethnicity, gender, rank, and
  employment status and by STEM discipline and institutional type

2.4.3 Institutional practices are culturally responsive, inclusive, and
consistent across the institution
  Possible formula: Departmental use of culturally responsive instructional
  practices; institutional use of faculty search and hiring practices known
  to effectively diversify STEM faculty
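
The quantities in Table B-1 are mostly simple percentages and ratios. As an
illustration only, the short sketch below shows how two of them might be
computed; the counts and variable names are hypothetical assumptions for the
example, not data drawn from any existing survey or data source.

```python
# Illustrative sketch with hypothetical counts (not data from any source).

# Indicator 1.2.1, first measure: percentage of instructors who annually report
# more than 10 hours of formal teaching-related professional development.
instructors_over_10_hours = 240     # hypothetical count of such instructors
instructors_surveyed = 800          # hypothetical total respondents
pct_professional_development = 100 * instructors_over_10_hours / instructors_surveyed

# Indicator 2.2.1: ratio of a group's share of STEM undergraduate degrees to its
# share of all undergraduate degrees (a ratio of 1.0 would indicate parity).
group_stem_degrees, all_stem_degrees = 150, 1_000       # hypothetical counts
group_total_degrees, all_total_degrees = 1_200, 6_000   # hypothetical counts
representation_ratio = (group_stem_degrees / all_stem_degrees) / (
    group_total_degrees / all_total_degrees
)

print(f"Professional development participation: {pct_professional_development:.1f}%")
print(f"Representation ratio: {representation_ratio:.2f}")
```

In practice, each measure would be computed separately for every disaggregation
category listed in the table (for example, by race and ethnicity or by gender)
rather than once for an entire institution.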


Appendix C

Agendas: Workshop and Public Comment Meeting

Committee on Developing Indicators for Undergraduate STEM Education
Meeting: February 22, 2016
NAS Building, 2101 Constitution Avenue, NW 20418
Room 120

AGENDA
Open Sessions: Workshop on Developing Indicators for Undergraduate STEM

8:30 am Informal Introductions (breakfast available)

9:00 Welcome and Introduction to the Workshop


Heidi Schweingruber, Director, Board on Science
Education
Mark Rosenberg, President, Florida International
University

9:30–10:45 Session 1: Approaches to Measuring Quality in Higher Education
Moderator: Mark Rosenberg, Committee Chair

   9:30–9:50 Accreditation Process for Measuring Quality
Alexei Matveev, Southern Association of Colleges and Schools

   9:50–10:10 USC Center for Urban Education’s Equity Scorecard


Alicia Dowd, Pennsylvania State University

  10:10–10:30 Questions from the Committee and the Audience

10:30–10:45 Break

10:45 am–12:15 pm Session 2: Federal Data and Indicator Systems
Moderator: Kaye Husbands Fealing, Georgia Institute of Technology

  10:45–11:15 Overview of the NSF Science and Engineering Indicators
Beethika Khan, National Center for Science and Engineering Statistics

  11:15–11:45 The STEM Education Resource and Revisiting the STEM Workforce
Matthew Wilson, National Science Board

  11:45–12:15 Questions from the Committee and the Audience

12:15–1:15 Lunch

 1:15–2:15 Session 3: Other Indicator Systems and Research Projects
Moderator: Heather Belmont, Miami Dade College

   1:15–1:35 The California State University STEM Dashboard
Jeff Gold, California State University Office of the Chancellor

   1:35–1:55 Improving Introductory Calculus, Gateway to STEM


Chris Rasmussen, San Diego State University

   1:55–2:20 Questions, Discussion



2:20–2:45 Reflections on the Workshop: Committee Perspectives (5–7 minutes)
Mark Connolly, University of Wisconsin
Deborah Santiago, Excelencia in Education
Yu Xie, Princeton University

2:45 Adjourn Workshop

Committee on Developing Indicators for Undergraduate STEM Education


Meeting: October 6, 2016
NAS Building, 2101 Constitution Avenue, NW 20418
Auditorium

AGENDA
Open Sessions: Meeting for Public Comment

  9:00 am Welcoming Remarks and Meeting Goals
Heidi Schweingruber, Director, Board on Science
Education
Mark B. Rosenberg, Florida International University,
Chair, Committee on Developing Indicators for
Undergraduate STEM Education

 9:10 Reflections on the Indicators Study and the Related Report, Monitoring
Progress Toward Successful K–12 STEM Education
Adam Gamoran, William T. Grant Foundation, and Chair, Board on Science Education
Susan Singer, Rollins College
• Adam Gamoran presentation (15 minutes)
• Susan Singer presentation (15 minutes)
• Discussion, Q&A with the committee only (15
minutes)

 9:55 Community College Perspectives on the Draft


Annette Parker, South Central College, Minnesota
• Presentation (20 minutes)
• Discussion, Q&A with the committee only (15
minutes)

10:30 Break


10:45 Reflections on the Draft from STEM Reform Initiatives
Linda Slakey, University of Massachusetts, Amherst, and Convener, Coalition
for Reform of Undergraduate STEM Education (by WebEx)
Kacy Redd, Association of Public and Land-Grant Universities
• Linda Slakey presentation (15 minutes)
• Kacy Redd presentation (15 minutes)
• Discussion, Q&A with the committee only (15
minutes)

11:30 Open Session for Public Comments and Questions



12:15 pm Lunch

 1:00 Reflections on the Report from an Institutional Perspective
Susan Ambrose, Northeastern University, Boston, Massachusetts
• Presentation (20 minutes)
• Discussion, Q&A with the committee only (10
minutes)

 1:30 Implications of the Draft for Improving Teaching and Learning
Jillian Kinzie, Indiana University
• Presentation (15 minutes)
• Discussion, Q&A with the committee only (15
minutes)

  2:00 Open Session for Public Comments and Questions

 2:45 Break

 3:00 Implications of the Draft for Measuring and Improving Equity in
Undergraduate STEM
Mica Estrada, University of California, San Francisco (by WebEx)
Deborah Santiago, Excelencia in Education
• Presentations (15 minutes each)
• Discussion, Q&A with the committee only (15
minutes)


 3:45 Reflections on the Day


Lee Zia, NSF Division of Undergraduate Education
• Presentation (10 minutes)
• Discussion, Q&A with the audience only (10
minutes)

  4:05 Final Remarks
Mark B. Rosenberg, Florida International University,
Chair, Committee on Developing Indicators for
Undergraduate STEM Education

  4:15 Adjourn Public Meeting


Appendix D

Biographical Sketches of
Committee Members and Staff

MARK B. ROSENBERG (Chair) is the president of Florida International
University (FIU), where he has overseen an expansion of FIU’s investments
in STEM education, including partnerships with local schools, 2-year col-
leges, and community organizations. Previously, he served as chancellor for
the board of governors of the State University System of Florida. As chan-
cellor, he led the system’s strategic development and financial planning and
policy initiatives, working closely with the governor and legislature. Prior to
this position, he served as provost and executive vice president for academic
affairs at FIU. As a political scientist, he specializes in Latin America. He
is a member of the Council on Foreign Relations, chair of the Coalition
of Urban-Serving Universities, and has served as a consultant to the U.S.
Department of State and the U.S. Agency for International Development.
He has a B.A. in political science from Miami University and an M.A. and
a Ph.D. in political science from the University of Pittsburgh.

HEATHER BELMONT is dean of the School of Science at Miami Dade
College (MDC), where she has also served as a faculty member, chair of the
Biology, Health/Wellness, and Funeral Services Departments, and director
of the Biotechnology Program. At the School of Science, she has established
an intrusive, in-house science advisement system, an extensive peer-led,
team-learning network, and an undergraduate research program at several
of MDC’s campuses. Previously, she worked for Sunol Molecular Corpora-
tion and Altor Bioscience Corporation, where she conducted research on
therapeutic antiviral and anticancer biologics. She serves on multiple boards
and is a leadership fellow of Partnership for Undergraduate Life Science
Education. She has a B.A. from Ithaca College and a Ph.D. in neuroscience
from the University of Miami.

CHARLES BLAICH is the director of the Center of Inquiry and the Higher
Education Data Sharing Consortium at Wabash College. He collaborated
with researchers at other universities to design and implement the Wabash
National Study of Liberal Arts Education, which has involved 49 colleges
and universities as participants in the longitudinal research project on the
practices and conditions that support student learning. In his academic
research on the auditory communication in zebra finches, undergraduate
students were his collaborators as well as coauthors on all of his papers and
conference presentations. He serves on an advisory panel for a project in
the California State University system to enhance access to STEM education
for underrepresented minorities. He has received teaching awards from the
University of Connecticut, Eastern Illinois University, and Wabash College.
He has a B.S. in psychology, an M.A. in experimental psychology, and a
Ph.D. in developmental psychology, all from the University of Connecticut.

MARK CONNOLLY is an associate research scientist with the Wisconsin
Center for Education Research at the University of Wisconsin–Madison. He
has served as principal investigator for two 5-year studies of postsecondary
STEM education, and he is a member of the research and evaluation team
for the Center for the Integration of Research, Teaching, and Learning. He
also serves as an evaluator-researcher on STEM faculty development studies
funded by the U.S. Department of Education, the National Science Foun-
dation, the National Institutes of Health, and the Howard Hughes Medical
Institute. His areas of study include postsecondary teaching and learning,
graduate education, academic careers, and STEM education reform. He
has a Ph.D. in higher education from Indiana University at Bloomington.

KENNE DIBNER (Deputy Study Director) is a program officer with the
Board on Science Education of the National Academies, where she cur-
rently directs a study of citizen science and previously directed a study of
science literacy. Prior to joining the National Academies, Kenne worked as
a research associate at Policy Studies Associates, Inc., where she conducted
evaluations of education policies and programs for government agencies,
foundations, and school districts. She also previously worked as a research
consultant with the Center on Education Policy. She has a B.A. in English
literature from Skidmore College and a Ph.D. in education policy from
Michigan State University.

STEPHEN DIRECTOR (NAE) is provost and University Distinguished Professor
emeritus at Northeastern University. He previously served as senior vice
president for academic affairs and senior advisor to the president. He also
held a number of academic positions at Drexel University, the University
of Michigan, the University of Florida, and Carnegie Mellon University. He
is a fellow of the Institute of Electrical and Electronics Engineers (IEEE)
and of the American Society of Engineering Education (ASEE). He has
received numerous awards for his research and educational contributions,
including the ASEE Benjamin Garver Lamme Award, the IEEE Millennium
Medal, the IEEE Education Medal, and the Aristotle Award from the
Semiconductor Research Corporation. He is a member of the National Academy
of Engineering. He has a B.S. from the State University of New York at
Stony Brook and an M.S. and a Ph.D. in electrical engineering from the
University of California, Berkeley.

KEVIN EAGAN is assistant professor in residence in the Department of
Education and managing director of the Higher Education Research In-
stitute (HERI) at the University of California, Los Angeles (UCLA). As
managing director, he coordinates HERI’s funded research projects and
oversees the development, administration, and analysis of the five national
surveys. He also serves as director of the Cooperative Institutional Re-
search Program, the longest-running and largest empirical study of higher
education in the country. His research interests include issues related to
undergraduate STEM education; contingent faculty; student retention; in-
stitutional contexts and structures of opportunity; survey validity and reli-
ability; and advanced quantitative methods. He has a B.S. in mathematics
from Greensboro College, an M.S. in higher education administration from
North Carolina State University, and a Ph.D. in higher education and orga-
nizational change from UCLA.

SUSAN ELROD is provost and executive vice chancellor for Academic
Affairs at the University of Wisconsin–Whitewater (UW–W). At UW–W, she
has led the development of the new university strategic plan, which includes
a new strategic enrollment management plan and academic plan. Prior to
her appointment at UW–W, she served as interim provost and vice presi-
dent for Academic Affairs at California State University, Chico. She went
to Chico after serving as dean of the College of Science and Mathematics
at California State University, Fresno. She previously served as executive
director of Project Kaleidoscope (PKAL) at the Association of American
Colleges and Universities (AAC&U) in Washington, DC. During her tenure
at PKAL, she led several multicampus, national STEM education reform
initiatives that focused on interdisciplinary learning, sustainability, and
STEM student transfer success. She continues to serve as a senior scholar at
AAC&U and as an advisor or an investigator on several state and national
STEM education projects. She has a B.S. in biological science from
California State University at Chico and a Ph.D. in genetics from the Uni-
versity of California, Davis.

STUART FELDMAN leads Schmidt Sciences at Schmidt Philanthropies, a
foundation focusing on science, energy, and sustainability of the biosphere.
Previously, he was vice president of engineering at Google and held several
positions at IBM, including vice president for computer science in research,
vice president for internet technology, and director of IBM’s Institute for
Advanced Commerce. He is a former president of the Association for
Computing Machinery, as well as a member of the board of directors of
the Association to Advance Collegiate Schools of Business. He is a fellow
of several professional associations including the Institute of Electrical and
Electronics Engineers, the Association for Computing Machinery, and the
American Association for the Advancement of Science. He has a Ph.D. in
applied mathematics from the Massachusetts Institute of Technology, and
he received an honorary doctor of mathematics from the University of
Waterloo.

CHARLES HENDERSON is a professor of physics and of the Mallinson
Institute for Science Education at Western Michigan University, where he is
also codirector of the Center for Research on Instructional Change in Post-
secondary Education. Previously, he was a high school physics and chemis-
try teacher at the International School of Minnesota and taught physics at
Macalester College and at Anoka-Ramsey Community College. His current
research studies the teaching and learning of physics with a focus on the
development of theories and strategies for promoting change in the teaching
of STEM subjects. He has served as president of the Michigan section of the
American Association of Physics Teachers, chair of the American Associa-
tion of Physics Teachers Committee on Research in Physics Education, and
a member of the Physics Content Advisory Committee for the Michigan
Test for Teacher Certification. He has a B.A. in mathematics and physics
from Macalester College and an M.S. in physics and a Ph.D. in physics
education from the University of Minnesota.

MARGARET HILTON (Study Director) is a senior program officer of
the Board on Science Education of the National Academies, where she has
directed studies on the assessment of intrapersonal and interpersonal com-
petencies, the effectiveness of team science, an assessment of 21st century
skills, and high school science laboratories. She also participated in studies
on discipline-based education research and learning science through com-
puter games and simulations. Prior to joining the National Academies staff,
she was a policy analyst at the Congressional Office of Technology Assess-
ment, where she directed studies of workforce training, work reorganization,
and international competitiveness. She has a B.A. in geography from
the University of Michigan, an M.A. in regional planning from the Univer-
sity of North Carolina at Chapel Hill, and an M.A. in education and human
development from George Washington University.

KAYE HUSBANDS FEALING is chair of the School of Public Policy at the
Georgia Institute of Technology. She specializes in science and innovation
policy, the underrepresentation of women and minorities in STEM disci-
plines and occupations, and international trade policy impacts on industry
structure and firm behavior. Previously, she was a professor at the Center
for Science, Technology and Environmental Policy in the Humphrey School
of Public Affairs at the University of Minnesota and William Brough pro-
fessor of economics at Williams College. She also previously held positions
at the National Science Foundation, as the inaugural program director for
the Science of Science and Innovation Policy Program and an economics
program director. She is an elected fellow of the American Association for
the Advancement of Science, where she also serves on the executive board.
She is on the advisory council of the National Institute of General Medi-
cal Sciences, and the Steering Committee on Subnational Science Policy of
the Council of Canadian Academies. She has a B.A. in mathematics and
economics from the University of Pennsylvania and a Ph.D. in economics
from Harvard University.

LINDSEY MALCOM-PIQUEUX is the associate director for research and
policy at the Center for Urban Education at the University of Southern
California and a research associate professor in the Rossier School of
Education at the University of Southern California. Her work focuses on
the ways in which higher education policy, institutions, and practitioners
contribute to and/or reduce educational inequities experienced by minority
student populations. Her primary interest centers on equitable access and
outcomes for women and men of color in STEM fields. She is particularly
interested in understanding the ways in which community colleges and
minority-serving colleges and universities can promote equity in STEM for
historically underrepresented populations. She has a B.S. in planetary sci-
ence from the Massachusetts Institute of Technology, an M.S. in planetary
science from the California Institute of Technology, and a Ph.D. in urban
education with an emphasis on higher education from the University of
Southern California.

MARCO MOLINARO is the assistant vice provost for educational effectiveness
at the University of California, Davis, where he created and
oversees the Center for Educational Effectiveness (CEE). The CEE team
is composed of highly specialized professionals focused on empowering
instructors and staff, improving the educational system, and fostering edu-
cational innovation and discovery all in service of removing disparities in
undergraduate student outcomes while maximizing learning. His work
has included STEM educational and training programs for middle school
through college, as well as undergraduate teaching. He has also been active
in creating and leading applications of technology for instruction, scientific
visualization and simulation, tools for evidence-based instructional actions,
curriculum development and evaluation, and science exhibits for students
from elementary school through graduate school and for the general public.
He is the founder of the Tools for Evidence-based Actions community, a
group of researchers and administrators from more than 100 universities
dedicated to sharing tools and methodologies that encourage evidence-
based instructional actions. He has a B.S. in biophysics and chemistry
from Wayne State University and a Ph.D. in biophysical chemistry from the
University of California, Berkeley.

ROSA RIVERA-HAINAJ is the assistant vice president of academic affairs
at Our Lady of the Lake University in San Antonio, Texas. She oversees
undergraduate education and the Rio Grande Valley Campus. Preceding her
appointment in 2017 to Our Lady of the Lake, she was the dean of science
and mathematics at Lorain County Community College. In that capacity,
she served as a member of the core team of the Student Success Agenda. Pre-
viously, she held teaching positions in chemistry at Purdue University North
Central and James Madison University, and she is an active researcher in
the fields of chemistry and biochemistry. She has a B.S. in chemistry from
the University of Puerto Rico–Mayaguez and a Ph.D. in biochemistry from
Case Western Reserve University in Cleveland, Ohio.

GABRIELA WEAVER serves as vice provost for faculty development and
director of the Institute for Teaching Excellence and Faculty Development
at the University of Massachusetts, Amherst. Previously, she held positions
in the Department of Chemistry at the University of Colorado at Denver
and on the faculty at Purdue University in chemistry and science education
and as the director of the Discovery Learning Research Center. Her research
interests include the development, implementation, and evaluation of in-
structional practices that engage students and improve their understanding
of science and the institutionalization of such practices through the trans-
formation of cultures and processes in higher education. She is a fellow of
the American Association for the Advancement of Science. She has a B.S.
degree in chemistry from the California Institute of Technology and a Ph.D.
in chemical physics from the University of Colorado Boulder.

YU XIE is the Bert G. Kerstetter ’66 university professor of sociology at the
Princeton Institute for International and Regional Studies at Princeton Uni-
versity. He is also a visiting chair professor of the Center for Social Research
at Peking University. Formerly, he held positions in sociology, statistics, and
public policy at the University of Michigan and as a research professor in
the Population Studies Center at the Institute for Social Research at the
University of Michigan. His main areas of research are social stratification,
demography, and statistical methods. He is a member of the National
Academy of Sciences, the American Academy of Arts and Sciences, and
Academia Sinica. He is a recipient of the Distinguished Lecturer Award at
the Center for the Study of Women, Science, and Technology at the Georgia
Institute of Technology. He has a B.S. in metallurgical engineering from
the Shanghai University of Technology, an M.S. in sociology, an M.A. in
the history of science, and a Ph.D. in sociology, all from the University of
Wisconsin–Madison.
