
Design Considerations for Conducting Large-Scale Learning Research Using Innovative Technologies in Schools

Melina R. Uncapher1

ABSTRACT— Since the advent of computers, scientists who study how people learn
have been utilizing technology to uncover the cognitive and neural mechanisms of
learning. Recent technological advances have allowed learning scientists to move their
research out of the lab and into the wild, to investigate how students learn in real-
world environments. However, the move from the lab to the classroom involves a
significant shift in strategy, requiring consideration of factors varying from the design
of mobile (vs. lab-based) technology to the recruitment of participants, as well as the
contextual variables to account for in the less-controlled environment of schools. Here
I discuss the learnings our group has gleaned from a longitudinal, multi-year research program involving over a thousand elementary and middle school students, which uses technologies for assessing and improving learning in schools.

The rise of every major technological innovation has been marked by equal fervor from
those who evangelize the new technology and those who think the technology will be
society’s downfall. A particularly compelling example of “technopanic” came with the
advent of the printing press—promising the new technology of books for the masses—
when Swiss scientist Conrad Gessner famously warned that books would cause moral laziness in our girls. We see an equivalent message put forth by contemporary voices
such as Nick Carr and Andrew Keen, who posit that our interactions with technology
are changing our children’s brains in disturbing and lasting ways, and we must take
protective measures against technology at all costs.

As educators, researchers, and developers of or contributors to technology that supports learning, we must ask ourselves: Where do we entertain technopanic and
where do we engage in technophilia (the equal and opposite feeling that technology
will solve all our problems), and how can we move toward a more considered
approach to learning technologies? As we strategize about how to innovate in education, we must consider our biases both toward and against bringing in technology
innovations at scale in education.

1 Neuroscape Center, Department of Neurology, University of California at San Francisco, Weill Institute for Neurosciences & Kavli Institute for Fundamental Neuroscience, Sandler Neurosciences Center
One way to protect against such biases is to employ evidence-based and evidence-
informed approaches when making decisions about how to design, implement, and
engage with education technology. To this end, there is growing momentum to (1)
design education technology at the outset around principles from cognitive science
and educational neuroscience, and (2) explicitly test whether the new technologies
demonstrate evidence that they have measurably improved student learning. One
example of such momentum is the recent academic symposium on how efficacy
research is being used in education technology (EdTechnology Efficacy Research
Academic Symposium, Washington, DC, May 2017), wherein researchers reported
results from 12-month studies into the use of efficacy research in educational
technology. Studies varied from how K-12 district leaders make decisions about which
education technology tools to bring into their schools (Dexter, Francisco, & Luke,
2017), to how education technology developers use research to design new
technologies (Fuhrman & Meyerson, 2017). Findings converged on the idea that, while
everyone agrees that evidence should be used to design and evaluate technologies
that aim to improve student learning, very few practices are in place to do so. Even
fewer practices are in place to conduct research, design, and evaluation at scale. A
parallel theme emerged from the research side around significant barriers to engaging
in school-based research using mobile technologies.

In this article, I discuss our rationale behind using technology to study how students
learn in real-world environments, as well as raise some major challenges that can arise
during large-scale investigations that deploy mobile technology in schools to study
how students learn. I share strategies and practices that can be used to address these
key conceptual and methodological issues, as well as other considerations to account
for when conducting school-based learning research using new technologies. These
learnings primarily arise from a large-scale, longitudinal National Science Foundation
Science of Learning Collaborative Networks program that I direct with my colleagues
Adam Gazzaley, Joaquin Anguera, and Fumiko Hoeft (University of California San
Francisco), Bruce McCandliss (Stanford University), Silvia Bunge (University of
California Berkeley), Miriam Rosenberg-Lee (Rutgers University), and Jyoti Mishra
(University of California San Diego). Our research population includes students from 10
elementary, middle, and K–8 schools2 in the San Francisco Bay Area, from public,
private, and parochial schools, with a wide variance in socioeconomic status and racial
and ethnic background. The early mobile technologies used in our network were
developed in-house at University of California at San Francisco by Gazzaley and
Anguera for use in the field (initially for clinical and home environments), and the
author led the network in adapting these technologies for use with students in school
environments.

2 K–8 schools, elementary-middle schools, or K–8 centers are schools in the United States that enroll students from kindergarten/pre-K (age 5–6) through 8th grade (up to age 14), combining the typical elementary school (K–5/6) and junior high or middle school (6/7–8).

This article is intended to provide introductory guidance for researchers preparing to use technology innovations to translate their lab-based learning research to school environments, or to scale up their in-school research to larger populations (see also
Plummer et al., 2014 for a guide to school-based research not specific to learning
technologies). It is important to note that the research field investigating the efficacy
of learning technologies in school environments is relatively nascent, and thus these
guidelines arise from investigations of our limited samples, albeit including thousands
of students and hundreds of educators. As such, these guidelines may not be
appropriate for all populations or learning environments and should be considered in
the context of one’s local research and school environments. We invite readers to
contribute future articles to expand and develop these guidelines for additional
populations and contexts.

OVERVIEW OF CONSIDERATIONS FOR CONDUCTING LARGE-SCALE LEARNING RESEARCH USING INNOVATIVE TECHNOLOGIES IN SCHOOLS

We identified 10 major categories to critically consider when designing large-scale research studies using in-school technologies: (1) rationale for conducting real-world studies; (2) ethical considerations; (3) design of the technology itself; (4) assessments versus interventions; (5) student recruitment; (6) parent, teacher, and school leader support; (7) scheduling considerations; (8) contextual influences; (9) researcher influences; and (10) implementation fidelity.

(The following sections were shortened)

Rationale for Conducting Real-World Studies

The study of mind, brain, and education is grounded in investigations of cognitive and
neural factors that give rise to or hinder learning. A major challenge to this goal is that
we are studying a “situated brain” (Choudhury & Gold, 2011), or a brain in context,
which suggests that the way in which students learn varies according to contextual
variables such as the student’s relationships with their teacher and peers, the cultural
and social dynamics of the learning environment, and physical constraints or supports
to learning (see also Cantor, Osher, Berg, Steyer, & Rose, 2018). One way to begin to
address this challenge is to move our scientific inquiry out of the lab (at least partially)
and into the “wild” of the classroom and home learning environments. This move
toward investigations with greater ecological validity requires technological
innovations that allow valid and reliable measurement of learning and the factors that
contribute to learning, in the highly uncontrolled environments of schools and homes.

For example, while there is much interest in understanding the neurocognitive relationships between executive functions (EFs) and learning, they remain largely underspecified, mainly due to measurement challenges and poor construct operationalization (e.g., Blair, 2016). Furthermore, the ways in which EFs may be deployed
during learning can vary widely by context. For these reasons, we conduct large-scale
investigations of the relationships between EFs and learning in school environments,
and how those relationships develop over time. Our large-scale investigations utilize
novel mobile assessment technology that enables multiple measurements using the
same tasks across cohorts and across time (within the same subjects). This
methodology may allow us to begin to address some of the measurement and
construct challenges that have plagued the field.

Ethical Considerations

An important first factor to consider when weighing the benefits and costs of using
learning technologies to move lab-based studies into real-world environments is
whether one’s assessments and/or interventions may cause harm or raise any ethical
concerns when conducted in schools. Academic as well as social/contextual factors
should be considered. First, conducting assessments or interventions during class time
will take “seat-time” away from instruction. U.S. public schools are allocated resources based in part on students’ standardized test scores, so if research studies take time away from preparing students for such tests, the research could in fact interfere with resource allocation in schools that may already be budget-constrained. While real-world settings can provide access to a much larger sample of
participants than typical lab studies, thus yielding much greater power, it is important
to minimize the opportunity cost of research over seat-time. Conducting power
analyses to determine the smallest possible sample size that is required to answer the
question of interest can help mitigate this concern.
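As an illustrative sketch (not drawn from the article), a minimal power analysis for a two-group comparison can be computed with a normal approximation; the effect size, alpha, and power values below are conventional placeholders, not figures from our studies:

```python
import math
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Smallest n per group for a two-sample comparison (normal approximation).

    effect_size is Cohen's d. The z-based formula slightly underestimates the
    exact t-test requirement, so treat the result as a lower bound.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-tailed critical value
    z_power = NormalDist().inv_cdf(power)          # quantile for desired power
    return math.ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

# A small-to-medium effect (d = 0.3) at conventional alpha and power:
print(n_per_group(0.3))  # ~175 students per group
```

Running such an analysis before recruitment puts a concrete ceiling on how much seat-time the study needs to request from participating classrooms.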

Beyond academic factors, it is important to consider social factors that may cause
ethical challenges when conducting in-school research. As in lab studies, in-school
studies require both parental/guardian consent as well as child assent, and there will
almost certainly be a proportion of parents or students who choose not to participate
in research. It is important to ensure that the students who choose not to participate
or whose parents/guardians choose to not let their child participate do not feel
excluded or like they are missing out on critical instructional time. Likewise, it is
important to ensure that participating schools do not foster a culture that coerces the
students or parents to participate.

Design Considerations for Mobile Learning Technologies

If a primary aim is to investigate student learning over time, the main challenge is to
identify tasks that can be implemented with fidelity across repeated time points.
Regardless of the way in which difficulty is adjusted, when student performance is held
relatively constant around a specified level of accuracy, researchers can employ the
same task across time and can quantify learning by measuring changes in speed or
speed-accuracy metrics.
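One common way to hold accuracy near a fixed level is an adaptive staircase; the 3-down/1-up rule below converges on roughly 79% accuracy. This is an illustrative sketch, not the adaptive mechanism our tasks actually use, and all names and step sizes are assumptions:

```python
class Staircase:
    """Minimal 3-down/1-up adaptive staircase (illustrative sketch only).

    Difficulty increases after three consecutive correct responses and
    decreases after each error, holding accuracy near ~79% so that changes
    in response speed, rather than accuracy, index learning over time.
    """

    def __init__(self, level: int = 1, step: int = 1):
        self.level = level          # current difficulty level
        self.step = step            # how much to change difficulty by
        self.correct_run = 0        # consecutive correct responses

    def update(self, correct: bool) -> int:
        if correct:
            self.correct_run += 1
            if self.correct_run == 3:                # 3-down: make task harder
                self.level += self.step
                self.correct_run = 0
        else:                                        # 1-up: make task easier
            self.level = max(1, self.level - self.step)
            self.correct_run = 0
        return self.level
```

In a session loop, each trial's response feeds `update`, and the returned level parameterizes the next trial; learning can then be quantified from response-time changes at the converged difficulty.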

Beyond being able to quantify learning in individual students across time, an additional
benefit of utilizing designs where task difficulty adapts to performance is that the same
task can be deployed across students who differ in age or ability. This allows for more
direct comparisons of cognitive and neurobiological factors that may change across
age and ability. Use of the same tasks with individually varying levels of difficulty in the
same students over time, or in students of different ages, could provide insight into whether findings that contradict the “differentiation hypothesis” may be due merely to methodological factors rather than cognitive or neurobiological differences.

Assessments and Interventions

In order to understand a student’s current proficiencies and how to support growth in those proficiencies, we need to first assess their current proficiency profile, and then
develop strategies and tools to improve learning over time. In an ideal environment,
these two components—assessment and intervention—would exist in a feedback
loop3, so that we can continuously assess current proficiency state, and feed that
assessment into flexible intervention technologies (or pedagogy or other improvement
strategies) that will adapt according to the continuous state of the learner. This
“closed-loop” process has been deployed effectively in other fields such as
neurofeedback (LaConte, 2011; Mishra & Gazzaley, 2015).
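The closed-loop idea can be sketched in a few lines. This toy simulation is purely illustrative (not the authors' system): the assessment is a stand-in for a real measurement, and the learning-gain rule is an assumption chosen only to show how each assessment can continuously drive the next intervention:

```python
def run_closed_loop(initial_skill: float, sessions: int = 10) -> list:
    """Toy closed-loop protocol (illustrative; not the authors' system).

    Each cycle: (1) assess current proficiency, (2) set the intervention's
    difficulty just above it, (3) apply a modeled learning gain that is
    largest when difficulty is well matched to proficiency.
    """
    skill = initial_skill
    history = []
    for _ in range(sessions):
        assessed = skill                     # stand-in for a real assessment
        difficulty = assessed + 0.1          # target slightly above ability
        gain = 0.05 * max(0.0, 1.0 - abs(difficulty - skill))
        skill += gain                        # intervention updates the learner
        history.append(round(skill, 3))      # log for longitudinal analysis
    return history
```

The design point is that assessment and intervention share one loop body, so the intervention always operates on the learner's current state rather than a stale baseline.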

In practice, closed-loop protocols in education technology are in their infancy. This is partially due to the technology being in early stages of development, but may also be
due to the way that assessment and intervention technologies are viewed by students,
parents, teachers, and school leaders. Technology is often met with either resistance
or enthusiasm, depending on the stakeholder group and the technology. On the one hand, assessments can be seen as quite useful for teachers and school leaders in guiding strategies to improve learning. On the other hand, students and parents can see assessments as a way to pigeonhole students, generally with negative consequences.
To mitigate this resistance on the parent and student side, researchers can assure
parents and students that results of the research study are not typically shared with
schools, and will not be shared in a way that identifies student data without explicit
parental consent. Researchers should also work to raise awareness that assessment
technologies are meant to be a way for educators and researchers to understand how
the minds and brains of students learn under many different conditions, and are not
used to determine whether a student is “smart” or not.

Recruiting Student Participants with Parent, Teacher, and Educator Support

Before researchers can ask whether a student and their parents or guardians
would like to participate in a research study, there are several additional stages of
support the researcher must garner: from district and school leadership, as well as
from teachers and staff. Effective recruitment often requires all groups to be engaged.
Larger-scale studies typically require district leadership (superintendents, assistant superintendents, and/or school boards) to set the tone for why the district supports
using research-based learning technology, why teachers and staff will be asked to
support the research studies, and why students and parents will be asked if the
students would like to participate in the research studies.

3 A feedback loop: a system for improving a product, process, etc., by collecting and reacting to users’ comments.

Beyond gaining support from district leadership, we find it equally important to engage in deep dialogue with each school about the potential benefits and burdens of the
research study. Each school is a mini-ecosystem and has unique “problems of practice”
that should be considered. Each school will also have different experiences with or
perspectives on the use of learning technologies, so it is important to uncover
institutional biases and histories for each participating school. The school principal
(and often vice principals) is typically the strongest advocate for or against the
introduction of new programs or technologies, so it is critical to garner support from
the school leadership. Even with the support of leadership, however, you will often be
working directly with teachers and staff, so it is equally important to engage the
teachers and staff in the conversation around the potential benefits and burdens
involved in studying learning using technology.

Finally, it is critical to appreciate the perspectives and concerns of the parents in the
participating schools. It is particularly important to emphasize the research nature of
the program and not to overpromise the potential outcomes of the study. Promising access to learning technologies that may improve academic achievement can be unduly incentivizing to families with less access to technology. It is therefore very important that the researchers and the school representatives of the research program make very clear that participation is always voluntary and can be discontinued at any time the student or parent chooses. Lastly, it is also essential to consider the parents and students
that choose not to participate in the study: they should not be made to feel they are
missing out on a learning opportunity, and should be provided equivalent instructional
time.

Scheduling Considerations

A major design consideration is whether to conduct the research during class time, in
after-school programs, or at home. Each of these scenarios has advantages and disad-
vantages, as well as ethical considerations.

Conducting research in class versus after-school programs can allow more students in
the school to participate, as not all students are enrolled in after-school programs.
However, after-school programs typically have more leeway in curriculum, as they are
not always academically oriented. By contrast, in-class research designs require
permission from each teacher to take “seat-time” away from instructional activities,
which teachers may be (appropriately) resistant to.

In-class and after-school research allows the research team to facilitate sessions directly and to control the research environment more tightly than an at-home design. At home, it is difficult to provide interactive instructions or field questions as they arise during gameplay; to determine whether the student is the only one playing the game (rather than a friend, sibling, or parent); to remotely troubleshoot the game or device; or to monitor and retrieve data when the device is not connected to the Internet (to which families may not have equal access). There is also more device loss and damage when students are allowed
device possession outside of school. However, at-home designs are often necessary for
research that requires multiple sessions, such as interventions.

Contextual Influences

Where, when, and with whom the students are interacting while engaging with the
learning technologies can affect how they perform on the tasks.

Additional factors to account for (if not control for) are whether research sessions are
conducted in the first or last period of the day, or just after lunch or physical activity.
Teachers observe very different levels of energy and engagement during these times.

Researcher Influences

Because our research team interacts directly with thousands of students, some as young as 7 years old, we recognize that we may be the first scientists the students have directly interacted with in person. Because of this, we feel a social responsibility to frame our interactions in a way that helps students form a positive impression of science and scientists.

We always begin in-class sessions with a framing conversation about (1) how we are
scientists who study the brain and how people learn, (2) how we use digital
technology and “games” to conduct our science, and (3) how we have built games that
help us understand how their brains are so amazing. This three-stage framing allows us
to convey the impressions that science can be fun, and that scientists can look like
students. It is also a helpful way to mitigate the concern that the students are being
scrutinized, by collaborating with students in the scientific endeavor.

Implementation Fidelity

One of the primary challenges of moving research studies out of the lab and into the
school or home is maintaining equivalent experimental control over the fidelity of data
collection. This challenge is particularly salient when moving from 1:1 interactions
between researchers and participants in the lab (which allows for tight monitoring of
performance and compliance), to in-school designs, which typically involve an entire class or several classes engaging with the learning technology tasks at once. This
requires a team of researchers that can simultaneously monitor many students’
performance and compliance. We typically have a team of five to eight research
assistants and research staff for every class (of ∼20–30 students each).

Conclusion

In this commentary, I have introduced factors and concerns that our group has
considered and learned while conducting recent in-school research that includes
developing and testing novel learning technologies in over a thousand research
participants in elementary and middle schools. In this work, we deploy learning
technologies that are designed to assess and train cognitive and academic outcomes,
and are deeply informed by cognitive science and neuroscience. We appreciate that every school and district has its own challenges and histories, yet we have identified commonalities that can support successful in-school research across many factors, including demographics (e.g., racial/ethnic and socioeconomic), category of
school (public, private, and parochial; elementary and middle), size of school (large,
medium, and small), overall performance of school (as indexed by state standardized
tests), and access to technology. We conclude that while in-school research is far more difficult and messy than in-lab research, it provides powerful
insight into how students learn in authentic real-world environments. Innovative
learning technologies—when deployed in real-world environments such as schools and
homes—can provide unprecedented insight into how student cognition develops over
time, and how we can utilize these insights to design more effective learning
technologies and environments. If the ultimate goal is to advance evidence-informed
education (and practice-informed research), the future of education may be advanced
by how we bridge research insights with practical considerations.
