
Virginia Dicken-Gracen

AR3: Data Management

Making informed decisions, by definition, requires having information.  What information is considered most helpful for a given situation, and how one should use that information to improve decision-making, is often a matter of popular trends, politics, and money, not simply “what works.”  In the realm of education, for example, the No Child Left Behind Act of 2001 encouraged increased data collection, which in turn led to more testing and a greatly expanded educational testing industry.

In 2006, the RAND Corporation (Marsh, Pane, & Hamilton) released an occasional paper
summarizing research on data-driven decision making in education.  The authors identified
four fundamental questions about such use of data: What types of data are being used for
decision making?  How are they being used?  What kinds of support are available to help
with such use?  And what factors influence such use?  They identified some key concerns
educators have about the use of data, including the time lag between data collection and the
availability of results, heavy reliance on outcome data with less availability of process data,
questions about the validity of data, and wide variability in educators’ use of available data
to inform decision making.  The authors conclude that “[Data-driven decision making] does not guarantee effective decision making” (p. 10) and share ideas for addressing these and other concerns.  For example, they recommend providing more support and time for educators to reflect on and use data to inform their work.

The writers at DreamBox (2013) also have ideas about how to make the best use of data in educational decision making.  They point to research showing improvements in educational outcomes when data is used at all levels, from the individual student to the classroom to the district and county.  In addressing the need for real-time, usable data about student progress, they point away from annual summative standardized testing and toward adaptive learning systems.  Because these systems collect data constantly, educators do not have to wait several months for a report that often arrives too late to act on within the school year.

In many ways, the technology-enhanced data collection being analyzed by the authors of
these two pieces is a variation of what good teachers have done throughout history:
observing student progress and classroom process to inform the next lesson plan.  As a data
nerd with a background in psychology, I am very aware of how our personal biases can
influence informal “data collection.”  Beliefs about race, gender, and class can affect what an
educator notices in a particular student’s performance.  Poor past performance may color an
educator’s expectations for future performance.  We often aren’t aware of the many factors
that bias our informal assessments.  That said, technology-mediated formal data collection is
also subject to bias.  Algorithms developed by humans can reflect those humans’ beliefs. 
Algorithms derived from past data can reflect the various cultural biases that created that
past data.  (Even hiring and web-search algorithms have been shown to reflect cultural
biases.)  In our quest to make more informed decisions, we will need to develop a robust
understanding of what data can and cannot do.  Numbers do not speak for themselves; they
are always interpreted in some way before becoming decisions.
 
References
DreamBox Learning. (2013, August 5). Data-driven decision making can improve student learning. https://www.dreambox.com/blog/adaptive-learning-enables-data-driven-decision-making
Marsh, J., Pane, J., & Hamilton, L. (2006, November 7). Making sense of data-driven decision making in education. RAND Corporation. https://www.rand.org/pubs/occasional_papers/OP170.html
