

Analyzing Data from an Introduction to Information Literacy Course:


Pre-/Post-Test Results


Deanna Brown
Andrew Carlos
Callie Chastain
Margaret (Meg) Medonis
Introduction
Assessment of student learning is an important task that must be carried out
throughout instruction - in this way, instructors and institutions can find out whether students are
learning what they need to know and can adjust their teaching to benefit them. This assessment
can be accomplished through a pre-/post-test. The following pages present a summary
and analysis of a subset of data from a pre-/post-test given to freshman students at Cal State
East Bay.
At Cal State East Bay, all freshmen are required to take Introduction to Information
Literacy (LIBY 1210) to make sure that they have baseline information literacy skills (such as
searching databases, evaluating information sources, and citing articles). In each of the 30 or
more sections taught throughout the year, students are given a pre-/post-test to measure their
information literacy skills.
Description of the Measure
The assessments are titled "Library Pre-/Post-Test Survey" and come from Introduction to
Information Literacy (LIBY 1210), a course taught at Cal State East Bay in Hayward,
California. To comply with the California State University system's mandate that each campus
have an information literacy component, the University Libraries offers this course in
information literacy to incoming freshmen.
Each section of the course is designed by its individual instructor; however, the learning
objectives remain the same across sections:
1. Formulate a research question
2. Develop and apply appropriate search strategies
3. Evaluate strategies and results; revise as needed
4. Describe research processes and communicate results
5. Understand and apply principles of information ethics
The assessment used is a pre-/post-test pairing administered before and after the course.
Because the assessment is anonymous, the pairing is designed to compare the achievement of the
class as a whole rather than the achievement of individual students. The assessment is 20
questions long, is administered under the title "Anonymous Survey," and offers four multiple-choice
answers for each question; option D is "I don't know" for all twenty items. The assessment is
offered as an online exam to online-only students and as an in-person exam for hybrid classes.
On the surface, the goal of the assessment is to measure students' understanding of
information literacy before and after the course; because the assessment is anonymous, students'
grades are unaffected. A secondary goal is to measure whether the instructor was able to teach
information literacy effectively.
Score Interpretation
In the case of the library pre-/post-test results, the score we see in the data file is the
number of students who chose a specific answer. This is a raw number - not a percentage,
percentile, etc.

For example, a value of 6 in cell A3 would indicate that six students chose A as
their response. This format persists throughout the entirety of the file.
Additionally, these raw counts can also be expressed as percentage scores.

For example, one item in the file received 271 total responses, of which 185 were correct.
Dividing the number of correct responses by the total shows that about 68% of students chose the
correct answer.
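
This arithmetic can be scripted directly. The short sketch below is only an illustration, assuming
the raw counts have already been copied out of the data file by hand; it reproduces the
185-out-of-271 calculation described above.

# Minimal sketch of the percentage calculation described above.
# The counts come from the worked example in the text: 185 correct
# responses out of 271 total responses on one item.

def percent_correct(correct_count, total_count):
    """Convert a raw count of correct responses into a percent-correct score."""
    return 100 * correct_count / total_count

print(round(percent_correct(185, 271)))  # prints 68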
As previously stated, the goal of this pre-/post-test is to measure the learning that has
occurred over the ten-week quarter. The hope is that the data show an improvement, from
beginning to end, in students' understanding of basic information literacy topics. We can check
this by looking at the percentage change from the pre-test to the post-test and looking for a
positive change.
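
A sketch of that comparison appears below. The pre-test counts are hypothetical placeholders
(only the post-test figures reuse the 185-of-271 example above), since the full data file is not
reproduced in this report.

# Minimal sketch of the pre-/post-test comparison described above.
# Pre-test counts are hypothetical; post-test counts reuse the worked
# example. A positive change suggests that learning occurred.

def percent_correct(correct, total):
    return 100 * correct / total

pre_correct, pre_total = 120, 250     # hypothetical pre-test counts for one item
post_correct, post_total = 185, 271   # post-test counts from the example above

change = percent_correct(post_correct, post_total) - percent_correct(pre_correct, pre_total)
print(f"Change: {change:+.0f} percentage points")  # Change: +20 percentage points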
This interpretation is reliable: one of the group members is a librarian at Cal State East
Bay and has been working with this data for the past two years. This group member also
consulted with the head of instruction at the University Libraries to confirm that the
interpretation is correct.
Data Usage
This data is used in multiple ways in the work and school environment. For individual
instructors of the course, this pre-test can be thought of as a summative assessment. A positive
change in terms of understanding some basic concepts could indicate successful teaching, while
a negative change could signify some issues with teaching.
Additionally, the University Libraries could use this data to show the benefits of
the course - in this way, administrators are able to see that, though the change is only minor, there
are positive changes occurring because of the existence of this class. In terms of future usage, the
University Libraries is hiring an assessment-focused librarian whose main job duties will
include working with assessment data and developing reports and action items from these results.
In the real world, this data is incorporated into annual reports written by the head of
instruction that describe developments within our instruction program. These reports are read by
the Dean of the Library, the Provost, and various Academic Senate committees. This
data is also reported to accrediting bodies such as the Western Association of Schools and
Colleges (WASC). Though individual instructors could use this data to determine where there
are gaps in their instruction, in reality this rarely occurs.
Consequences
The library faculty at Cal State East Bay use the data gathered from this test to evaluate
the content and teaching of their Introduction to Information Literacy (LIBY 1210) course. For
example, looking at the differences between the pre- and post-test data, the faculty can see that
students correctly answered questions 5 and 15 fewer times on the post-test than they had on the
pre-test. This decrease (-12% and -11%, respectively) can inform the faculty that more time or
explanation is required when covering those topics.
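
One way the faculty could surface such items systematically is sketched below. The percentages
are hypothetical placeholders; only the size of the drops on questions 5 and 15 mirrors the
changes reported above.

# Sketch of a check that flags questions whose percent-correct dropped
# from the pre-test to the post-test. All figures are hypothetical
# except that the drops on items 5 and 15 match the -12 and -11 point
# changes noted in the text.

pre_pct = {5: 52, 15: 47, 18: 60}    # hypothetical percent correct on the pre-test
post_pct = {5: 40, 15: 36, 18: 71}   # hypothetical percent correct on the post-test

for question, before in sorted(pre_pct.items()):
    delta = post_pct[question] - before
    if delta < 0:
        print(f"Question {question}: {delta:+d} points -- revisit this topic")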


However, the faculty should not simply look at the overall percentage change; they should also
look deeper into the data. They'll find that students could be misunderstanding a different
concept altogether. For example, on question 5 (for which A is the correct answer), students are
not only confused about response A but are also drawn to response C. The faculty should ask
themselves why there has been a 14% increase in students incorrectly selecting C after completing
the course.
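
A sketch of that kind of distractor check appears below. All of the answer counts are invented
for illustration; the point is only to show how a shift toward option C, such as the 14% increase
noted above, could be quantified from the raw counts in the data file.

# Sketch of a distractor analysis for a single item (question 5, where
# A is keyed as correct). Counts are hypothetical.

def option_shares(counts):
    """Return each option's share of responses as a percentage."""
    total = sum(counts.values())
    return {opt: 100 * n / total for opt, n in counts.items()}

pre = {"A": 140, "B": 40, "C": 30, "D": 60}    # hypothetical pre-test answer counts
post = {"A": 110, "B": 45, "C": 68, "D": 47}   # hypothetical post-test answer counts

pre_s, post_s = option_shares(pre), option_shares(post)
for opt in "ABCD":
    print(f"Option {opt}: {post_s[opt] - pre_s[opt]:+.0f} points")  # C comes out around +14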
The faculty may also consider the test instrument itself. The question reads, "When you need to
understand a topic that is unfamiliar to you, there are several places to begin. Which one would
you NOT choose?" Students may find the use of the negative NOT confusing.
Successful completion of the Information Literacy course could make the difference between
getting through college and dropping out. According to Samson (2010), "the critical need for
students to be knowledgeable about finding, retrieving, analyzing, and effectively using
information has become a campus-wide issue" (p. 202). This course enables students to better
understand how to acquire the information they will need to be successful in college and as
lifelong learners. If students don't do well in the course, they will be at a disadvantage:
research will be more difficult and take longer to complete.
The other consequence of the Information Literacy course involves the institution's
accreditation. Cal State East Bay is accredited by the Western Association of Schools and
Colleges (WASC). A core competency of the accreditation is information literacy (WASC,
2013). Information literacy needs to be incorporated throughout the curriculum, and students
need to have a basic understanding beginning in their first year. Libraries throughout the country
have made information literacy programs a primary focus (Xiao, 2010).
Quality of Data
Before making use of the data collected, the conditions under which the test was given
should be considered to determine whether the data are appropriate to use.
The first consideration is whether or not the test is statistically fair. Issues of validity,
bias, access, administration, and any social implications or consequences should be reviewed. In
terms of validity, a test should be assessed to determine whether it accurately measures the
objective, whether it meets the stated criteria, and how reliable its results are. This test uses
multiple-choice questions to determine whether or not the learners have acquired the intended
learning. Multiple-choice questions allow for guessing, which could explain why seven of the
twenty questions received a lower score on the post-test than on the pre-test. However, the
material is specific to the learning goals. The scores have a range of 51% on the pre-test and
60% on the post-test. In conclusion, the test has content validity but poor reliability.
Next, bias should be examined and eliminated. The test that the learners took after
instruction did not contain any language that could be deemed offensive. To remove scoring bias,
the test should be scored against a common key or rubric; this is where multiple choice, with only
one correct answer per question, is beneficial.
Students should also have access to preparation for the test. All learners who took this
assessment were engaged in the same lectures and were directed to the same reading material.
Tests should be set up so that all students have the opportunity to engage in the same test
preparation and are tested on material that is equally familiar to everyone. Test administration
should also be uniform across all students: the test must be securely administered so that no
student is given additional preparation time or access to the test before other students, every
student should be given the same test content, and the testing conditions should be optimal for
all students.
Considering these factors, should the results of this test be used? Yes. The test was
uniformly administered to all students before and after instruction. The content and language of
the test do not contain offensive or derogatory words. The test was administered in a way that all
students were comfortable with. As such, the test can be viewed as fair and therefore usable.
However, one should note that 35% of the items (7 of 20) showed a decrease in score. This should
be examined to determine whether there is a pattern of students getting a particular category of
information incorrect, which could imply that the instructor did not provide clear instruction on
that topic. If there is no pattern, it could mean that students are guessing and that the results
of the test are essentially random.
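
The sketch below shows what such a pattern check might look like. The question-to-topic mapping
and the list of declining items are hypothetical, since the test blueprint is not included in
this report.

# Sketch of the pattern check described above: group the items whose
# scores dropped by topic and see whether the drops cluster. The topic
# assignments and the list of dropped items are hypothetical.

from collections import Counter

topic = {3: "citation", 5: "evaluating sources", 9: "citation",
         12: "search strategy", 15: "evaluating sources", 17: "search strategy"}

dropped = [5, 15, 9]   # hypothetical items with lower post-test scores

by_topic = Counter(topic[q] for q in dropped)
for category, count in by_topic.most_common():
    print(f"{category}: {count} item(s) declined")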
Beyond concerns with the test itself, the data file is difficult to navigate and understand.
Without a legend or some sort of description, it is hard to determine what the scores actually
represent. The layout of the data also makes it hard to navigate, as it consists of multiple
columns and rows of numbers with no demarcations or dividing lines.
Conclusion
In analyzing data obtained through a pre-/post-test, our group gained a working
knowledge of different ways to interpret and evaluate this kind of information. By using data
that measured an objective unfamiliar to most group members, we were able to think more
critically about issues with this data and the way it was obtained and stored.


References
Mission and goals. (2013). In CSU East Bay Library Undergraduate Catalog. Retrieved from
http://www20.csueastbay.edu/ecat/undergrad-chapters/u-liby.html
Samson, S. (2010). Information literacy learning outcomes and student success. The Journal of
Academic Librarianship, 36(3), 202-210.
WASC. (2013). Handbook of accreditation. Retrieved from
http://www.wascsenior.org/content/2013-handbook-accreditation
Xiao, J. (2010). Integrating information literacy into Blackboard. Library Management, 31(8/9),
654-668.
