
Lessons and Dilemmas: Implementing and Evaluating a College-Wide iPad Project

Presenters: Charles Timothy (Tim) Dickel, EdD, and Maya M. Khanna, PhD, Creighton University

EDUCAUSE Connect Chicago, Illinois March 18, 2014

PROJECT OVERVIEW
November 2011: The AVP and Director of Undergraduate Admissions propose that an iPad program in the College of Arts and Sciences will make the institution more attractive to prospective students and their families.

December 2011: An associate dean of the College is asked to direct the project, and Dickel and Khanna are invited to a meeting to discuss how to evaluate the proposed project. Freshmen entering the College in fall 2012 will be encouraged to bring an iPad.

Question 1

How would you suggest approaching this challenge, given your role at your college or university?

Question 2

How would you implement this project across an undergraduate college of 2,500 students?

Question 3

What would you do to attract faculty involvement in this project?

Question 4

How would you prepare recruited faculty to teach using an iPad?

Question 5

What would you suggest as the outcomes for this program?

Question 6

How would you evaluate this project across an undergraduate college of 2,500 students?

Student Learning Data Comparing iPad and Non-iPad Course Deliveries

Non-iPad Sections: 238 out of 687 students completed both the pre- and post-test sets. iPad Sections: 174 out of 198 students completed both the pre- and post-test sets.
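The two completion rates implied by these counts differ sharply, which matters when interpreting any comparison between the groups (differential attrition can bias results). A minimal sketch of the arithmetic, using hypothetical variable names:

```python
# Counts taken from the slide: students completing both pre- and post-tests.
non_ipad_completed, non_ipad_enrolled = 238, 687
ipad_completed, ipad_enrolled = 174, 198

non_ipad_rate = non_ipad_completed / non_ipad_enrolled  # ~0.346
ipad_rate = ipad_completed / ipad_enrolled              # ~0.879

print(f"Non-iPad completion: {non_ipad_rate:.1%}")
print(f"iPad completion:     {ipad_rate:.1%}")
```

Roughly 35% of non-iPad students versus 88% of iPad students completed both test sets.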

Faculty Response to Pre/Post Content Assessment

Content Assessments
We asked each instructor to design an exam that would gauge students' understanding of the most central/important content from their course. We asked that they use a 25-question multiple-choice exam. There was some variability in the type of assessments that instructors designed.

Pre/Post Content Assessment


Instructors submitted student scores on the pre- and post-tests. We calculated the proportion correct for each student respondent.
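The scoring and filtering steps described here can be sketched in a few lines. This is a minimal illustration with made-up records, assuming a 25-item exam as described above; the identifiers and data layout are hypothetical, not the project's actual pipeline:

```python
# Hypothetical score records: (student_id, test_time, items_correct) on a 25-item exam.
EXAM_LENGTH = 25
scores = [
    ("s01", "pre", 8), ("s01", "post", 17),
    ("s02", "pre", 10),                      # no post-test: will be excluded
    ("s03", "pre", 9), ("s03", "post", 16),
]

# Proportion correct for each student at each test time.
by_student = {}
for sid, time, n_correct in scores:
    by_student.setdefault(sid, {})[time] = n_correct / EXAM_LENGTH

# Keep only students with both a pre- and a post-test score.
complete = {sid: t for sid, t in by_student.items() if {"pre", "post"} <= t.keys()}
print(complete)  # s02 is dropped for lacking a post-test
```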

Analyses of Pre/Post Sets


For the analyses, we used only the participants with both pre- and post-tests available. We conducted a repeated-measures ANOVA using time of test (pre vs. post) as a within-participants variable and mode of instruction (iPad section vs. non-iPad section) as a between-participants variable (participants are the students).
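With only two time points, the time × group interaction in a 2×2 mixed ANOVA is equivalent to a between-groups comparison of each student's pre-to-post gain score. A minimal sketch of that equivalent gain-score comparison, using Welch's t on made-up proportions (the data and sample sizes here are illustrative only, not the project's results, which were analyzed with a repeated-measures ANOVA as stated above):

```python
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (ma - mb) / se

# Hypothetical (pre, post) proportion-correct pairs for a few students per group.
ipad = [(0.32, 0.68), (0.28, 0.70), (0.36, 0.66)]
non_ipad = [(0.40, 0.64), (0.36, 0.68), (0.32, 0.62)]

# Pre-to-post gain per student; the group difference in gains carries the interaction.
ipad_gains = [post - pre for pre, post in ipad]
non_ipad_gains = [post - pre for pre, post in non_ipad]
print(welch_t(ipad_gains, non_ipad_gains))
```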

Comparison of iPad and Non-iPad Deliveries


There was an interaction between time of test and iPad use, F(1, 390) = 17.14, p < .001. There was a main effect of time of test, F(1, 390) = 888.9, p < .001. There was a marginal effect of instruction mode on test scores, F(1, 390) = 2.86, p = .093.

[Bar chart: mean percent correct by test time and section type — iPad sections: pre-test 30.62, post-test 68.024; non-iPad sections: pre-test 37.85, post-test 66.04.]
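The direction of the reported interaction can be sanity-checked against the plotted means, assuming the bar labels pair as iPad pre-test 30.62 / post-test 68.024 and non-iPad pre-test 37.85 / post-test 66.04 (that pairing is inferred from the chart's label order):

```python
# Mean percent correct read off the slide's bar chart (pairing assumed, see above).
ipad_pre, ipad_post = 30.62, 68.024
non_ipad_pre, non_ipad_post = 37.85, 66.04

ipad_gain = ipad_post - ipad_pre              # ~37.4 percentage points
non_ipad_gain = non_ipad_post - non_ipad_pre  # ~28.2 percentage points
print(ipad_gain, non_ipad_gain)
```

The larger pre-to-post gain in the iPad sections is consistent with the reported time × iPad-use interaction.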