
STRENGTHENING THE TRUST IN ONLINE COURSES:

A COMMON SENSE APPROACH


Ronny Richardson and Max North
Business Administration Department
Southern Polytechnic State University
rrichard@spsu.edu and max@spsu.edu
ABSTRACT
Online course offerings are one of the most effective and efficient methods for delivering content and skills globally. However, the level of trust in online courses is very low, particularly with respect to the credibility of online assessments. Simply put, if trust in online assessment can be raised to the level of trust in the classroom assessment environment, trust in online courses will increase accordingly. To investigate and strengthen trust in online assessment, the researchers incorporated a simple and obvious solution: proctoring the exams in online courses and comparing the results with those of non-proctored exams. In this study, 11 proctored exams were compared against 22 identical non-proctored exams. In 19 out of 22 cases, the class average on the non-proctored exam was higher than the class average on the corresponding proctored exam; the difference was statistically significant at the 95 percent confidence level in 15 cases. Findings indicate that proctoring is an effective, common-sense approach that can, in turn, potentially strengthen trust in online courses and distance education.
INTRODUCTION
Although trust in online courses and degrees seems to be low, online course
offerings are one of the most effective and efficient methods for delivering content and
skills. There are many common-sense advantages to the online course approach,
including the wide variety of degree programs and classes offered, flexible study times,
and the ability for students to balance career and education. The online
movement is evolving dramatically, leading to more creative approaches such as
Massive Open Online Courses (MOOCs), which promote information sharing and have
created many opportunities for teaching and learning in a variety of disciplines [1].
Among the pioneers of the MOOC movement are MIT OpenCourseWare, which provides an
individual, self-paced learning environment, and the recently incorporated, open-source
learning management system Moodle. At a more limited level, many universities and colleges are
offering either hybrid or fully online courses in many departments, such as Business
Administration, Accounting, Mathematics, English, Physics, and Chemistry, to name
a few. However, the level of trust in online course outcomes, and especially in assessment
(e.g., exams), has been very low [2]. In general, the delivery method of online courses
is commonly accepted, but assessment lacks credibility due to the possibility of
cheating in many different ways while taking online tests [3, 4, 5]. Therefore, if the
opportunity for cheating can be reduced to the level of the classroom testing environment,
the trust and credibility of online course assessment, and by extension online course
outcomes, will increase tremendously [6, 7, 8]. To strengthen trust in online
assessment, the authors devised a simple solution: proctoring the exams in
online courses. To validate this approach, a series of courses was selected and an
extensive experimental study was conducted.
METHODOLOGY
To validate our approach, the testing facility in the XYZ University (XYZ)
Continuing Education Center arranged proctored testing services for four sections of
Online Business Administration courses.
Specifically, four courses were selected for proctoring (see Table 1). These courses
ranged from freshman undergraduate to advanced graduate courses. For each of these
classes, except ACCT 6003, the authors had testing results without proctoring from
Spring 2011 and Fall 2010, and there were no significant changes to the courses or exams
over this period. A direct comparison, therefore, made sense. ACCT 6003 is only taught
in the fall, so we initially planned to compare Fall 2010 to Fall 2011, and we did indeed
use that comparison. However, we ended up offering ACCT 6003 in both the first and second
eight weeks of Fall 2011. The first eight-week section was proctored and the second was
not, which gave ACCT 6003 a second non-proctored section as well, so all four courses had
two non-proctored sections for comparison.
ACCT 2101 (Accounting I): This course is taken by a number of majors on campus. It has
four exams: three chapter exams and a comprehensive final exam. All exams are true/false
and multiple-choice, so there is no possibility of instructor bias in the grading.

ECON 1101 (Introduction to Economics): This course is a core class. It has three exams:
two chapter exams and a final exam. All three are true/false and multiple-choice, and
there is no instructor involvement in the grading, so there is no possibility of
instructor bias in the grading. In Fall 2011, all three exams were proctored.

MGNT 3105 (Management and Organizational Behavior): This course is taken by a number of
majors on campus and is required for a minor in Business, so many students take it. It
has four chapter exams and a comprehensive final exam. Extended University felt this was
too many exams to proctor, so only the first and third chapter exams and the final exam
were proctored. The second and fourth chapter exams were not proctored, which gave us the
ability to compare proctored and non-proctored exams within the same course. All exams
are true/false and multiple-choice, so there is no possibility of instructor bias in the
grading.

ACCT 6003 (Accounting Theory): This course is taken only by students in the Master of
Science in Accounting graduate program. The class uses cases and other non-exam material,
so there is only one exam in the class. Unlike the other exams in the study, this exam
was not exclusively true/false and multiple-choice: 76 percent was true/false and
multiple-choice, while 24 percent was instructor-graded short answer and essay questions.
Since only 24 percent was instructor-graded, it was felt that the impact of any
instructor bias in grading would be minimal.

Table 1. Exam structures and miscellaneous information for the experimental courses.
Fifty-five (55) students attended our facility, and an additional ten (10) students
attended external test proctoring sites not directly connected with XYZ. The number of
tests varied by class. For the students who took tests at our facility, we were involved
in scheduling testing times within the guidelines prescribed by the instructors, verifying
the identification of the test takers, providing secure testing computers, and monitoring
the test takers for the duration of the test. These services were provided as part of an
experiment at no charge to the students of the Business Administration department.
For the students who took tests at external sites, we were involved in helping the
students identify acceptable testing locations in their area, communicating the proctoring
requirements to these locations, and reimbursing the students for fees paid to the external
proctoring sites. The Continuing Education Center absorbed the cost of the test
reimbursements.
To make comparisons between courses meaningful, all exam scores were scaled to
100 points before computing the means and standard deviations. (Unscaled point totals
ranged from 50 to 1,000.) The median was also computed to check for outliers, but
comparing the means to the medians did not raise any concerns regarding outliers.
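As a small illustration of the scaling step described above, the following sketch (not the authors' code; the function name and sample scores are hypothetical) rescales raw exam totals to a common 100-point scale and computes the mean, standard deviation, and median used in the comparisons.

# Minimal sketch of the scaling and summary statistics described in the text.
# The raw scores and maximum point value below are hypothetical.
from statistics import mean, stdev, median

def scale_to_100(raw_scores, max_points):
    """Rescale raw exam scores (out of max_points) to a 100-point scale."""
    return [score / max_points * 100 for score in raw_scores]

# Hypothetical raw scores from an exam graded out of 250 points.
raw = [180, 210, 155, 230, 200]
scaled = scale_to_100(raw, max_points=250)

# The median was compared to the mean as a simple check for outliers.
print(f"mean = {mean(scaled):.1f}, std dev = {stdev(scaled):.1f}, median = {median(scaled):.1f}")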
RESULTS
Collectively, this experiment provided results for eleven proctored exams. In ten of
those eleven exams, the score on the proctored exam was lower than on both of the two
non-proctored exams to which it was compared, often by substantial amounts. The single
exception was the final exam in ECON 1101, where the proctored final exam results were
virtually identical to the non-proctored results. The largest difference between
proctored and non-proctored exams was in ACCT 2101, and the smallest difference was in
the graduate ACCT 6003 course. Detailed results of the selected courses are given on a
course-by-course basis in Table 2.
To provide a quick glance at the results of this experiment, Table 3 compares the
non-proctored exams to the proctored exams and indicates any statistically significant
difference at the 95% confidence level.
Results of Non-Proctored Exams Compared to Proctored Exams

                 Spring 2011      Significant at 95%   Fall 2010        Significant at 95%
ACCT 2101
  Exam #1        Higher           Yes                  Higher           Yes
  Exam #2        Higher           No                   Higher           No
  Exam #3        Higher           Yes                  Higher           Yes
  Final Exam     Higher           No                   Higher           Yes
ECON 1101
  Exam #1        Higher           No                   Higher           Yes
  Mid-Term       Lower            Yes                  Higher           Yes
  Final Exam     No Difference    No                   No Difference    No
MGNT 3105
  Exam #1        Higher           Yes                  Higher           Yes
  Exam #3        Higher           Yes                  Higher           Yes
  Final Exam     Higher           Yes                  Higher           Yes
ACCT 6003
  Exam #1        Higher           No                   Higher           Yes

Table 3. Results at a glance of non-proctored exams compared to proctored exams (significance at the 95% confidence level).
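As an illustration of the kind of comparison summarized in Table 3, the sketch below runs a two-sample test on scaled scores from one non-proctored and one proctored section and reports the direction of the difference and whether it is significant at the 95% level. The paper does not specify which statistical test was used; Welch's t-test is shown here as one reasonable choice, and the score lists are hypothetical.

# Illustrative only: Welch's two-sample t-test on hypothetical scaled (0-100) scores.
from scipy import stats

def compare_exams(non_proctored, proctored, alpha=0.05):
    """Return the direction of the mean difference and whether it is significant."""
    t_stat, p_value = stats.ttest_ind(non_proctored, proctored, equal_var=False)
    mean_np = sum(non_proctored) / len(non_proctored)
    mean_p = sum(proctored) / len(proctored)
    if mean_np > mean_p:
        direction = "Higher"
    elif mean_np < mean_p:
        direction = "Lower"
    else:
        direction = "No Difference"
    return direction, ("Yes" if p_value < alpha else "No")

# Hypothetical class scores for one exam taken without and with proctoring.
non_proctored_scores = [82, 90, 77, 88, 95, 84, 79, 91]
proctored_scores = [70, 75, 68, 80, 72, 77, 65, 74]
print(compare_exams(non_proctored_scores, proctored_scores))  # e.g. ('Higher', 'Yes')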
To further assist the reader with a quick graphical view of the results, Graph 1 provides
a high-level summary line graph depicting the non-proctored and proctored exam averages
for all four courses in this experiment.

Graph 1. Summary line graph of non-proctored and proctored exam averages for all four courses.
CONCLUSIONS
Nineteen out of 22 exams had higher average scores when they were not proctored
than when they were proctored. This is much higher than the eleven expected if the
results were randomly distributed, and it is a strong indicator of significant cheating.

While we would expect eleven non-proctored exams to have higher average scores than the
proctored exams if the data were randomly distributed, we would not expect them to be
statistically significantly higher. However, fifteen of the nineteen differences were
statistically significantly higher at the 95% confidence level. This too is a strong
indicator of significant cheating.
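The "eleven expected if random" figure reflects a coin-flip model: if proctoring had no effect, each of the 22 comparisons would be equally likely to favor either format. The short calculation below, which is illustrative rather than an analysis reported in the paper, shows how unlikely 19 or more "higher" results would be under that model.

# Illustrative sign-test style calculation; not part of the paper's analysis.
from math import comb

n = 22          # non-proctored vs. proctored comparisons
observed = 19   # comparisons where the non-proctored average was higher

# Probability of 19 or more "higher" results out of 22 if each outcome
# were a fair coin flip (binomial with p = 0.5).
p_value = sum(comb(n, k) for k in range(observed, n + 1)) / 2 ** n

print(f"Expected higher results if random: {n * 0.5:.0f}")          # 11
print(f"P(at least {observed} of {n} by chance) = {p_value:.5f}")   # about 0.0004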
Additionally, it was noted that: (i) when some of the exams in a course were proctored
and other exams in the same course were not proctored, grades on the non-proctored exams
were also lower than on the same non-proctored exams in other sections of the course; and
(ii) the difference between proctored and non-proctored exams was much lower at the
graduate level than at the undergraduate level.
The non-proctored exams in the proctored section of MGNT 3105 had average scores in line
with the averages on the proctored exams, and lower than the averages on the comparable
exams in the non-proctored sections of MGNT 3105. Proctoring some of the exams therefore
appears to be a strong deterrent to cheating even when not all of the exams are
proctored.
DISCUSSION AND RECOMMENDATIONS
From our perspective, the test proctoring experiment was conducted without any
significant problems. There was a small learning curve for both instructors and students
who had not been involved with proctoring, but nothing that warrants any changes to our
already established procedures. In the future, we would suggest that a meeting be held
with all faculty new to online proctored testing prior to the beginning of the semester.

To accommodate future demand for proctoring, it will be important for the testing
facility to be informed, before the beginning of the semester, of how many classes,
students, and exams will require proctoring, as well as the dates of the exams.
Instructors may need to be flexible about the dates for testing, since it is likely that
testing windows across classes will tend to coincide. It may be necessary for
administrators to require such flexibility from their instructors to accommodate the
finite resources of the testing facility.
REFERENCES
[1] Martin, F.G., Will massive open online courses change how we teach? Communications
of the ACM, 55, 8 (August 2012), 26-28.
[2] Prince, D.J., Fulton, R.A., Comparisons of proctored versus non-proctored testing
strategies in graduate distance education curriculum, Journal of College Teaching &
Learning, 6, 7 (November 2009), 51-62.
[3] Rowe, N., Cheating in online student assessment: Beyond plagiarism, Online Journal of
Distance Learning Administration, 2004. Retrieved June 1, 2012, from www.westga.edu.
[4] Bosch, T., Why Would Someone Cheat on a Free Online Class That Doesn't Count Toward
Anything? Posted August 20, 2012,
http://www.slate.com/blogs/future_tense/2012/08/20/coursera_plagiarism_why_would_students_cheat_ina_free_online_class_that_doesn_t_over_academic_credit_.html
[5] Rogers, C. F., Faculty perceptions about e-cheating during online testing, Journal of
Computing Sciences in Colleges, 22, 2 (December 2006), 206-212.
[6] Greenberg, R., Online Testing, Techniques: Making Education & Career
Connections, 73(3), 26-28, 1998.
[7] Baker, R., Papp, R., Academic Integrity Violation in the Digital Realm,
Proceedings from the Southern Association for Information Systems 2003
Annual Conference, 193-202, 2003.
[8] Carlson, R., Assessing Your Students: Testing in the Online Course, Syllabus,
12(7), 16-18, 2000.
