Chapter IX
Validation of E-Learning
Courses in Computer Science
and Humanities:
A Matter of Context

Robert S. Friedman
New Jersey Institute of Technology, USA

Fadi P. Deek
New Jersey Institute of Technology, USA

Norbert Elliot
New Jersey Institute of Technology, USA

ABSTRACT

In order to offer a unified framework for the empirical assessment of e-learning (EL), this chapter presents findings from three studies conducted at a comprehensive technological university. The first, an archival study, centers on student performance in undergraduate computer science and humanities courses. The second study, a survey given three times within EL classes, investigates the variables of learning style, general expectation, and interaction in student performance. The third study investigates student performance on computer-mediated information literacy tasks. Taken together, these three studies—focusing on archival, process, and performance-based techniques—suggest that a comprehensive assessment model has the potential to yield a depth of knowledge allowing stakeholders to make informed decisions on the complexities of asynchronous learning in post-secondary education.

Copyright © 2009, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.

INTRODUCTION

The aim of this chapter is to present evidence derived from three studies of e-learning (EL) targeting student outcomes. The studies were undertaken to identify a profile for potential success and to support policy guidelines for limiting registration and course enrollment. This aim is achieved through discussion of three studies conducted at a comprehensive technological university in the United States, where EL has historically been offered as an asynchronous, online alternative to traditional face-to-face classes, with content management systems such as Webboard, WebCT, and Moodle providing a "virtual classroom" environment; persistence and success rates, however, are comparatively lower for the online sections than for the face-to-face mode of course delivery. The first study is archival and centers on student performance in undergraduate computer science and humanities courses. The second study, a survey distributed three times within humanities EL courses, investigates the variables of learning style, general expectation, and interaction in student performance. The third study investigates student performance on computer-mediated information literacy tasks. Our findings confirm that many students in EL sections lack the technical and information literacy skills required to succeed. Taken together, these archival, process, and performance-based techniques warrant a unified framework for the empirical assessment of EL. Our findings suggest that a comprehensive assessment model has the potential to yield a depth of knowledge allowing stakeholders to make informed decisions on the complexities of asynchronous learning in post-secondary education.

New Jersey Institute of Technology (NJIT) has a history in e-learning (EL) beginning in the 1980s, when researchers in the university's Department of Computer and Information Science created and deployed the Electronic Information Exchange System for use in the original Virtual Classroom™ (Hiltz & Turoff, 1993). In 2002, with computing education at its enrollment height, approximately three-quarters of the 150 EL class sections (15-week semester cohorts) offered were in computer science or information systems at the undergraduate and graduate levels. Today, with 5,585 undergraduate students and 2,822 graduate students enrolled in the fall of 2008, demand for EL remains strong, and the disciplinary distribution of the approximately 60 EL sections offered during the current academic year is balanced among majors in computer science, information systems, information technology, and management; several of the university's general university requirements in humanities are offered in an EL format as well. All class teachers are historically committed to delivering high quality instruction, and all are committed to an empirical base for decision-making regarding the evaluation of these courses to facilitate positive student outcomes in terms of success with courses and degree completion (Foster, Bower, & Watson, 2002).

This chapter presents our efforts—through archival studies, surveys, and performance measures—to come to terms with the complexities of offering E-Learning courses. Within an environment that would be, in mission and vision, ideally suited to successful asynchronous instruction, it might be imagined that all measures would report incomparable success, yet such is not the uniform case. While much has been gained in our understanding of the complex variables and validation processes involved in justifying information, much remains to be done (Millwood & Terrell, 2005). Our studies reveal that a triangulated model is promising when a variety of stakeholders investigate what really happens in asynchronously offered undergraduate courses.

Archival Study: Two Disciplines

Research on rates of student success in EL classes tends to be drawn from single samples. Yet comparison of two disciplines—one invested in responding to empirically-oriented tasks in a limited response format (responding to multiple-choice questions or creating computer code), the other invested in responding to verbally-oriented tasks in an open response format (participating in on-line discussions or submitting essays)—seemed ideal in allowing more to be known about the specifics of EL across disciplinary frameworks (Elliot, Friedman, & Briller, 2005).

Analysis of Undergraduates Enrolled in Computer Science Courses

In 2002, administrators and faculty, attempting to address the existence of low student success rates in EL classes—and, subsequently, lower time to completion and graduation rates—learned they were not alone. In the USA, retention in E-Learning is a recurring issue. Low perseverance rates are found to fall in disproportional ranges when compared to traditionally taught classes (Institute for Higher Education Policy, 1999; Zimmerman, 2002). A recent study regarding perseverance in an EL introductory computer science (CS) class, for example, noted that the "dropout rate" (i.e., a student who submitted some work and then either withdrew from the class or failed to take the final examination) was 42%, a rate much higher than the 12% and the 26% for the same class, taught in prior years by the same instructor in a face-to-face format (Mock, 2003). These numbers were not greatly different from those within computing programs at NJIT.

During the spring of 2003, administrators in the College of Computing Sciences (the administrative transformation of the then-expanding Department of Computer and Information Science) determined to address the disparity between EL classes' low passing rates relative to sections offered in a traditional face-to-face (FTF) format. To validate that perception and to warrant changes in enrollment policy, student records were extracted from fall of 1996 to spring of 2002. The total sample of students for EL classes was 2,554, as Table 1 demonstrates, while the total number of students for the FTF classes was 15,468. Academic preparedness, withdrawal rates categorized according to grade point average (GPA), and class repetition were addressed in the archival study.

Taking the college admission SAT Reasoning Test, which tests students' subject matter knowledge in reading, writing, and mathematics (College Board, 2008a), as a standard measure of preparedness, the archival study revealed that the average SAT score for the students in EL courses (M = 1097) was comparable to the average FTF score (M = 1106). Similarly, the SAT scores for students

Table 1. Computer science courses: An archival analysis of grade point average (GPA)

                            Total Seats   Cumulative GPA   Cumulative GPA   Cumulative GPA
                                          less than 2.0    2.1 to 3.0       3.1 to 4.0
All CS courses
  Total enrollment          18,022        955              9,872            7,195
  Number withdrew           1,901         185              1,264            452
  Percent withdrew          10.5%         19.4%            12.8%            6.28%
Face-to-face
  Enrollment                15,468        833              8,534            7,110
  Number withdrew           1,479         142              1,009            328
  Percent withdrew          9.6%          17%              11.8%            4.6%
E-Learning
  Enrollment                2,554         122              1,338            1,094
  Number withdrew           422           43               255              124
  Percent withdrew          16.5%         35.2%            19.1%            11.3%
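The mode-by-GPA comparison at the heart of Table 1 reduces to simple rate arithmetic. The sketch below (illustrative only; the counts are transcribed from Table 1, and the variable names are ours, not the authors') recomputes the withdrawal percentages quoted in the surrounding discussion:

```python
# Withdrawal counts transcribed from Table 1: (enrollment, withdrew)
# for each delivery mode, by cumulative-GPA band.
table1 = {
    "FTF": {"<2.0": (833, 142), "2.1-3.0": (8534, 1009), "3.1-4.0": (7110, 328)},
    "EL":  {"<2.0": (122, 43),  "2.1-3.0": (1338, 255),  "3.1-4.0": (1094, 124)},
}

def withdrawal_rate(enrolled, withdrew):
    """Percent of enrolled seats that ended in withdrawal."""
    return 100.0 * withdrew / enrolled

for band in ("<2.0", "2.1-3.0", "3.1-4.0"):
    ftf = withdrawal_rate(*table1["FTF"][band])
    el = withdrawal_rate(*table1["EL"][band])
    print(f"GPA {band}: FTF {ftf:.1f}% vs. EL {el:.1f}%")
```

At every GPA level the e-learning rate is roughly double the face-to-face rate, which is the pattern the archival analysis turns on.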


who withdrew from the EL courses (M = 1095) and students who completed the EL course (M = 1099) were also comparable. With the acknowledgement of the College Board (2008b) that a range of 30 to 40 points reflects a difference in ability, the SAT scores of students who completed the FTF courses (M = 1131) may be understood to be just within the Board's established range of difference from those who withdrew (M = 1098).

If the academic preparation was similar in a comparison of EL and FTF undergraduate classes, the withdrawal rate revealed differences. Overall, the withdrawal rates from the EL classes were higher (16.5%) than from the FTF courses (9.6%). Examination of the GPA of the students revealed striking differences: for the 955 students with GPAs below 2.0, 35.2% withdrew from EL classes, while only 17% withdrew from FTF classes. Further study revealed that 66.7% of students with a GPA below 1.0 withdrew from E-Learning classes, while only 16.7% of that same group withdrew from FTF classes; in a similar pattern, 70% of students with a GPA between 1.1 and 2.0 withdrew from E-Learning classes, while only 21.3% withdrew from FTF classes. These were the highest withdrawal rates identified in the study.

For the 9,872 students with GPAs between 2.1 and 3.0, 11.8% withdrew from FTF classes, while 19.1% withdrew from EL classes. On the highest end of the spectrum, the withdrawal patterns were retained: for the 7,195 students with GPAs between 3.1 and 4.0, only 4.6% withdrew from FTF classes, while 11.3% withdrew from EL classes. Further analysis revealed that for the very best students—those with GPAs between 3.5 and 4.0—only 2.9% withdrew from FTF classes, while 7.6% withdrew from EL classes. Further analysis using a Pearson correlation revealed a statistically significant correlation between student GPA and grades in both EL (r = .542, p < .01) and FTF (r = .532, p < .01) CS classes.

Were students who withdrew from or failed the EL classes more likely to pass if they repeated those classes in EL or FTF format? Of the 422 EL students who withdrew from their classes, only 30% repeated those classes in that format. Those students then passed at a 57% success rate. The 70% who took the course in a FTF mode passed at a similar rate of 55%. However, of the 200 students who failed their EL classes, only 60% of those students were successful in their second EL class attempt; of the 71% who took FTF classes on their next attempt to pass the class, 84% succeeded.

Such archival analysis problematizes the traditional academic impulse: admit only students who have higher GPAs to EL courses. Students in both EL and FTF classes were of similar academic ability, and withdrawal rates were similar across GPAs. Taking a second EL class after failing the first resulted in a higher failure rate, while taking the course in a FTF mode increased the rate of success. Indeed, had a policy been designed to admit only students with high GPAs—a conclusion warranted by the data—then 39.6% (n = 1012) of the 2,554 students in EL classes would have been denied enrollment; had the cut-off GPA been increased to 3.0, then 57.2% of the students (n = 1460) would have been denied enrollment. Such a policy appears especially problematic when we recognize that the very best students—those with GPAs of 3.5 to 4.0 (n = 2395)—withdrew from FTF courses at a noticeably lower rate (2.9%) than those enrolled in EL classes (7.6%). Clearly, more large-scale studies need to be undertaken regarding the potential uniqueness of withdrawal rates, occurring regardless of student preparation and GPA, to the discipline of Computer Science.

Analysis of Undergraduates Enrolled in Humanities Courses

A concurrent study extracted the records of all students (n = 384) enrolled in EL humanities courses offered as electives to all undergraduates to fulfill the humanities general university requirement, from 1994 to 2001. Developed under funding from the Alfred P. Sloan Foundation through its Sloan Consortium—a web-based consortium that "encourages the collaborative sharing of knowledge and effective practices to improve online education in learning effectiveness, access, affordability for learners and providers, and student and faculty satisfaction" (SloanC, 2008)—two world literature courses were of special interest. Specifically developed to be offered in an EL environment, the courses feature lectures by the humanities faculty distributed to students (originally on VHS tape; currently streamed via the Internet) and asynchronous conferences in a WebCT format, with full use of well-articulated assignments leading to group projects and researched essays. These courses had reached their maturity in 2001 and 2002, and the 115 enrolled students provided a look at EL at its most advanced levels of delivery and student-centeredness. Unlike the majority of computing classes, which offer content that is quantitative in nature and empirical in origin (making possible widespread use of multiple-choice exams and other binary-optioned assessment measures), all humanities classes anticipated highly subjective responses to complex readings, a process requiring significant peer-peer and peer-instructor interaction. The two world literature classes offered heightened interaction with students.

While the junior-level students were far removed from any relevance to be gained from SAT comparisons (and many lacked SAT scores due to transfer status from two-year colleges), archival analysis demonstrated that the average GPA was similar between all the EL humanities courses (M = 2.85) and the two EL world literature classes (M = 2.81). Hence, the academic preparation appears to be nearly identical across groups.

Nevertheless, as was the case in the CS analysis, retention complexities are apparent. The EL world literature classes began with a total of 115 students. Within the first two weeks of the class, 52 students withdrew. As the deadline for withdrawal approached, 26 more students withdrew. By the end of the class, only 63 students were awarded final grades. Thus, there was a 42.5% loss of students in a class taught by those instructors most prepared to work within an EL environment. Ironically, the featured class, incorporating those techniques and values most central to asynchronous learning, produced a higher withdrawal rate than had the CS courses.

Taken together, the large CS archival study and the smaller world literature study are evocative; indeed, it may be useful to think of the CS study as the larger population that is representative of EL, with the literature classes part of that larger world. The similarities between student performances in the two worlds are noteworthy. There appears to be little difference in the academic preparedness of those who elect to take EL classes. Nevertheless, even with similar academic preparation, there are striking differences in the performance of the EL students. In the EL classes, regardless of discipline, the perseverance rates are unacceptably low. In order to understand this outcome, a student survey was developed, with questions practical and applied in nature, to allow further inquiry (DeTure, 2004).

Survey Study: One Discipline

To learn more about the process of student reaction to EL instruction, a survey was designed and administered to students enrolled in humanities classes over a two-semester period. Humanities classes were selected for survey because they are, at the present writing, designed to follow a best practices model of EL course delivery (Friedman, Elliot, & Haggerty, in press).

Specifically, the instructional model currently employed is best described as a socio-technical system. In terms of education, Socio-Technical Systems (STSs) are computer technologies that enable the social interaction of on-line learning, whether conversation (email), group discussion (chat), or group writing (wiki). STSs allow social networking, as well as collaborative idea generation and the sharing of knowledge through


academic journals (Whitworth, 2006; Whitworth & Friedman, 2008). As well, the humanities classes were selected because they are all writing intensive in nature. Students participate in discussions, draft, and submit writing in which referential (comprising scientific, informative, and exploratory discourse) and persuasive writing (comprising the systematic application of logical models) are both required (Kinneavy, 1971). Furthermore, the courses are informed by traditional and current directions in writing theory (Flower, 1994; MacArthur, Graham, & Fitzgerald, 2006), the ways that writing shapes and is shaped by cognition (Bazerman, 2008), and the state-of-the-art methods by which writing is assessed (Elliot, Briller, & Joshi, 2007).

As Langdon Winner (1980) long ago correctly argued, artifacts have politics. While general principles such as the Sloan Consortium's categories for success in on-line learning (Lorenzo & Moore, 2002) were in mind as the system was emerging, we sought to know the specifics of learning effectiveness, student satisfaction, and access. A model of student learning was therefore designed in the survey around the variables of student demographics, learning style, general expectations, and interaction.

During the summer of 2006, Survey 1 (n = 108) yielded a 41.53% return, Survey 2 (n = 89) yielded a 34.23% return, and Survey 3 (administered after the withdrawal date during the closing days of the class, n = 62) yielded a 23.85% return. Surveys were given in classes with the following titles: The Pre-Modern World, The Making of the Modern World, World Literature I (two sections), World Literature II, Writing about Science, Technology, and Society (two sections), and Esthetics and Modern Technology. During the fall of 2006, Survey 1 (n = 68) yielded a 50.75% return, Survey 2 (n = 58) yielded a 43.28% return, and Survey 3 (administered after the withdrawal date during the closing days of the class, n = 58) yielded a 52.25% return. Surveys were given in classes with the following titles: Technical Writing (two sections), Literature and Medicine (two sections), and Engineering Ethics. Each instructor in each class had previously taught in an E-Learning environment and had volunteered to teach these classes, electives for all students that satisfy the general university requirements.

Demographics

Nineteen questions on Survey 1 in our study were designed to obtain information about our students. While questions of gender, age, and ethnicity were standard, questions were also asked about the best language for writing and hours worked per week. To determine if the students had the technology at hand for these digitally intensive E-Learning classes, questions were included regarding the availability of high-speed connections and computers. Information regarding experiences with previous on-line courses was also requested.

Broadly viewed as one of the nation's most diverse campuses, the NJIT undergraduate student enrollment composition for 2005—the time after which all of the students in the survey were enrolled—reveals that 34.8% of the students are white, non-Hispanic, 20.2% are Asian/Pacific Islander, 31.2% are Hispanic, and 10.8% are Black, non-Hispanic. (14.8% declined to state their ethnicity, and 5.9% identified themselves as non-citizens.) Twenty percent of the 5,360 undergraduates were women, while 80% were men.

During the summer, 84.3% (n = 91) of the students reported that their best language for writing was English (χ²(1, N = 108) = 50.704, p = .01) as opposed to another language. When asked about the hours they were working either on or off campus, 47.2% (n = 51) reported that they were working 35 or more hours per week, while only 13.9% (n = 15) reported that they were working less than 5 hours per week. When asked if they used a DSL or cable connection when engaged in online learning, 94.4% (n = 102) reported that they did (χ²(1, N = 108) = 85.333, p = .01); asked if a computer was reserved primarily or exclusively


for their use, 89.9% (n = 97) replied that there was (χ²(1, N = 108) = 156.167, p = .01). Asked if this was their first on-line learning class, 39.8% of the students replied that it was, while 32.4% (n = 35) replied that they had enrolled in two or fewer classes; only 27.8% (n = 30) of the students were experienced on-line learners, having taken three or more such classes.

During the fall, 77.9% of the students (n = 53) reported that their best language for writing was English (χ²(1, N = 68) = 21.235, p = .01). When asked about the hours they worked, 27.9% (n = 19) reported that they were working 35 or more hours per week, while 27.9% (n = 19) reported that they were working less than 5 hours per week. When asked if they used high-speed connections, 97.1% (n = 66) replied that they did (χ²(1, N = 68) = 60.235, p = .01); and 97.1% (n = 66) replied that they had a computer reserved for their use (χ²(1, N = 68) = 60.235, p = .01). Twenty-five percent (n = 17) of the students replied that this was their first experience with on-line learning, while 33.8% (n = 23) had enrolled in two or fewer classes. Forty-one percent (n = 28) of the students reported that they had taken three or more on-line classes.

Among the most striking results associated with the demographic portion of the survey were the perseverance rates revealed by WebCT's "Quizzes/Surveys" tool. The tool captures each student enrolled in the class at the time the survey was taken and maintains the students' names in the database. Hence, the tool records students who enroll yet withdraw before the registrar's deadline. These students may be termed "shoppers," as they take up seats in the E-Learning classes beyond the deadline to drop or add a class, preventing other students from enrolling.

During the summer semester, 260 names appeared for the first survey; by the end of the semester, only 176 of these names appeared on the roster. Thus, 84 seats—47.72% of the available seats—were taken by students whose names never appeared on the final roster. During the fall semester, 134 names appeared on the first survey; by the end of the semester, 111 names appeared on the final roster—a 17.16% loss of available seats due to the presence of shoppers. With 47% of students working nearly full-time in the summer and 28% working that same amount in the fall, it is easy to see how the convenience of asynchronous access can be confused with ease of course content (Williams & Hellman, 2004).

Learning Style

To provide information about learning styles, the survey included nine questions that asked students to select between viewing the instructor as the class expert or the class facilitator, along with questions on motivation and preferences in deadlines. Analysis of Survey 3, the summative summer set of responses, revealed that 62.9% of the responding students felt that they wanted the instructor to serve as an expert, rather than a facilitator, in class. That response lessened during the longer fall term, yet 58.6% remained steadfast in their views that the instructor should serve as expert, not facilitator. Regarding motivation, however, students did not feel that the instructor was the greatest motivator in class. In the summer, 61% said that their own self-motivation was most important, followed by the instructor (32.3%), with fellow classmates placing a distant third (6.5%). A similar set of responses was given in the fall: self-motivation (67.2%), the instructor (29.3%), and classmates (3.4%). Overwhelmingly, students preferred hard deadlines over no deadline for work to be submitted. The preference for hard deadlines in the summer (79%) was reflected again in the fall survey (70.7%).

General Expectations

While the survey included seventeen questions asking students to identify the degree of difficulty of the class, the amount of time, and the degree of engagement required for success, the survey also asked if the students found the writing demands of E-Learning courses different than those encountered in face-to-face classes.

During the summer, 91.9% of the students reported that they felt that the EL class was either the same as, or harder than, a FTF class, and in the fall 82.6% of the students reported the same level of difficulty. In the summer, 92.9% of the students reported that the class required either the same or more time than an FTF class, and 86.3% of the fall students reported the same level of time required; indeed, in the summer, 64.4% reported that more time was needed, and in the fall 50% reported the same. Again, the survey proved sensitive to the time of class; in the summer, 54.4% of the students reported that the course materials needed to be engaged four or five days a week, while in the fall, 50% of the students reported that the materials needed to be engaged two or three times a week. When that engagement occurred during the summer, 37.1% of the students reported that the engagement required two to three hours of work, and 61.3% reported that more than 3 hours were required; in the fall, 25.9% of the students reported that between one and two hours were required, 39.9% reported that between two and three hours were required, and 32.8% reported that three or more hours of engagement were required. During the summer, 82.3% of the students reported that their attention was either focused or completely focused, while in the fall 75.9% of the students reported similar levels of attention. Regarding the level of writing time, 66.1% of the summer students reported that the E-Learning class required more writing time, and 62.1% of the fall students reported that more writing time was required.

Interaction

Nine questions were designed to gather information about the importance of interaction with the instructor, with classmates, with the digital video material, and with the on-line reading material. Two of the questions employed a six-point Likert scale constructed according to a balanced model in which the labels formed an equal-interval continuum bounded by opposite poles with a neutral midpoint (Dunham & Davidson, 1991). On these questions, 65.5% of the summer students reported that they either very strongly agreed or strongly agreed that the instructor was an important part of the class (M = 4.98, SD = .98). In the fall, 51.7% of the students reached the same conclusion (M = 4.62, SD = 1.14). In the summer, 37.1% of the students agreed with the statement that collaborative classmate interaction was an important part of the class (M = 4.16, SD = 1.27), while in the fall 44.8% of the students agreed with this statement (M = 4.1, SD = 1.22). Overwhelmingly, students reported that the digital video material was either easy or foolproof to use (89.1% in the summer; 74.1% in the fall). A similar finding was true of the on-line reading material, stored both in the course management system and the university's digital databases: 95.3% of summer students reported that access was either easy or foolproof, and 89.7% of the fall students reported the same levels of access.

Learning Style and General Expectations Variable Interactions

The relationships between learning style and general expectations would, we hoped, reveal more about the variables of E-Learning in undergraduate humanities as they were hosted within our STS framework. Studies on general expectations within an E-Learning environment have also become central to E-Learning research. As enrollments have continued to grow—the National Center for Education Statistics (2003) has reported that during the 12-month 2000–2001 academic year, 56 percent (2,320) of all 2-year and 4-year degree-granting institutions offered distance education courses, with an enrollment of 3,077,000 during that period—topics such as instructor preparedness have come to the forefront of research. Shavelson & Huang (2003), for example, found that many universities lack faculty who are properly


trained for creating and delivering online classes, resulting in lower student success rates.

Elements for success identified by Frey, Faul & Yankelov (2003) include online posting of grades, sufficiently detailed and accurate lecture notes, and well-defined guidelines on how to finish the assignments, as well as consistent, constant interaction with the instructor. Such contact and interaction can take the form of virtual office hours and phone availability, lecturer clarity and dependability, according to Memon, Shih & Thinger (2006). In terms of behaviors that foster success, Ley & Young (2001) find that self-regulation is an essential component. This research is complementary to Dabbagh & Kitsantas (2005), who suggest that instructors spend more time building up and designing classes that actually promote self-regulation. In their comparative study, Leners & Sitzman (2006) find that caring in face-to-face classrooms is experienced through voice, body language, facial expressions, and behaviors that translate, according to Dillon & Stines (1996), to the E-Learning classroom through means of an empathetic perspective, timeliness of communications, and a tone of appreciation. This finding is in accord with Simonson (1996), who stresses that the relationship between the teacher and student may also matter in the student's success. For Cereijo (2006), a similarly optimistic attitude and a strong willpower are required for student success, facilitated by self-discipline and enthusiasm reinforced by faculty-student interactions. Generally, as Jamison (2003) has found, capability beliefs, a responsive environment, a goal-oriented curriculum, mutual respect, enthusiasm, and diplomacy are strong factors of student completion.

In the context of the success factors above, scale construction proved promising. Survey 3, given at the end of the classes, was used to construct the scales. Questions were grouped and combined to produce the scales and subscales. Questions of interest, including the ones discussed above, were combined to produce the Learning Style Scale. The General Expectations Scale also employed key questions, and two subscales were devised, focusing on E-Learning versus face-to-face comparisons and on comparative issues of time. The Interaction Scale was based, as well, on the questions of interest discussed above. If relationships were identified among the scales, then a coherent model could be developed. Overall, as Table 2 demonstrates, the internal consistencies of the scales were essentially strong, ranging from .698 to .787. The relationship of all scales to subscales, as measured by Cronbach's alpha, was .549 during the summer semester and .515 in the fall semester.

In the summer, the General Expectations Scale and its Subscale 1 demonstrated a high Pearson product-moment correlation and high significance (summer, r = .759, p < .01; r = .797, p < .01). In the fall, the General Expectations Scale and both Subscales 1 and 2 demonstrated similarly high correlations and high significance (fall, r = .833, p < .01; r = .858, p < .01; r = .431, p < .01). The

Table 2. Cronbach's alpha correlations for learning style, general expectations, and interaction scales

                                                                        Summer 2006   Fall 2006
Scale                                                                   (n = 62)      (n = 58)
1. Learning Style (variable with 6 questions)                              .730          .741
2. General Expectations (variable with 4 questions on E-Learning
   vs. FTF comparison and 3 questions on time)                             .698          .735
3. General Expectations Subscale 1: E-Learning vs. FTF
   (variable with 4 questions)                                             .764          .787
4. General Expectations Subscale 2: Time (variable with 3 questions)       .740          .758
5. Interaction (variable with 4 questions)                                 .744          .719
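For readers who want to reproduce the internal-consistency statistic reported in Table 2, Cronbach's alpha can be computed directly from an item-response matrix. The sketch below uses hypothetical survey responses rather than the chapter's data; the function name `cronbach_alpha` and the simulated 8 x 4 response matrix are illustrative assumptions only.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) score matrix.

    alpha = k / (k - 1) * (1 - sum(item variances) / var(total score))
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each question
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 8 respondents answering the 4 Interaction Scale
# questions on a 5-point agreement scale (illustrative only).
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(8, 1))   # a shared underlying trait level
responses = np.clip(base + rng.integers(-1, 2, size=(8, 4)), 1, 5).astype(float)
print(f"alpha = {cronbach_alpha(responses):.3f}")
```

An alpha near .70 or above is conventionally read as acceptable internal consistency, which is how the .698 to .787 range in Table 2 is interpreted.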

Validation of E-Learning Courses in Computer Science and Humanities

key to these strong within-scale relationships is to be found in the writing intensive nature of the E-Learning classes. As the high correlations between the General Expectations Scale and its two Subscales demonstrate, it is justified to think about student writing as being the cohesive vehicle by which the STS framework is mediated.

However, while the General Expectations Scale proved cohesive, no similar cohesiveness was demonstrated among the scales. Only a single statistically significant positive correlation (.394, p < .01) was found in the summer semester between the Interaction Scale and the General Expectations Scale; no other statistically significant positive correlations were observed. Of interest is the statistically significant negative correlation observed in the summer semester between the Learning Style Scale and the Interaction Scale (r = -.394, p < .01). A hint as to the cause of this finding is found in comparison of questions regarding the significance of the instructor and the role of motivation. During the summer semester, only 32% of the students held that the instructor was the greatest motivation, and 29% of the fall students answered in the same fashion. Yet 94% of the students agreed, strongly agreed, or very strongly agreed that interaction with the instructor was an important part of the course in the summer, and 89% of the fall students reported the same perception. The Learning Style Scale, thus, appears to be reporting a very different set of beliefs than the Interaction Scale: while learning style may be varied—students were split on the issue of instructor-centered versus learner-centered approaches—interaction with the instructor remains the most important factor in E-Learning classes (Andrysyszyn, Cragg, & Humbert, 2001; Swan et al., 2001; Paloff & Pratt, 2003; Wallace, 2005).

Outcomes Models

Question 21 of Survey 3 asked students to identify their expected grade in the class on a 9-point scale (A = 9, B+ = 8, B = 7, C+ = 6, C = 5, D = 4, W = 3, I = 2, F = 1). While we could not match survey to student, we nevertheless knew who had completed Survey 3 and, thus, could obtain their final grade from the class roster. In the summer semester, these students (n = 62) expected a final grade with a mean of 7.23 (SD = 2.13). The actual final grade mean of those students was 7.58 (SD = 1.4). The expected and final grades were correlated (.286, p < .05), although there was no statistically significant difference (t = 42.36(61), p < .01). In the fall semester, students who completed Survey 3 (n = 58) expected a final grade with a mean of 7.74 (SD = 12.9). The actual final grade mean of those students was 7.63 (SD = 1.6). Again, the expected and final grades were correlated (.286, p < .05), although there was no statistically significant difference (t = 36.45(58), p < .001). Using question 21 as a proxy performance-based dependent variable appeared conceptually warranted because of the traditional use of course grades as a performance measure and because of the correlations between expected and final grades.

The regression model, with the Learning Style Scale, the General Expectations Scale (and its two Subscales), and the Interaction Scale serving as the independent variables, was not statistically significant (R² = .12, F(4, 56) = 1.97, p = .112), with only 12% of the variability of the expected grade representing the proportion of variance explained by the interaction of the independent variables. Similar results were seen in a regression analysis of the fall semester (R² = .17, F(5, 51) = 2.092, p = .1). Clearly, there were factors influencing the model that were not explained by the usual criterion variable of the final grade.

A dependent variable was constructed based on Type II learning styles—those expressing a preference for structure, cognitive simplicity, and conformity (Zhang & Sternberg, 2005, 2006). Using these questions as a constructed variable, we performed a regression analysis—with the General Expectations Scale (and its two Subscales), and the Interaction Scale serving as the


independent variables—for the fall semester with the following results: R² = .227, F(4, 52) = 3.816, p < .01. The level of significance was high, with 23% of the variance in the Type II dependent variable represented by the interaction of the independent variables.

This finding is among the most important of this third study. We offer three reasons for the significance of this finding. First, an outcome variable capturing a structured type of learning style provides a cohesive model for the STS framework examined in this study. This is not to say that grades, the universal proxy for performance, do not matter, but it appears as if the construct of structure is more important to the present model's coherence (Felder & Brent, 1996). Second, structure appears to be dependent on factors attributed to the instructor. The syllabus, class materials, assignment content, and grading criteria—all provided by the instructor—are overwhelmingly found by the students to be adhered to, on hand, known, and articulated. As other researchers have found, the role of the instructor is key to E-Learning (Andrysyszyn, Cragg, & Humbert, 2001; Swan et al., 2001; Paloff & Pratt, 2003; Wallace, 2005). Third, in that these are humanities classes under investigation, it might be assumed that an enrolled student might prefer reflective observation and active experimentation. In fact, however, the students might be said to prefer concrete experience and abstract conceptualization. As students in a technological university, their learning styles may—or may not—vary according to subject matter and may be mediated by both the STS framework (incorporating structure) and the humanities instructors (fostering divergent thinking) (Sahin, 2008). Hence, with Zhang & Sternberg (2005) we believe that an orientation toward learning styles should be based not on rigid categories but on individual differences as they are found in context. In sum: learning styles matter and must be considered as they emerge within contexts in which structure, provided by the instructor, rests at the center of the E-Learning environment.

Information Literacy: A Latent Variable

Borsboom, Mellenbergh, & van Heerden (2003) correctly argue that a "realist account of the latent variable is required to maintain a consistent connection between the formal and empirical concept" of the third variable (p. 204). Conceptually following these researchers, we seek to answer the relevant research question: Are there latent variable processes that generate the behaviors we are observing in our E-Learning studies? As the above analyses demonstrate, these behaviors may be summarized as follows: a very diverse group of CS undergraduate students with solid SAT Reasoning Test scores appear to be withdrawing from E-Learning courses at unacceptable rates that increase regardless of high or low GPAs. When sub-groups are examined in writing intensive E-Learning courses, these withdrawal rates increase. These students report no difficulty with their English language proficiency, nor do they report difficulties with the technological aspects of course management systems. While they believe that their own motivation is important, they also report, in seeming contradiction, that the instructor's presence is very important. When outcomes models are developed, course grade does not appear to be as important as learning styles. Yet, although the new media's intersection with writing is frequently described as part of the process of electracy (Ulmer, 2003)—a term describing rhetorical stances of chora, approbation, juxtaposition, commutation, nonlinearity, and imagery (Rice, 2007)—the student learning styles suggest they are anelectrate in their desire for structure, cognitive simplicity, and conformity. Are there individual processes, we must then ask, that exist as third variables involved in E-Learning that, if identified, will help us to frame the often contradictory relationships we are observing? Is there, as Sanford, Lajbcygier, and Spratt have proposed in this book (Chapter 8), a priori background knowledge that students bring, or do not bring, to the E-Learning tasks at hand?
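The outcomes models above rest on a 9-point grade conversion, Pearson product-moment correlations, and paired t statistics. A minimal sketch of those computations follows; the roster data are hypothetical, and `pearson_r` and `paired_t` are illustrative helper names, not the authors' instruments.

```python
import numpy as np

# The chapter's 9-point conversion for question 21 of Survey 3.
GRADE_POINTS = {"A": 9, "B+": 8, "B": 7, "C+": 6, "C": 5,
                "D": 4, "W": 3, "I": 2, "F": 1}

def pearson_r(x, y):
    """Pearson product-moment correlation between two paired score lists."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))

def paired_t(x, y):
    """t statistic for the mean of the paired differences x - y."""
    d = np.asarray(x, float) - np.asarray(y, float)
    return float(d.mean() / (d.std(ddof=1) / np.sqrt(len(d))))

# Hypothetical roster: expected grades from Survey 3 vs. final grades.
expected = [GRADE_POINTS[g] for g in ["A", "B+", "B", "C+", "B", "A", "C"]]
final = [GRADE_POINTS[g] for g in ["B+", "B+", "B", "C", "B+", "A", "C+"]]
print(f"r = {pearson_r(expected, final):.3f}, t = {paired_t(expected, final):.3f}")
```

The correlation measures whether students who expect high grades also earn them, while the paired t statistic tests whether expected and actual grades differ on average; the chapter applies both to the summer and fall cohorts.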


In the fall of 2004, librarians and faculty at NJIT began a formal investigation of the information literacy skills of undergraduate students. Working with specialists in research and information literacy at the university's Robert Van Houten Library, instructors in the department of humanities worked to design an information literacy model based on standards derived from the Association of College & Research Libraries (ACRL). In that the faculty had been assessing the writing skills of students enrolled in general undergraduate requirements (GUR) in humanities since 1996, a traditional portfolio assessment system had emerged that allowed reliable and valid programmatic information to be gained about student writing (Elliot, Briller, & Joshi, 2007). A new portfolio assessment system launched in spring 2005—termed the NJIT Information Literacy Scale (ILS)—shifted the assessment focus from writing to information literacy assessment (Scharf, Elliot, Huey, Briller, & Joshi, 2007). The NJIT ILS was designed to investigate student ability to identify, find, understand, and use information in drafting, revising, and finalizing researched, persuasive writing. While allowing similarly strong validity evidence to be warranted as the original portfolio system, the information literacy scores were lower than anticipated, documenting marginal to unacceptable levels of student information literacy skills. Instructional and library faculty were interested in learning more about the information literacy skills of their students.

In fall 2005, NJIT and the Educational Testing Service (ETS) undertook a collaborative research agreement to investigate more fully—by means of multiple approaches—the variables of information literacy as they were evidenced within student performance at a public comprehensive technological university (Katz et al., 2008). The collaboration would bring together the portfolio-based assessment approach of NJIT with the performance-based, automatically scored iSkills assessment, which was designed to measure information literacy skills as they appear in technological environments (Katz, 2007b). The information and communication technology skills, complementary to the NJIT Literacy Scales, were designed to assess student ability to appropriately use digital technology, communication tools, and/or networks to solve information problems in order to function in an information society. This student ability includes employing information as a tool to research, organize, and communicate information and having a fundamental understanding of the ethical/legal issues surrounding accessing and using information. Key to the ETS assessment is its Internet-delivered, performance-based design. Assessment administration takes approximately 75 minutes, divided into two sections lasting 35 and 40 minutes, respectively. During this time, students respond to 15 interactive tasks comprising a real-world scenario, such as a class or work assignment, that frames the information task. Students solve the tasks in the context of a simulation (for example, e-mail, Web browser, or library database) having the look and feel of typical applications.

Thus, the study embodied two elements central to E-Learning within our specific institutional site: the use of writing as a process to deepen knowledge, and the use of computer-mediated environments to deliver content information. If students could demonstrate proficiency in these information literacy measures, we would be able to rule out the presence of a latent variable in the E-Learning process; that is, students were able to demonstrate, in FTF settings (where traditional instructional means were in place), that they possessed the skills necessary to retrieve and integrate information from divergent sources into their work. As well, if students were able to demonstrate, in the iSkills simulation, that they could research, organize, and communicate such information in a timed setting, then the essential skills necessary for success in E-Learning could be demonstrated.
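The outcome models reported earlier are summarized by R-squared and an overall F statistic, both standard ordinary least-squares quantities. A minimal sketch of how they are computed under the usual OLS assumptions follows; the simulated predictors stand in for the scale scores and do not reproduce the chapter's data.

```python
import numpy as np

def ols_r2_f(X, y):
    """R-squared and overall F statistic for OLS of y on predictors X (n x p)."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    n, p = X.shape
    Xd = np.column_stack([np.ones(n), X])        # prepend an intercept column
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot
    f = (r2 / p) / ((1.0 - r2) / (n - p - 1))    # distributed F(p, n - p - 1)
    return float(r2), float(f)

# Illustrative only: 57 simulated respondents, 4 scale scores predicting
# a Type II learning-style outcome.
rng = np.random.default_rng(1)
X = rng.normal(size=(57, 4))
y = 0.5 * X[:, 0] + rng.normal(size=57)
r2, f = ols_r2_f(X, y)
print(f"R^2 = {r2:.3f}, F(4, 52) = {f:.3f}")
```

A model is then judged by whether F exceeds the critical value for its degrees of freedom, which is how the chapter distinguishes the non-significant grade model (p = .112) from the significant Type II model (p < .01).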


While first-year students were assessed, these students do not take asynchronously offered E-Learning courses. For the present purposes, information gained from the sophomores, juniors, and seniors is most relevant. A simple random sample of upper-division students was created across each section of two representative humanities writing classes: cultural history and technical writing. These students, along with students from the senior seminar who were selected as described below, were identified for portfolio submission. The senior seminar students consisted of a census of all whose transcripts revealed that they had never taken any class outside of NJIT; hence, these students, while small in number, represented a meaningful population of NJIT students. Overall, students were found in cultural history (n = 95), as well as in technical writing (n = 48) and the senior seminars (n = 33). The sample resulting from the sampling plan closely matched the NJIT student population. Students were tested in a proctored computer lab on the iSkills assessment in late March 2006, and in May 2006 portfolios of targeted students were evaluated according to the NJIT ILS. Table 3 presents the results of the ETS / NJIT collaborative research.

The scores for students on the iSkills Advanced Test enrolled in cultural history (M = 548.5, SD = 36.9), technical writing (M = 547.2, SD = 39.5), and the senior seminars (M = 568.3, SD = 28.2) were, at best, average. With a score range from 400 to 700, the students in the study performed at a level similar to that reported on a national level. Of 3,571 students tested on the Advanced iSkills test between January 2006 and May 2007, a score of 555 (SD = 32) was reported as the average. That is, students were only able to answer 55% of the questions correctly. Nationally, across several tasks, few test takers could adapt material for a new audience. In a web search task, only 40% entered multiple search terms to narrow the results. When asked to organize a large amount of information efficiently, more than half the students failed to sort the information to clarify related material. When searching a large database, only 50% of test takers used a strategy that minimized irrelevant results (Katz, 2007a).

Regarding the NJIT scale, it is clear that much remains to be done in helping students integrate sources meaningfully into their essays. While students in cultural history could achieve basic documentation forms—almost universally those advocated by the Modern Language Association (Gibaldi, 2003)—they had problems demonstrating that they had provided sources beyond those noted in the syllabus (evidence of research), identifying sources germane to their topic (integration), and achieving an overall sense of a researched essay (holistic score). In that a score below 7 is taken at NJIT to be a demonstration of weak performance, it appeared that only the citation variable—a mechanistic adherence to

Table 3. Information literacy scores: Advanced level ETS iSkills assessment and NJIT Information Literacy Scale (means and standard deviations)

                              Cultural History   Technical Writing   Senior Seminar
                              (n = 95)           (n = 48)            (n = 33)
ETS Advanced iSkills Score    548.5 (36.9)       547.2 (39.5)        568.3 (28.2)
NJIT Citation                 7.8 (2.8)          5.3 (2.7)           8.3 (2.1)
NJIT Evidence of Research     6.9 (2.8)          5.8 (2.49)          6.8 (2.5)
NJIT Appropriateness          6.7 (2.7)          n/a                 7.2 (2.3)
NJIT Integration              6.5 (2.7)          n/a                 6.7 (2.3)
NJIT Holistic Score           6.8 (2.4)          n/a                 7.0 (2.2)


citation format—could be taken as a measure of competency.

The technical writing students' abilities to cite sources and provide evidence of independent research were profoundly weak. Although only these two variables were assessed among other variables important to technical writing teachers, it is distressing that the lowest performance on the NJIT scales was identified in this group of students. Less distressing, however, were the scores of the senior seminar students. Scharf et al. (2007), investigating a similar group of students at NJIT (n = 100), found ILS scores below the cut score of 7 on each of the variables that make up the component ILS variable used in the current study. One year later, the citation, appropriateness, and holistic scores had met the competency score of 7.

Surely, the skills addressed by both the ETS and NJIT measures are integral to successful E-Learning. If, as the ETS research confirms, the vast majority of students are performing poorly on the iSkills measure, then we must wonder about the impact of this lack of skill on E-Learning students. As well, if students in writing-intensive classes are not able to cite sources, extend searches beyond a syllabus, identify appropriate sources, and integrate information into their work—processes concurrent with drafting, revising, and finalizing researched, persuasive writing—then these students will surely either fail in E-Learning classes requiring such skills, or they will withdraw at high levels. It may indeed be the case that the essential E-Learning skills are not there in the first place.

Discussion

As Markham and Hurst have reminded us in Chapter I, new conceptualizations of the process of gathering validity evidence are now emerging (Kane, 2006; Dialogue on Validity, 2007). The present shift from the term "validity" (a noun suggesting a concretized fact) to "validate" (a transitive verb suggesting recurrent process) has been epitomized in Michael T. Kane's conclusion about the nature of evidence in assessment contexts: "Validation has a contingent character; the evidence required to justify a proposed interpretation or use depends on the proposed interpretation or use" (2006, p. 60). With validity now viewed as a series of integrated judgments (Messick, 1989; AERA, APA, & NCME, 1999; Brennan, 2006), context-specific arguments (Mislevy & Brennan, 2006; Mislevy, 2007), and procedurally-dependent observations resembling science itself (Embretson, 2007), a new world of evaluation is before us.

The three kinds of studies performed in this study—archival, process, and outcomes—serve to help us better understand the complexities involved in the assessment of E-Learning. Indeed, any one of the methods may not, in itself, prove sufficient: trend data, student response, and student performance (Lane & Stone, 2006) should each be taken into account when effectiveness studies are to be designed, conducted, and reported (Watkins & Corry, 2005).

As our three studies demonstrate, there are complex variables, often contradictory, in play within a specific institutional setting. It is, therefore, difficult indeed to come to conclusions about what must be done regarding the success of any system. The kinds of tasks required for general E-Learning success at NJIT may be, for instance, similar to those found in the computer-mediated iSkills assessment. Yet those skills may not be sufficient to ensure success in specific classes, such as world literature, that are writing intensive. If students in FTF classes can demonstrate, at the very end of a class, only marginal skills, then how can similar students be expected to demonstrate those skills in the already complex asynchronous environment?

To end with a case in point regarding the complexities of assessing E-Learning, consideration of the presence of cognitive complexity is in order. Writing assignments are socio-cognitive activities that are congruent with many of the best practice values of E-Learning. As they write, students participate in discussions, draft, and submit assignments in which referential (comprising scientific, informative, and exploratory discourse) and persuasive writing (comprising the systematic application of logical models) are both required (Kinneavy, 1971). At their best, writing tasks are informed by traditional and current directions in writing theory (MacArthur, Graham, & Fitzgerald, 2006), demonstrate both the ways that writing shapes and is shaped by cognition (Bazerman, 2008), and the state-of-the-art methods by which writing is assessed (Elliot, Briller, & Joshi, 2007). As well, writing serves to lessen transactional distance. As proposed by Moore (1973; Moore, 2007), activities yielding high transactional distance, such as reading a textbook, afford little dialogue and less communication; activities yielding low transactional distance, such as independent studies, yield highly individualized communication. In the writing-enhanced E-Learning environment, all transactions are writing-intensive, thus allowing for the desired low transactional distance. Hence, student writing is seen as the vehicle by which the STS framework is mediated (Bolter & Grusin, 2000) into an individualistic, student-centered context allowing low transactional distance—and, thus, greater communication. In that student writing plays such a central role in class design, the concept of lessening transactional distance through writing tasks should be of interest to all E-Learning instructors.

Specifically, a student in a writing-intensive E-Learning class would likely be involved in the following potential case: planning a researched essay on, for example, the measurement of human intelligence, searching databases such as Academic Search Premier and JSTOR to find the very best articles, reading and synthesizing those identified sources, documenting them in a standardized format, drafting and submitting the work for review, and then making final revisions based on that review. If, however, the recent results of the iSkills assessment are taken into account—less than half the tested students could enter multiple search terms to narrow the results, more than half the students failed to sort the information to clarify related material, and only half of the test takers used a strategy that minimized irrelevant results—then the essential skills are lacking in the first place. Never mind about grammar and mechanics, about essay organization and writer's aim, about documentation and editing; the assignment breaks down at the initial point of researched writing: the database search itself.

Future Trends

Although the three studies presented in this chapter offer evidence obtained from recent experiences with E-Learning at NJIT, it is clear to us that the resulting implications affect all contemporary learning, regardless of its form. The rapid integration of course management tools and asynchronous instruction, originally deemed exclusive to EL, into FTF teaching has blurred the distinction between the two modes of learning. In fact, the amalgamation of the two communities is becoming the norm. Furthermore, ubiquitous technologies, both wearable and wireless, promptly embraced by this generation of learners, are assertively shifting campus interaction from exclusively physical locations to their digital alternatives. It is not difficult to imagine the emergent applications of this evolving "smart campus" to education (Jones & Grandhi, 2005). Thus, protesting EL and issuing calls for limiting its enrollment, which were unsuccessful in the past, are even more unrealistic now. Instead, technological and pedagogical policies need to converge in order to enhance the quality of student learning and ensure that the literacy and skills "needed to fully participate in an increasingly competitive work environment" are learned


(Kirsch, Braun, Yamamoto, & Sum, 2007), whether in physical or virtual settings.

Conclusion

Regardless of SAT score or GPA, students simply lack the fundamental skills to deal with the new media. Alone, this fact helps us come to terms with the stunning withdrawal rates and the conflicted survey information presented in this chapter. Without the critical skills of identifying, finding, understanding, and using information—as well as rapidly and effectively researching and organizing information responsibly—the student will certainly be lost upon point of entry, in class and in the information workplace. Our attempts to validate our processes may thus achieve methodological soundness but fail to provide directions to deal with this emerging context. As our notions of validity are presently being re-imagined to acknowledge process, context, and contingency, it is perhaps helpful to imagine E-Learning itself as an emerging construct, one that mirrors the realities of students' environments far more than it does the protocols of brick and mortar universities. In this view, there is much yet to learn about this critical socio-technical system and the contexts in which it operates.

REFERENCES

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological testing. Washington, DC: American Psychological Association.

Andrysyszyn, M. A., Cragg, B., & Humbert, J. (2001). Nurse practitioner preferences for distance education methods related to learning style, course content and achievement. Journal of Nursing Education, 40, 163-170.

Aragon, S. R., Johnson, S. D., & Shaik, N. (2002). The influence of learning style preferences on student success in online versus face-to-face environments. American Journal of Distance Education, 16, 227-244.

Bazerman, C. (Ed.). (2008). Handbook of research on writing: History, society, school, individual, text. New York: Erlbaum.

Bolter, J. D., & Grusin, R. (2000). Remediation: Understanding the new media. Cambridge, MA: MIT Press.

Borsboom, D., Mellenbergh, G. J., & van Heerden, J. (2003). The theoretical status of latent variables. Psychological Review, 110, 203-219.

Brennan, R. L. (2006). Perspectives on the evolution and future of educational measurement. In R. L. Brennan (Ed.), Educational measurement (4th edition, pp. 1-16). Westport, CT: Praeger.

College Board. (2008a). SAT Reasoning Test. Retrieved October 4, 2008, from http://www.collegeboard.com/student/testing/sat/about/SATI.html

College Board. (2008b). Score range. Retrieved October 4, 2008, from http://www.collegeboard.com/student/testing/sat/scores/understanding/scorerange.html

Cereijo, M. V. P. (2006). Attitude as predictor of success in online training. International Journal on E-Learning, 5, 623-639.

Dabbagh, N., & Kitsantas, A. (2005). The role of web-based pedagogical tools in supporting student self-regulation in distributed learning environments. Instructional Science, 25, 24-37.

DeTure, M. (2004). Cognitive style and self-efficacy: Predicting student success in online distance education. American Journal of Distance Education, 18, 21-38.

Dialogue on Validity [Special issue]. (2007). Educational Researcher, 36(8).


Dillon, R. S., & Stines, R. W. (1996). A phenomenological study of faculty-student caring interactions. Journal of Nursing Education, 35, 113-118.

Dunham, T. C., & Davidson, M. L. (1991). Effect of scale anchors on student rating of instructors. Applied Measurement in Education, 4, 23-35.

Elliot, N., Briller, V., & Joshi, K. (2007). Portfolio assessment: Quantification and community. Journal of Writing Assessment, 3, 5-29.

Elliot, N., Friedman, R. S., & Briller, V. (2005, June-July). Irony and asynchronicity: Interpreting withdrawal rates in E-Learning courses. Paper presented at the meeting of the Ed-Media World Conference on Educational Multimedia, Hypermedia, and Telecommunications, Montreal, Canada.

Elwert, B., & Hitch, L. (Eds.). (2002). Motivating and retaining adult learners online. Essex Junction, VT: GetEducated.com.

Embretson, S. E. (2007). Construct validity: A universal validity system or just another test evaluation procedure? Educational Researcher, 36, 449-455.

Felder, R. M., & Brent, R. (1996). Navigating the bumpy road to student-centered instruction. College Teaching, 44, 43-47.

Flower, L. (1994). The construction of negotiated meaning: A social cognitive theory of writing. Carbondale: Southern Illinois University Press.

Foster, L., Bower, B. L., & Watson, L. W. (Eds.). (2002). Western Cooperative for Educational Telecommunications: Best practices for electronically offered degree and certificate programs. ASHE Reader: Distance education: Teaching and learning in higher education. Boston: Pearson.

Friedman, R., Elliot, N., & Haggerty, B. (in press). E-Learning in undergraduate humanities classes: Unpacking the variables. International Journal on E-Learning.

Frey, A., Faul, A., & Yankelov, P. (2003). Student perceptions of web-assisted teaching strategies. Journal of Social Work Education, 39, 443-457.

Gibaldi, J. (2003). MLA handbook for writers of research papers (6th ed.). New York: Modern Language Association.

Hiltz, S. R., & Turoff, M. (1993). The network nation: Human communication via computer (rev. ed.). Cambridge, MA: MIT Press.

Institute for Higher Education Policy. (1999). A review of contemporary research on the effectiveness of distance learning in higher education. Washington, DC: Institute for Higher Education Policy.

Jamison, T. M. (2003). Ebb from the web: Using motivational systems theory to predict student completion of asynchronous web-based distance education courses. Unpublished doctoral dissertation, George Mason University.

Jones, Q., & Grandhi, S. (2005). P3 systems: Putting the place back into social networks. IEEE Internet Computing, 9, 38-46.

Katz, I. R. (2007a, June). Digital natives but not information fluent: Results from ETS's iSkills assessment. Presented at the Alliance for a Media Literate America Research Summit, St. Louis, MO.

Katz, I. R. (2007b). Testing information literacy in digital environments: ETS's iSkills assessment. Information Technology and Libraries, 26(3), 3-12.

Katz, I. R., Elliot, N., Attali, Y., Scharf, D., Powers, D., Huey, H., Joshi, K., & Briller, V. (2008). The assessment of information literacy: A case study (ETS Research Rep. No. 08-33). Princeton, NJ: Educational Testing Service.


Kane, M. T. (2006). Validation. In R. L. Brennan (Ed.), Educational measurement (4th edition, pp. 17-64). Westport, CT: American Council on Education/Praeger.

Kinneavy, J. L. (1971). A theory of discourse. New York: Norton.

Kirsch, I., Braun, H., Yamamoto, K., & Sum, A. (2007). America's perfect storm: Three forces changing our nation's future. Princeton, NJ: Educational Testing Service.

Lane, S., & Stone, C. A. (2006). Performance assessment. In R. L. Brennan (Ed.), Educational measurement (4th edition, pp. 387-431). Westport, CT: American Council on Education/Praeger.

Leners, D. W., & Sitzman, K. (2006). Graduate student perceptions: Feeling the passion of caring online. Nursing Education Perspectives, 27, 315-319.

Ley, K., & Young, D. B. (2001). Instructional principles for self-regulation. Educational Technology Research and Development, 49, 93-103.

Lorenzo, J., & Moore, J. (2002). The Sloan Consortium report to the nation: Five pillars of quality online education. Retrieved from http://www.sloan-c.org/effectivepractices/pillarreport1.pdf

MacArthur, C. A., Graham, S., & Fitzgerald, J. (2006). Handbook of writing research. New York: Guilford Press.

Memon, A., Shih, L., & Thinger, B. (2006, June). Development and delivery of nuclear engineering online courses: The Excelsior College experience. Paper presented at the meeting of the ASEE Annual Conference & Exposition: Excellence in Education, Chicago, IL.

Messick, S. (1989). Validity. In R. L. Linn (Ed.), Educational measurement (3rd edition, pp. 13-103). New York: American Council on Education/Macmillan.

Millwood, R., & Terrell, I. (2005). Overview: New technology, learning and assessment in higher education. Innovations in Education & Teaching International, 42, 195-204.

Mislevy, R. J., & Brennan, R. L. (2006). Cognitive psychology and educational assessment. In R. L. Brennan (Ed.), Educational measurement (4th edition, pp. 257-305). Westport, CT: Praeger.

Mislevy, R. J. (2007). Validity by design. Educational Researcher, 36, 463-469.

Mock, K. (2003). The development of a CS0 course for distance delivery. The Journal of Computing Sciences in Colleges, 19(2), 30-38.

Moore, M. G. (1973). Toward a theory of independent teaching and learning. The Journal of Higher Education, 44, 661-679.

Moore, M. G. (2007). The theory of transactional distance. In M. G. Moore (Ed.), Handbook of distance education (2nd edition, pp. 89-105). New Jersey: Erlbaum.

National Center for Education Statistics. (2003). Distance learning in higher education institutions: 2000-2001 (Rep. No. NCES 2003-017). Washington, DC: National Center for Education Statistics.

Palloff, R., & Pratt, K. (2003). The virtual student: A profile and guide to working with online learners. San Francisco: Jossey-Bass.

Rice, J. (2007). The rhetoric of cool: Composition studies and the new media. Carbondale: Southern Illinois University Press.

Sahin, S. (2008). The relationship between student characteristics, including learning styles, and their perceptions and satisfactions in web-based courses in higher education. Turkish Journal of Distance Education, 9, 123-138.

Scharf, D., Elliot, N., Huey, H. A., Briller, V., & Joshi, K. (2007). Direct assessment of information literacy using writing portfolios. Journal of Academic Librarianship, 33, 462-477.

Shavelson, R. J., & Huang, L. (2003, January/February). Responding responsibly to the frenzy to assess learning in higher education. Change, 11-19.

Simonson, C. L. S. (1996). Teaching caring to nursing students. Journal of Nursing Education, 35, 100-104.

Sloan-C. (2008). A consortium of institutions and organizations committed to quality online education. Retrieved October 4, 2008, from http://www.sloan-c.org/

Swan, K., Shea, P., Fredericksen, E., Pickett, F., Pelz, W., & Maher, G. (2001). Building knowledge building communities: Consistency, contact and communication in the virtual classroom. Journal of Educational Computing Research, 23(4), 359-383.

Ulmer, G. L. (2003). Internet invention: From literacy to electracy. New York: Longman.

Wallace, R. M. (2005). Online learning in higher education: A review of research on interactions among teachers and students. Education, Communication, and Information, 3, 241-280.

Watkins, R., & Corry, M. (2005). E-Learning companion: A student's guide to online success. Boston: Houghton-Mifflin.

Whitworth, B. (2006). Social-technical systems. In C. Ghaoui (Ed.), Encyclopedia of human computer interaction (pp. 533-541). London: Idea Group Reference.

Whitworth, B., & Friedman, R. (2008). The challenge of modern academic knowledge exchange. SIGITE Newsletter, 5(2), 4-10.

Williams, P. E., & Hellman, C. M. (2004). Differences in self-regulation for online learning between first- and second-generation college students. Research in Higher Education, 45, 71-82.

Winner, L. (1980). Do artifacts have politics? Daedalus, 109, 121-136.

Zhang, L., & Sternberg, R. J. (2005). A threefold model of intellectual styles. Educational Psychology Review, 17(1), 1-53.

Zhang, L., & Sternberg, R. J. (2006). The nature of intellectual styles. New Jersey: Erlbaum.

Zimmerman, M. C. (2002). Academic self-regulation explains persistence and attrition in web-based courses: A grounded theory. Unpublished doctoral dissertation, Northern Arizona University, Flagstaff.
