
E-learning Systems Success Model: The Case of Ethiopian Higher Education Institutions

Yonas Hagos Tesfaselassie

A Thesis Submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy in Information Technology (Information Systems)

School of Graduate Studies

IT PhD Program

Addis Ababa University

Addis Ababa, Ethiopia

March, 2019

Declaration of Originality

I certify that, except where proper acknowledgement has been made, the work reported in this thesis is that of the author alone and has not been submitted for any other academic award at any other university.

Author’s Signature ___________________________ On __________________

Acknowledgement

“If I have seen a little further, it is because I am standing on the shoulders of giants.”

Isaac Newton

Completion of this PhD thesis was possible only with the unreserved support of several people.
First and foremost, I am extremely grateful to my principal advisor, Professor Monica Garfield.
She generously shared her immense knowledge with me and gave consistent support, dedication,
and motivation throughout the research; words cannot be found to express all the appreciation
and heartfelt thanks she deserves. Beyond her consistent advice in getting this dissertation done,
the warm reception she and her great family gave me at her home during my research visit to
Bentley University, Waltham, Massachusetts (USA) is something I will never forget, and I extend
my deepest gratitude to her and all the members of her family. I could not have imagined having
a better advisor and mentor for my PhD study.

My special thanks also go to my co-advisor, Dr. Salehu Anteneh. His unreserved support,
patience, guidance, and friendly and invaluable scholarly suggestions made this research
successful at last.

I would also like to thank Dr. Temtim Assefa, a member of the dissertation committee, for his
valuable and constructive suggestions for improving the quality of the thesis.

I am also grateful to Dr. Tibebe Beshah, the IS track coordinator, for his consistent motivation
and administrative support in facilitating my research visit to Bentley University.

I am also thankful to Dr. Workshet Lamenew and Dr. Getachew Hailemariam for their
constructive feedback, which gave me great energy and devotion in completing this thesis.

I would like to express my great appreciation to all of this study’s respondents, namely the ICT
directors, e-learning experts, ICT staff, and students of Addis Ababa University, Adama Science
and Technology University, Arba Minch University, Bahir Dar University, Ethiopian Civil Service
University, Haramaya University, Jimma University, Mekelle University, and PESC Information
Systems College. Without their support, this dissertation would not have been possible.

My special thanks also go to all the academic and administrative staff of Bentley University for
their unreserved support in making the research facilities accessible, arranging statistics classes,
and providing full access to up-to-date statistical analysis software and the University’s digital
library, all of which ultimately enhanced the quality of this dissertation. I am equally grateful to
Addis Ababa University for sponsoring me with all the financial support required to go to Bentley
University and gain such great international research exposure.

My sincere, heartfelt thanks go to my mother. She gave birth to me, raised me as a single parent,
and was always with me in all the difficult times. My brothers and my lovely sister stood by me
throughout the PhD process. I wish my father could have seen my success; may his soul rest in
peace.

Yonas Hagos

March, 2019

List of Acronyms
AAU Addis Ababa University

*ADC Annual Document

AMU Arba Minch University

ASTU Adama Science and Technology University

AVE Average Variance Extracted

BDU Bahir Dar University

D&M DeLone & McLean

ECSU Ethiopian Civil Service University

*EDC E-learning Document

*EEFG E-learning Experts Focus Group

EHEIs Ethiopian Higher Education Institutions

*EQ E-learning Content Quality

*ESQ E-learning Systems Quality

*ESO E-learning Systems Outcome

ESS E-learning Systems Success

HEI Higher Education Institution

HRU Haramaya University

ICT Information Communication Technology

*ICTD Information Communication Technology Director

*IEF Instructors’ E-learning Factor

*INSPQ Institutional Support Quality

IS Information Systems

*ISFG Instructors’ Focus Group

ISS Information Systems Success model

IT Information Technology

JU Jimma University

*LEF Learners’ E-learning Factor

LMS Learning Management Systems

MoE Ministry of Education

MU Mekelle University

PLS Partial Least Squares

SEM Structural Equation Modeling

*SFG Students’ Focus Group

*SU Systems Usage

*US User Satisfaction

*These abbreviations were created by the researcher and are used in the qualitative and quantitative analysis procedures (detailed
information is provided in Appendices 5, 6a, 6b, 7a, and 7b).

Abstract
The growth of the internet and advances in information technology are playing a vital role in
almost every organization. E-learning, one such advance, is transforming the educational delivery
modalities of higher education institutions. Realizing the potential benefits of e-learning systems,
higher education institutions are investing large amounts of money to implement them sustainably.
However, failures exist. Furthermore, owing to contextual factors, there is a lack of consensus on
the determinant factors of e-learning systems success that would apply across all settings.
Accordingly, this study set out to identify the factors that determine e-learning systems success and
to place them in a holistic model for the context of Ethiopian Higher Education Institutions (EHEIs)
that would enable them to implement the system sustainably. Eisenhardt’s theory-building technique
was adopted to develop the study model, and several types of e-learning stakeholder groups from
nine EHEIs participated in the study, selected through theoretical sampling. The study used mixed
research methods in two consecutive phases. In the first phase, a qualitative method was used to
explore the factors that determine e-learning systems success and to develop a model; multiple data
collection methods, including focus group discussions with students, instructors, and e-learning
experts, in-depth personal interviews with ICT directors, document reviews, and observation, were
used to triangulate the findings. In the second phase, a quantitative method was used to test the
model at two levels: students and instructors. A total of 529 participants were involved across the
two phases. NVivo software (Pro version 11) was used to process the qualitative data, while
structural equation modeling with SmartPLS was employed to process the quantitative data and test
the model. The study identified eight factors that determine e-learning systems success: institutional
support quality, e-learning systems quality, e-learning content quality, learners’ e-learning factor,
instructors’ e-learning factor, systems usage, user satisfaction, and e-learning systems outcome.
These factors were placed in a model, and the study findings, together with the model validation
test results, show that the model has predictive power for measuring e-learning systems success.
Finally, theoretical and practical contributions are put forward for researchers and practitioners in
this research domain.

Keywords: Information Systems Success (ISS) model, Ethiopian higher education institutions, e-learning stakeholders, e-learning systems, e-learning systems success.

DEDICATION

This dissertation work is dedicated to:

The deep-rooted love of Jesus Christ and the timely comfort of the
Holy Spirit

Table of Contents
Acknowledgement ..................................................................................................................... i

List of Acronyms ..................................................................................................................... iii

Abstract ..................................................................................................................................... v

Table of Contents.................................................................................................................... vii

List of Tables ........................................................................................................................... xv

List of Figures ....................................................................................................................... xvii

Chapter One .............................................................................................................................. 1

Introduction ................................................................................................................................ 1

1.1 Background of the Study ....................................................................................................... 1

1.2. Motivation ............................................................................................................................ 2

1.3. Statement of the Problem ..................................................................................................... 4

1.4. Study Objectives .................................................................................................................. 7

1.4.1 Major Objective .............................................................................................................. 7

1.4.2 Specific Objectives ......................................................................................................... 7

1.5. Definition of Key Terms ...................................................................................................... 8

1.6. Significance of the Study ..................................................................................................... 9

1.7. Scope of the Study................................................................................................................ 9

1.8. Ethical Considerations........................................................................................................ 10

1.9. Organization of the Thesis ................................................................................................. 10

Chapter Two ........................................................................................................................... 12

Literature Review ..................................................................................................................... 12

2.1 Introduction ......................................................................................................................... 12

2.2 Overview of E-learning ....................................................................................................... 12

2.3 Learning Theories and Their Implications for E-Learning ................................................. 15

2.3.1 Behaviorism .................................................................................................................. 16

2.3.2 Cognitivism .................................................................................................................. 17

2.3.3 Constructivism .............................................................................................................. 18

2.4 Review of Related Literature in E-learning Systems with the Case of EHEIs ................... 19

2.4.1 Country Profile ............................................................................................................. 19

2.4.2 Application of E-learning in EHEIs ............................................................................. 21

2.5. Technology/ICT Mediated Educational/Learning Models and Concepts .......................... 23

2.5.1. Technological Pedagogical Content Knowledge (TPACK) ........................................ 24

2.5.2 Computer Self-Efficacy Concept.................................................................................. 28

2.5.3 Badrul Khan’s Framework for Blended Learning ........................................ 30

2.6 Information Systems Success Models ................................................................................. 31

2.6.1 Empirical Studies Validating the DeLone and McLean Models .................................. 34

2.7. E-learning Systems Success Models .................................................................................. 37

2.8. The Study’s Seed Constructs.............................................................................. 45

2.8.1. System Quality ............................................................................................................ 46

2.8.2. Information Quality ..................................................................................................... 46

2.8.3. Service Quality ............................................................................................................ 47

2.8.4. User Satisfaction .......................................................................................................... 48

2.8.5. System Use .................................................................................................................. 48

2.8.6. Net benefits .................................................................................................................. 49

2.9. Chapter Summary............................................................................................................... 51

Chapter Three......................................................................................................................... 53

Research Methodology ............................................................................................................. 53

3.1 Introduction ......................................................................................................................... 53

3.2 Research Design .................................................................................................................. 53

3.2.1 Philosophical Position .................................................................................................. 54

3.2.2 Research Approach ....................................................................................................... 56

3.2.3 Research Strategies ....................................................................................................... 56

3.2.3.1 Experiment................................................................................................................. 57

3.2.3.2 Grounded Theory....................................................................................................... 57

3.2.3.3 Ethnography............................................................................................................... 57

3.2.3.4. Archival Research..................................................................................................... 57

3.2.3.5. Action Research........................................................................................................ 58

3.2.3.6. Case Study ................................................................................................................ 58

3.2.3.7. Survey ....................................................................................................................... 58

3.2.4 Research Choices.......................................................................................................... 59

3.2.5 Time Horizons .............................................................................................................. 59

3.3. Model Exploration and Extension Methods ....................................................................... 60

3.3.1 Getting Started .............................................................................................................. 63

3.3.2 Selecting Cases ............................................................................................................. 63

3.3.2.1 Addis Ababa University (AAU) ................................................................................ 65

3.3.2.2 Adama Science and Technology University (ASTU) ............................................... 65

3.3.2.3 Arba Minch University (AMU) ................................................................................. 66

3.3.2.4 Bahir Dar University (BDU) ..................................................................................... 66

3.3.2.5 Ethiopian Civil Service University (ECSU) .............................................................. 67

3.3.2.6 Haramaya University (HRU) ..................................................................................... 69

3.3.2.7 Jimma University (JU)............................................................................................... 69

3.3.2.8 Mekelle University (MU) .......................................................................................... 70

3.3.2.9 PESC Information Systems College .......................................................................... 71

3.3.3 Sampling method .......................................................................................................... 71

3.3.4 Crafting Instruments & Protocols, and Entering the Field ........................................... 72

3.3.5 Data Analysis Technique .............................................................................................. 77

3.3.6 Shaping Hypothesis ...................................................................................................... 79

3.3.7 Enfolding Literature and Reaching Closure ................................................................. 80

3.3.8 Model Testing ............................................................................................................... 80

3.3.8.1 Model Testing Quantitative Data Analysis tools .................................................. 85

3.4 Chapter Summary................................................................................................................ 90

Chapter Four .......................................................................................................................... 91

Qualitative Data Analysis and Findings ................................................................................... 91

4.1 Introduction ......................................................................................................................... 91

4.2. Data Presentation, Analysis and Findings .......................................................................... 91

4.2.1. Institutional Support Quality ....................................................................................... 92

4.2.1.1. Technical Support ................................................................................................ 95

4.2.1.2. E-learning Infrastructure ...................................................................................... 98

4.2.1.3. E-learning Policy ............................................................................................... 100

4.2.1.4. Motivation .......................................................................................................... 102

4.2.1.5. Computer Training ............................................................................................. 104

4.2.1.6. Quality Assurance Mechanisms ......................................................................... 107

4.2.1.7. Intellectual Property Protection ......................................................................... 109

4.2.2. User Satisfaction ........................................................................................................ 110

4.2.2.1. Enjoyable Experience ........................................................................................ 112

4.2.2.2. Overall Satisfaction ............................................................................................ 114

4.2.3. Systems Usage ........................................................................................................... 117

4.2.3.1. Frequency of Use and Dependability................................................................. 119

4.2.4. E-learning Systems Quality ....................................................................................... 123

4.2.4.1. User Friendly ..................................................................................................... 125

4.2.4.2. Systems Useful Feature...................................................................................... 126

4.2.4.3. Ease of Use ........................................................................................................ 128

4.2.4.4. Systems Availability .......................................................................................... 130

4.2.4.5. Systems Response .............................................................................................. 131

4.2.4.6. Secure................................................................................................................. 132

4.2.5. E-learning Content Quality........................................................................................ 133

4.2.5.1. Relevant Content, Up-to-Date Content and Course Quality ............................. 135

4.2.6. E-learning Systems Outcome .................................................................................... 137

4.2.6.1. Enhanced Academic or Teaching Performance, Time Savings and Cost Savings ......... 139

4.2.7. Learners’ E-learning Factor ....................................................................... 142

4.2.7.1. Computer Self-Efficacy ..................................................................................... 144

4.2.7.2. Attitude Towards E-learning............................................................................. 146

4.2.8. Instructors’ E-learning Factor .................................................................... 147

4.2.8.1. Creating E-learning Environment ...................................................................... 149

4.2.8.2. Computer Self-Efficacy ..................................................................................... 151

4.2.8.3. Attitude towards E-learning ............................................................................... 152

4.2.8.4. Timely Response ................................................................................................ 153

4.3 Shaping the Study’s Hypothesis ........................................................................ 154

4.3.1 Institutional Support Quality ...................................................................................... 154

4.3.2. Learners’ E-learning Factor ....................................................................... 155

4.3.3. E-learning Systems Quality ....................................................................................... 156

4.3.4. E-learning Content Quality........................................................................................ 157

4.3.5. Instructors’ E-learning Factor .................................................................... 157

4.3.6. User Satisfaction ........................................................................................................ 158

4.3.7. Systems Usage ........................................................................................................... 158

4.3.8. E-learning Systems Outcome .................................................................................... 159

4.4. The Study’s Proposed Model ........................................................................... 159

4.5. Chapter Summary............................................................................................................. 162

Chapter Five ......................................................................................................................... 163

Model Testing......................................................................................................................... 163

5.1 Introduction ....................................................................................................................... 163

5.2 Testing the Proposed Model over Students’ Sample ........................................ 163

5.2.1 Descriptive Statistics of Students’ Sample ................................................. 163

5.2.1.1 Data Outliers and Doubtful Response Patterns ................................................... 163

5.2.1.2 Missing Values.................................................................................................... 164

5.2.1.3 Data Distribution................................................................................................. 164

5.2.1.4. Sample Size and Distribution............................................................................. 164

5.2.1.5 Respondents’ Demography ................................................................. 165

5.2.2 Testing the Model and its Hypothesis ........................................................................ 166

5.2.2.1 Validating the Measurement Model.................................................................... 166

5.2.2.2 Validating the Structural Model.......................................................................... 168

5.2.2.3 Discussions of Structural Model Tests and Hypothesis Testing ......................... 171

5.2.2.4 Effect Size and Predictive Relevance ................................................................. 174

5.3 Testing the Proposed Model over Instructors Samples ..................................................... 175

5.3.1 Descriptive Statistics of Instructors’ Sample.............................................. 175

5.3.1.1 Data Outliers and Doubtful Response Patterns ................................................... 175

5.3.1.2 Missing Values.................................................................................................... 175

5.3.1.3 Data Distribution................................................................................................. 175

5.3.1.4 Sample Size and Distribution.............................................................................. 175

5.3.1.5 Respondents’ Demography ................................................................. 176

5.3.2 Testing the Model and its Hypotheses ........................................................................ 177

5.3.2.1 Validating the Measurement Model.................................................................... 177

5.3.2.2 Validating the Structural Model.......................................................................... 179

5.3.2.3 Discussions of Structural Model Tests and Hypothesis Testing ......................... 182

5.3.2.4 Effect Size and Predictive Relevance ................................................................. 185

5.4 Chapter Summary.............................................................................................................. 185

Chapter Six ........................................................................................................................... 187

Discussion of Results ............................................................................................................. 187

6.1 Introduction ....................................................................................................................... 187

6.2 E-learning Systems Success Model................................................................................... 187

6.2.1 Institutional Support Quality ...................................................................................... 188

6.2.2 Learners’ E-Learning Factor ...................................................................... 191

6.2.3 E-Learning Systems Quality ....................................................................................... 192

6.2.4 E-learning Content Quality......................................................................................... 194

6.2.5 Instructors’ E-learning Factor ..................................................................... 196

6.2.6 User Satisfaction ......................................................................................................... 198

6.2.7 Systems Usage ............................................................................................................ 200

6.2.8 E-learning Systems Outcome ..................................................................................... 202

6.3 Chapter Summary.............................................................................................................. 206

Chapter Seven ....................................................................................................................... 207

Conclusion & Implications..................................................................................................... 207

7.1 Introduction ....................................................................................................................... 207

7.2 Summary of Key Findings ................................................................................................ 207

7.3 Conclusion......................................................................................................................... 214

7.4 Contributions of the Study ................................................................................................ 216

7.4.1 The Study’s Contribution to Theory ........................................................... 216

7.4.2 Contribution to Practice .............................................................................................. 218

7.5 The Study’s Limitation and Implications for Further Research ....................... 220

References .............................................................................................................................. 222

Appendix 1. In-Depth Personal Interview Protocol for ICT Directors ................................... 259

Appendix 2a. Focus Group Discussion Questions: For Instructors’ Focus Group ................. 260

Appendix 2b: Focus Group Discussion Questions: For Students’ Focus Group ..................... 261

Appendix 2c: Focus Group Discussion Questions: For E-learning Experts Focus Group ..... 262

Appendix 3. Document Checklist ........................................................................................... 263

Appendix 4. Observation Checklist ........................................................................................ 264

Appendix 5: Details of Qualitative Study Participants (Focus Group Participants, In-Depth
Personal Interviews) ......................................................................................... 265

Appendix 6a: Codes Assigned To Each of the Focus Group Discussion Participants ........... 266

Appendix 6b: Codes Assigned To In-Depth Personal Interviews Made with ICT Directors, and
Archival Evidences (Documents) .................................................................... 267

Appendix 7a: The Model Constructs, Items, and Item Codes: For Students’ Questionnaire . 268

Appendix 7b: The Model Constructs, Items, and Item Codes: For Instructors’ Questionnaire
.......................................................................................................................... 272

Appendix 8a: Instructor Questionnaire ................................................................................... 277

Appendix 8b: Student Questionnaire ...................................................................................... 281

Appendix 9: Publications ....................................................................................................... 286

List of Tables
Table 1: IS Empirical Studies Validating the DeLone & McLean Models ........................ 35
Table 2: IS Empirical Studies Validating the Systems Quality Construct.......................... 46
Table 3: IS Empirical Studies Validating the Information Quality Construct .................... 47
Table 4: IS Empirical Studies Validating the User Satisfaction Construct ........................ 48
Table 5: IS Empirical Studies Validating the Net benefits Construct ................................ 50
Table 6: Research Philosophies .......................................................................................... 55
Table 7: Technique/ Process of Theory Building from Case Study Research ................... 62
Table 8: Selected Cases/ Ethiopian Higher Education Institutions (EHEIs)
Implementing E-learning Systems .................................................................... 64
Table 9: Qualitative Data Sources ...................................................................................... 76
Table 10: Criteria for Judging the Quality of Qualitative Case Study Research ................ 81
Table 11: Results of Pilot Study for the Questionnaire Items ............................................ 84
Table 12: Overall Study Participants .................................................................................. 85
Table 13: SEM-PLS Measurement Model Test Threshold Values .................................... 88
Table 14: SEM-PLS Structural Model Tests Threshold Values ......................................... 89
Table 15: Institutional Support Quality ............................................................................. 93
Table 16: User Satisfaction ................................................................................................. 111
Table 17: Systems Usage .................................................................................................... 118
Table 18: E-learning Systems Quality ................................................................................ 124
Table 19: E-learning Content Quality ................................................................................ 134
Table 20: E-learning Systems Outcome ............................................................................. 138
Table 21: Learners’ E-learning Factor ................................................................................ 143
Table 22: Instructors’ E-learning Factor ............................................................................. 148
Table 23: Students’ Sample Size ........................................................................................ 165
Table 24: Gender Distribution ............................................................................................ 165
Table 25: Age Distribution of Students’ Respondents ....................................................... 166
Table 26: Reliability and Validity Test Results .................................................................. 167
Table 27: Discriminant Validity Test ................................................................................. 168
Table 28: Structural Model Hypothesis Testing for Bootstrapping Direct Effect
Results ................................................................................................................ 169

Table 29: Summary of Coefficient of Determination (R2) Values ..................................... 170
Table 30: Instructors’ Sample Size ..................................................................................... 176
Table 31: Gender Distribution ............................................................................................ 176
Table 32: Age Distribution of Instructors’ Respondents .................................................... 177
Table 33: Reliability and Validity Test Results ................................................................. 178
Table 34: Discriminant Validity Test ................................................................................. 178
Table 35: Structural Model Hypothesis Testing for Bootstrapping Direct Effect
Results ................................................................................................................ 180
Table 36: Summary of Coefficient of Determination (R2) values ...................................... 181
Table 37: Summary of the Model’s Measures of the Constructs........................................ 204
Table 38: Summary of the Model’s Hypotheses/Outputs of the Path Model under
each Tested Sample (i.e. Students’ and Instructors’ Samples) ........................... 205
Table 39: The Study’s Key Findings in Relation to the Seed Constructs of the
Updated ISS Model ............................................................................................ 209

List of Figures
Fig.1 The Overall Study Structure ...................................................................................... 11
Fig.2 Trends in Undergraduate Enrollment in Public and Private Higher Education
Institutions ................................................................................................................ 20
Fig.3 Trends in Postgraduate Enrollment in Public and Private Higher Education
Institutions ................................................................................................................ 20
Fig.4 The TPACK Framework ........................................................................................... 27
Fig.5 Khan’s Octagonal Framework .................................................................................. 30
Fig.6 Basic Concepts Underlying Acceptance Models ...................................................... 31
Fig.7 The Original IS Success Model ..................................................................................... 32
Fig.8 The Updated IS Success Model ................................................................................. 34
Fig.9 E-learning Success Model ......................................................................................... 38
Fig.10 Extended E-learning Success Model ....................................................................... 39
Fig.11 Virtual Communities’ Model................................................................................... 40
Fig.12 Measuring E-learning Systems Success Model ....................................................... 41
Fig.13 E-learning Hierarchical Model ................................................................................ 42
Fig.14 Lwoga’s Model ........................................................................................................ 43
Fig.15 The Research Onion ................................................................................................ 54
Fig.16 BDU Students Taking Online Exams via the E-learning System ........................... 67
Fig.17 ECSU Online Campus View ................................................................................... 68
Fig.18 NVivo’s Qualitative Data Analysis Results of Institutional Support Quality ........ 95
Fig.19 NVivo’s Qualitative Data Analysis Results of Institutional Support Quality:
Technical Support..................................................................................................... 95
Fig.20 NVivo’s Qualitative Data Analysis Results of Institutional Support Quality:
Availability of E-Learning Infrastructure ................................................................ 98
Fig.21 NVivo’s Qualitative Data Analysis Results of Institutional Support Quality:
E-Learning Policy .................................................................................................... 100
Fig.22 NVivo’s Qualitative Data Analysis Results of Institutional Support Quality:
Motivation ............................................................................................................... 102
Fig.23 NVivo’s Qualitative Data Analysis Results of Institutional Support Quality:
Computer Training .................................................................................................. 105
Fig.24 NVivo’s Qualitative Data Analysis Results of Institutional Support Quality:
Quality Assurance Mechanisms .............................................................................. 107
Fig.25 NVivo’s Qualitative Data Analysis Results of Institutional Support Quality:
Intellectual Property Protection ................................................................................ 109
Fig.26 NVivo’s Qualitative Data Analysis Results of User Satisfaction Determining
E-learning Systems Success .................................................................................... 112
Fig.27 NVivo’s Qualitative Data Analysis Results of User Satisfaction: Enjoyable
Experience ............................................................................................................... 112
Fig.28 NVivo’s Qualitative Data Analysis Results of User Satisfaction: Overall
User Satisfaction ...................................................................................................... 114
Fig.29 NVivo’s Qualitative Data Analysis Results of Systems Usage Factor
Determining E-learning Systems Success ............................................................... 119
Fig.30 NVivo’s Qualitative Data Analysis Results of Systems Usage: Frequency of
Use and Dependability Lower-Level ...................................................................... 119
Fig.31 NVivo’s Qualitative Data Analysis Results of E-learning Systems Quality
Factor Determining E-learning Systems Success .................................................... 125
Fig.32 NVivo’s Qualitative Data Analysis Results of E-learning Systems Quality:
User Friendly ............................................................................................................ 125
Fig.33 NVivo’s Qualitative Data Analysis Results of E-learning Systems Quality:
Systems Useful Feature ............................................................................................ 127
Fig.34 NVivo’s Qualitative Data Analysis Results of E-learning Systems Quality:
Systems Ease of Use ................................................................................................ 129
Fig.35 NVivo’s Qualitative Data Analysis Results of E-learning Systems Quality:
System Availability ................................................................................................. 130
Fig.36 NVivo’s Qualitative Data Analysis Results of E-learning Systems Quality:
Systems Response ................................................................................................... 131
Fig.37 NVivo’s Qualitative Data Analysis Results of E-learning Systems Quality:
Secure ...................................................................................................................... 132
Fig.38 NVivo’s Qualitative Data Analysis Results of E-learning Content Quality .......... 135
Fig.39 NVivo’s Qualitative Data Analysis Results of E-learning Content Quality
Categories ................................................................................................................. 135
Fig.40 NVivo’s Qualitative Data Analysis Results of E-learning Systems Outcome
as Determinant Factor for E-learning Systems Success .......................................... 139
Fig.41 NVivo’s Qualitative Data Analysis Results of Time Savings, Enhanced
Academic / Teaching Performance, Cost Savings ................................................... 140
Fig.42 NVivo’s Qualitative Data Analysis Results of Learners’ E-learning Factor .......... 144
Fig.43 NVivo’s Qualitative Data Analysis Results of Learners’ E-learning Factor:
Computer Self-Efficacy ........................................................................................... 144
Fig.44 NVivo’s Qualitative Data Analysis Results of Learners’ E-learning Factor:
Attitude Towards E-learning ................................................................................... 146
Fig.45 NVivo’s Qualitative Data Analysis Results of Instructors’ E-learning Factor
Determining E-learning Systems Success ............................................................... 149
Fig.46 NVivo’s Qualitative Data Analysis Results of Instructors’ E-learning Factor:
Creating an E-learning Environment ....................................................................... 150
Fig.47 NVivo’s Qualitative Data Analysis Results of Instructors’ E-learning Factor:
Computer Self-Efficacy ........................................................................................... 151
Fig.48 NVivo’s Qualitative Data Analysis Results of Instructors’ E-learning Factor:
Attitude Towards E-learning ................................................................................... 152
Fig.49 NVivo’s Qualitative Data Analysis Results of Instructors’ E-learning Factor:
Timely Response ..................................................................................................... 153
Fig.50 Proposed E-learning Systems Success Model ......................................................... 161
Fig.51 The Overall Proposed Model Test Results over Students’ Sample ......................... 170
Fig.52 The Overall Proposed Model Test Results over Instructors’ Sample ...................... 181

Chapter One
Introduction
1.1 Background of the Study

In recent years, information technology has become a potential solution for higher education
institutions seeking to improve the quality of education and reduce education-related costs (Yakubu
and Dasuki, 2018; Tossy and Brown, 2017; Darawsheh et al., 2016). Furthermore, the continuous
advancement of information technology is making teaching and learning easily accessible, giving
rise to e-learning (Masa’deh et al., 2016). E-learning is a special form of information system,
defined as an innovative approach to educational delivery via electronic forms of information that
enhances the learner’s knowledge, skills, and performance (Clark and Mayer, 2011). E-learning has
various benefits, such as personalized learning, ease of access to learning, effective and efficient
means to standardize and deliver content, on-demand content availability, interactivity, and
self-pacing (Obeidat et al., 2015). It also provides flexible, convenient, and diverse learning
environments that meet learners’ needs from anywhere and at any time (Hajir et al., 2015; Salter et
al., 2014). Moreover, e-learning can potentially reduce the need for large numbers of classrooms,
instructors, and other related educational facilities, and thus their associated costs (Moravec et al.,
2015). It also offers considerable value for the effectiveness and efficiency of institutional
performance (Arkorful & Abaidoo, 2014).

Several studies show that developing countries could reap the benefits of e-learning by using it to
compensate effectively and efficiently for shortages of qualified instructors and educational
facilities (Mtebe and Raisamo, 2014; Khan et al., 2012). Most African higher education institutions
obtain funding for their e-learning initiatives from developed nations. These initiatives, however,
are characterized by stories of failure, largely because they attempt to mimic e-learning models
developed for and used by the higher education institutions of developed nations (Tossy, 2017;
Boyi, 2014; Manduku et al., 2012; Farahat, 2012; Nihuka and Voogt, 2012; Eke, 2011; Matti et al.,
2010; Ndume et al., 2008).

A closer inspection of the current use of e-learning in Ethiopian HEIs likewise reveals that the
successful implementation and sustained use of e-learning systems are challenged by several
multi-dimensional factors (Hagos et al., 2018; Webometrics, 2016; Ketema and Nirmala, 2015;
Bass, 2011; Beyene, 2010; Tibebu et al., 2009; Unwin, 2008; Hare, 2007).

As a special form of information system, an e-learning system’s success depends on various
contextual factors that vary across countries with the types of users, culture, technology, available
infrastructure, and the cost of e-learning implementation (Tossy, 2017; Lwoga, 2014). Measuring
the success of information systems in general, and of e-learning systems in particular, is vital to
realizing the benefits they offer and maintaining their sustained use (Aparicio et al., 2016;
Holsapple and Lee-Post, 2006). Accordingly, this study was designed to identify e-learning systems
success factors and develop a holistic model for the case of Ethiopian Higher Education Institutions
(EHEIs) that would help them to sustainably implement e-learning systems. Finally, the study puts
forward relevant theoretical and practical contributions.

1.2. Motivation

The researcher’s working experience with e-learning, together with the relevant literature on
EHEIs, indicates that e-learning implementation at EHEIs is characterized by various failure
factors. Chief among them are low IT diffusion, or a weak culture among academia of integrating
IT into the teaching and learning process; a low level of institution-wide support; and the lack of
existing models or sufficient literature to guide the successful implementation and sustained use of
e-learning systems. As a result, EHEIs have been unable to sustainably integrate e-learning into the
teaching and learning process, either as a purely online modality or in support of the traditional
teaching modality (Belwal et al., 2010; information obtained from relevant departments of EHEIs
adopting e-learning systems, 2018). EHEIs adopting e-learning systems also focus more on the
technology aspect than on the user, institutional, and other related aspects, so the factors affecting
e-learning systems success are only partially addressed (Hagos et al., 2018). Beyene (2010)
likewise stressed that failures of e-learning systems in EHEIs occur because the challenges of
applying these systems are not adequately considered from the different perspectives of e-learning
stakeholders; instead, such problems are usually viewed solely as problems of technological
infrastructure. E-learning systems success depends on factors more complex and varied than the
technology alone (Bhuasiri et al., 2012), and overlooking the interdependent factors that affect it
has a massive impact on the overall success of the system (Eom & Ashill, 2018).

In developing countries like Ethiopia, where skilled manpower, classroom facilities, and printed
materials are scarce and costly, the application of e-learning in the teaching and learning process
plays a significant role in reducing costs and improving the quality of education (Selim, 2007).
However, as with any information system, e-learning should be evaluated periodically by
identifying the determinant factors that affect its successful implementation and sustained use, in
order to realize its potential benefits (Ozkan & Koseler, 2009; Lwoga, 2014).

Though there are ad hoc efforts at the institutional level, the sustained use of e-learning systems
across EHEIs is challenged by complex factors that have hampered their successful
implementation, whether in support of the conventional teaching and learning process or as a
purely online modality (Ketema and Nirmala, 2015; Beyene, 2006; information received from
relevant departments of EHEIs adopting e-learning systems, 2018).

Several studies show that the factors affecting e-learning systems success vary across settings
depending on contextual factors, resulting in a lack of consensus on an e-learning systems success
model that could serve across all contexts (Yilmaz, 2017; Uppal et al., 2017). Pusnik et al. (2011)
pointed out that the degree of impact of e-learning success factors differs with user and technology
types, further emphasizing the context-dependent nature of e-learning systems. Sun et al. (2008)
developed a model identifying information quality, systems quality, service quality, systems usage,
and user satisfaction as the factors affecting e-learning systems; they did not find computer
self-efficacy (of students and instructors) to be a determinant factor in developed nations. In studies
such as Yakubu & Dasuki (2018), Bhuasiri et al. (2012), and Arbaugh (2002), on the other hand,
computer self-efficacy is among the most frequently cited determinant factors affecting e-learning
systems success in developing countries, where IT diffusion is low. Eom (2011) did not find
instructor quality, expressed in terms of attitudes towards e-learning, to be a critical factor affecting
e-learning systems success at a university with a high level of e-learning usage. However, Lwoga
(2014) countered this finding, identifying instructor quality as a critical success factor at a
Tanzanian university exhibiting a low level of e-learning usage.

In light of the above facts, the researcher was motivated to identify the factors that determine
e-learning systems success based on the views of several key e-learning stakeholder groups, and to
develop a holistic model that could help EHEIs periodically evaluate their e-learning systems in a
way that enhances successful implementation and sustained use. Since education is a cornerstone
of any society’s development, developing an e-learning systems success model is strategically
important to EHEIs, to users of the system, and to society in general. Such a solution will offer both
theoretical and practical contributions for other countries whose contexts resemble that of the
EHEIs.

1.3. Statement of the Problem

Several studies show that e-learning systems, as the main product of adopting and using new and
more advanced IT innovations in the education sector, have encountered problems of failure
(Almajali et al., 2016; Tarhini et al., 2016; Masa’deh et al., 2016; Hajir et al., 2015; Selim, 2007).
According to studies by Kattoua et al. (2016) and Sun et al. (2008), in spite of the immense
potential benefits e-learning systems offer to higher education institutions, failures exist. These
authors also point out that despite the considerable resources invested in e-learning systems, little is
known about the determinant factors that could contribute to their success. Holsapple & Lee-Post
(2006), Arbaugh (2002), Arbaugh & Duray (2002), Hong (2002), and Piccoli et al. (2001)
identified six dimensions that could result in e-learning user satisfaction: the student, the instructor,
the type of course, the kind of technology used, the e-learning system design, and the environment.
However, Sun et al. (2008) argue that these suggestions are impractical due to the complex nature
of e-learning; moreover, the six dimensions reflect only one construct, “user satisfaction”.
Accordingly, they recommended that studies identifying the factors contributing to e-learning
systems success and placing them in a holistic model are vital for theory and practice. Similarly,
several other studies, such as Eom & Ashill (2018), Aparicio et al. (2016), Dorobaţ (2014), Lwoga
(2014), Wu & Zhang (2014), Venter et al. (2012), Tai et al. (2012), Farahat (2012), Cheng (2012),
Bhuasiri et al. (2012), Pusnik et al. (2011), Chen (2010), Floropoulos et al. (2010), Wang et al.
(2007), Selim (2007), and Holsapple & Lee-Post (2006), reported a lack of consensus on an
e-learning systems success model that could be adopted across all settings, mainly because
e-learning systems success depends on user types, technology, culture, and other contextual
factors.

Many higher education institutions in African countries, including Ethiopia, have invested
in e-learning content development and its timely updates; salaries and incentives paid to the direct
and indirect e-learning staff involved in implementing e-learning systems; and e-learning
infrastructure such as dedicated e-learning labs and e-studios, relevant e-learning
software such as authoring systems, and data centers, in order to make their e-learning
implementations successful (Yakubu et al., 2018; Tossy, 2017; Ketema and Nirmala, 2015;
Tyechia, 2014; Lwoga, 2014; Nagunwa and Lwoga, 2013; Ssekakubo et al., 2011; Rhema and
Miliszewska, 2010; Matti et al., 2010; Beyene, 2010; Gunga and Ricketts, 2007; and information
obtained from relevant departments of EHEIs adopting e-learning systems). However, these
studies emphasized that despite the investments made, e-learning systems in most African
universities are characterized by various failure factors, largely an effect of mimicking e-learning
studies and models developed for institutions in developed countries. HEIs use various types of
Learning Management Systems (LMS). According to Hariri (2013), an LMS is a platform
that makes a virtual learning environment possible through e-learning software used to
manage e-learning content, ease collaboration, deliver courses, run online exams, track
students' learning progress, and facilitate knowledge sharing among faculty and students,
supporting the teaching and learning process in HEIs irrespective of time and distance. The most
popular LMS used in HEIs are Blackboard and Moodle (Pishva et al., 2010). Although
these popular LMS are widely used in HEIs worldwide, the availability of such tools
cannot assure e-learning systems success. Carvalho et al. (2011) argue that although higher
education institutions are investing heavily in online/e-learning technologies, there are critical
concerns that these LMS are only being used as electronic repositories of instructor teaching
materials. Aparicio et al. (2016), Wu & Zhang (2014), Selim (2007), and Holsapple & Lee-Post
(2006) argue that people are not robots to whom we can simply prescribe specific technologies; rather,
other factors, such as user and institutional support factors, need to be identified in order to achieve the
desired potential benefits of e-learning systems.

Beyene (2010; 2006) argues that there is growing interest in and awareness of integrating e-learning
into the teaching and learning process of EHEIs. However, the problems are not only
massive but are also not well conceived from the perspectives of multiple e-learning stakeholders,
which has prevented EHEIs from sustainably realizing the benefits of e-learning systems (or the LMS used in
EHEIs). Furthermore, EHEIs adopting e-learning face generic challenges and problems in terms
of the cost of technology (Ethiopia has only one telecom operator, which makes costs hard to afford
and internet access and IT advancement more complicated), the type of
ICT policy, culture, the existence of a complex network of actors, and low IT diffusion
compared to higher education institutions of developed nations (Beyene, 2010; Tibebu et al.,
2009; Hare, 2007; Beyene, 2006). Nawaz & Kundi (2010) and Teder et al. (2009) also
emphasized that although some of the challenges of e-learning implementation in developing
countries are similar, some are unique to particular countries. To this end, the authors stressed
that e-learning success models need to be developed in line with the human and contextual
factors of each country, asserting that no two developing countries are alike, which makes the
factors affecting e-learning systems success vary across countries. Hagos & Negash
(2014) also pointed out that the adoption rate of e-learning systems in Ethiopian higher
education is promising, and that EHEIs need to effectively utilize the potential benefits of e-learning
systems in supporting their teaching and learning processes.

E-learning systems success depends on a complex socio-technical network of actors, which
requires thorough investigation to come up with a holistic e-learning systems success model
(Tarhini et al., 2016; Masa'deh et al., 2016; Moravec et al., 2015; Mothibi, 2015). Furthermore,
different groups of e-learning stakeholders (i.e., instructors, students, IT staff, and e-learning
developers) deal with e-learning systems (Selim, 2007). However, these stakeholder groups
reveal significant differences, and often conflicting, viewpoints (Ozkan et al.,
2009; Sedera et al., 2007). This subject is still a critical concern in the field of information
systems in general and in e-learning systems in particular (Arkorful & Abaidoo, 2014; Bhuasiri
et al., 2012; Selim, 2007; Holsapple & Lee-Post, 2006; DeLone and McLean, 2003).

Overall, e-learning's dependence on contextual factors such as user types, politics, culture,
technology, infrastructure, and implementation costs (Yakubu and Dasuki, 2018;
Noesgaard & Orngreen, 2015; Lwoga, 2014; Karaaslan, 2013; Venter et al., 2012; Puskin et al.,
2011; Nduman, 2008; Sun et al., 2008); the difficulty of understanding the concept of e-learning,
because different authors use the term differently depending on their particular professional
approaches and interests (Tossy et al., 2017; Sangra et al., 2012; Cohen and Nycz, 2006); and
conflicting interests among different groups of e-learning stakeholders (Beyene, 2010; Sun et al.,
2008; Sedera et al., 2007) have all contributed to the lack of consensus on the factors that determine
e-learning systems success across all contexts (Uppal et al., 2017;
Aparicio et al., 2016; Wu & Zhang, 2014; Tai et al., 2012; Farahat, 2012; Cheng, 2012; Pusnik et
al., 2011; Chen, 2010; Floropoulos et al., 2010; Wang et al., 2007).

Accordingly, the researcher attempted to fill this research gap by identifying factors that
contribute to e-learning systems success based on the different perspectives of e-learning
stakeholders and by developing a holistic model, taking the case of Ethiopian higher education
institutions.

In view of the research problem, this study addressed the following specific research questions:

1. What factors determine e-learning systems success in the context of Ethiopian higher
education institutions?

2. What kind of e-learning systems success model should Ethiopian higher education
institutions use to evaluate e-learning systems success?

1.4. Study Objectives

1.4.1 Major Objective

The major objective of this study is to develop an e-learning systems success model in the
context of Ethiopian higher education institutions.

1.4.2 Specific Objectives

The specific objectives of the study are:

o To identify factors that determine e-learning systems success in the EHEIs context based on
the different views of e-learning stakeholders.
o To place the identified factors that determine e-learning systems success in a holistic
model that may help EHEIs evaluate their e-learning systems. The identified factors that
contribute to the success of e-learning systems were placed in a model, and their
relationships were determined from the empirical findings of the study. Validity and
reliability tests were performed on the model constructs. Finally, the overall
model was tested with respect to its measurement and structural model components in
order to determine its power in evaluating e-learning systems success (a brief illustration
of the reliability testing follows this list).
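As a minimal illustration of what the reliability part of such testing computes, the Python sketch below implements Cronbach's alpha for a set of survey items measuring a single construct. The construct name, item labels, and responses are hypothetical, invented for illustration; this is not the study's actual analysis pipeline.

import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # items: one row per respondent, one column per Likert item of a single construct
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)     # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical responses to three "system quality" items on a 5-point scale
sq_items = pd.DataFrame({"sq1": [4, 5, 3, 4, 2],
                         "sq2": [4, 4, 3, 5, 2],
                         "sq3": [5, 5, 2, 4, 3]})
print(round(cronbach_alpha(sq_items), 2))

Alpha values of 0.70 or above are commonly treated as acceptable reliability for a construct's items.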

1.5. Definition of Key Terms

E-learning Systems: Defined as the use of an online system that integrates the conventional or
traditional teaching and learning method, using e-learning technology and a web browser as the
main interaction tool (Sangra et al., 2012; Kirkwood, 2009; Mason and Rennie, 2006).

Model: Defined as an abstracted way of depicting a process that reveals relationships among
variables, which can be generalized to solve problems or provide practical guidance for the
realization of theories (Muller and Urbach, 2013).

Theory: A set of constructs or abstract concepts, definitions, and propositions that present a
systematic view of phenomena by specifying relations among variables, with the purpose of
explaining and predicting phenomena (Suddaby, 2010; Shanks, 2002).

Conceptual Framework: Is the application of more than one theory to capture the complexity
of the research problem (Imenda, 2014; Miles & Huberman, 1994).

Theoretical Framework: The application of a theory, or of concepts drawn from the same theory,
to provide a description of a phenomenon or shed light on a given research problem (Gregor, 2006;
Glaser and Strauss, 1967).

Higher Level Education Institution: Also called tertiary education, an optional final stage
of formal learning that occurs after completion of secondary education, often delivered at
universities and colleges (Forest & Kinser, 2002).

Asynchronous Technology: It occurs in delayed or offline mode which does not require the
simultaneous participation of students and instructors (Sabau, 2005).

Synchronous Technology: It occurs in real time by using audio or video conferencing, text chat
and other real time technologies which require the simultaneous participation of students and
instructors (Romiszowski & Mason, 2004).

Blended Learning: Defined as the combination of educational instruction from different models
of teaching and learning, such as combining the conventional teaching system with computer-based
technologies or e-learning (Khan, 2005).

Online Learning: Also called virtual learning, internet learning, or computer-assisted learning;
defined as the use of the internet to access learning materials and to interact with the content and the
instructor without limitations of time and place (Arkorful & Abaidoo, 2014; Khan, 2005).

1.6. Significance of the Study

This study developed a model that would help higher education institutions evaluate their e-learning
systems success and sustain usage. The study also identified important factors
that contribute to the success of e-learning systems, as well as problems related to e-learning systems
implementation in the case of EHEIs.

Apart from the study's practical contribution, the research attempted to develop a theoretical
model that can also be used in other settings exhibiting features similar to those of
EHEIs. To this end, Alvesson & Karreman (2007), Johns (2006), and Eisenhardt (1989)
emphasize the significance of this type of contribution, noting that, compared to
general theories, theories developed in a specific context, with the relevant constructs and their
items examined, contribute a deeper understanding of a focal phenomenon and extend
existing theories.

Furthermore, the findings of this study may also serve as a stepping-stone to other researchers for
further investigations in this type of research domain.

1.7. Scope of the Study

The research is delimited to public and private EHEIs that use e-learning systems for
academic purposes. As e-learning systems are operationally defined in this study as the use of an
online system that integrates the conventional or traditional teaching and learning method, using
e-learning technology and a web browser as the main interaction tool, the study is delimited to
EHEIs adopting Learning Management Systems (LMS) in their teaching and learning process.
Accordingly, the success of e-learning systems is mapped to the success of the specific LMS
usage. For this reason, other organizations and research institutes that use e-learning systems
are not included in the study. The research is further delimited to addressing factors that
determine e-learning systems success.

1.8. Ethical Considerations

It is the intention of every researcher to conduct an academically sound research study. In
addition to focusing on rigor, a researcher needs to ensure that research procedures are designed
to meet ethical standards.

To address ethical considerations in the research study, the following ethical principles were
followed: approval and access, informed consent, confidentiality and anonymity of research
participants, avoidance of harm to participants, and voluntary participation in the research.

Furthermore, proper acknowledgement and citation have been made for concepts and ideas
reviewed from relevant literature.

1.9. Organization of the Thesis

This thesis is organized into seven chapters. Chapter one deals with the background of the study,
the research problem, research questions, study objectives, and the significance and scope of the
study; its emphasis is to present the main issues to be addressed. Chapter two reviews literature
relevant to the study objectives. Chapter three deals with the overall research methodology,
discussing the research design and the methods employed to achieve the research objectives;
its main objective is to select the research methods most appropriate to this study's objectives.
Chapter four discusses the qualitative data analysis and findings, which result in the development
of the proposed model. Chapter five deals with quantitative data presentation, analysis, and
findings; its objective is to test and validate the model. The results of chapters four and five are
discussed in Chapter six. The last chapter presents the conclusions of the thesis and implications
for theory and practice, and a way forward for future research. Figure 1 depicts the overall
structure of the study.

[Figure 1 flowchart: extensive readings and problem identification define the study scope, problem statement, and review of relevant literature, framing the study objectives (Chapters 1 & 2); these feed the overall study's research methodology (Chapter 3); qualitative data analysis and findings lead to the development of the proposed model, followed by model testing and validation, interpretation of results, and discussion of the overall study findings (Chapters 4, 5 & 6); the study closes with conclusions and research implications (Chapter 7).]

Fig 1. The Overall Study Structure

Chapter Two

Literature Review

2.1 Introduction

Advancements in information systems have played a vital role in making knowledge easily and
instantly shareable, at a wider and faster pace, across the globe. E-learning systems, one of the recent
advancements in the information systems discipline, are transforming higher education institutions into a
digital space where education can be accessed at any time and place, reducing the cost of
physical infrastructure, filling the gap created by shortages of skilled manpower and the need to recruit
large numbers of skilled teachers/faculty, and in effect enhancing the effectiveness, efficiency, and
quality of education. However, higher education institutions are not exploiting these potential
benefits of e-learning systems because their sustained usage is a function of multi-dimensional
factors. Accordingly, this chapter is structured into the following sub-sections: Section 2.2
provides a general overview of e-learning; Section 2.3 discusses learning theories and their
implications for e-learning; Section 2.4 presents a review of related literature from the perspective of
EHEIs' e-learning systems; Section 2.5 discusses technology/ICT-mediated educational models
and concepts; Section 2.6 discusses information systems success models; Section 2.7 deals with
e-learning systems success models; Section 2.8 presents the study's seed constructs. Finally,
Section 2.9 summarizes the whole chapter.

2.2 Overview of E-learning

Compared to the history of paper-based distance education, the history of e-learning is a
recent phenomenon. Though in its very essence e-learning can mean any electronically mediated
learning, its use became more significant after the emergence of the World Wide Web (WWW)
(Kirkwood, 2009; Lin, 2007). Web technology integrates text, audio, and video information
and provides e-learning services through synchronous and asynchronous interaction modes
(Mason, 1998). Before discussing the determinant factors of e-learning systems success, it is
important to set out the study's operational definitions of e-learning and e-learning
systems.

E-learning can be difficult to understand because different authors use the term
differently depending on their particular professional approaches and interests, which has also
contributed to the lack of consensus on the factors that determine the success of e-learning (Sangra et
al., 2012).

After reviewing several articles, Romiszowski (2004) also indicated that "The term 'E-learning'
(or E-Learning, or eLearning, or e-Learning, or e-learning – there being no agreement regarding
how even to write the term) tends to be used. In the one hundred or so articles accessed, the term
was defined nearly 50 times" (p. 5). He further stressed that the authors presumably assumed
the reader already knew the right meaning of e-learning, but such an assumption is
misleading to e-learning researchers, academia, and practitioners.

In several other studies, the term e-learning has been used differently by different authors, as
Web Based Learning (WBL), Web Based Instruction (WBI), Web Based Training (WBT),
Internet Based Training (IBT), Online Resource Based Learning (ORBL), Blended Learning,
Advanced Distributed Learning (ADL), Tele-Learning (T-L), Computer-Supported Collaborative
Learning (CSCL), and Mobile Learning (M-learning or ML) (Liaw & Huang, 2011; Kirkwood,
2009; Li et al., 2009; Ellis, 2009; Anderson, 2008; Liao & Lu, 2008; Shee & Wang, 2008; Sun et
al., 2008; Mason and Rennie, 2006; Lee & Lee, 2006; Khan, 2005; Bermejo, 2005; Rosenberg,
2001).

Panda and Mishra (2007) define e-learning as a type of distance education supporting the
teaching and learning process through video tapes, television, radio, the internet, and extranets as
organized by HEIs. In this sense, the term e-learning comprises learning and technology: learning
is a cognitive process of acquiring knowledge, whereas the technology is a tool that facilitates
the learning process (Aparicio, 2016). Other researchers define e-learning as a technology-based
teaching and learning modality supported by different forms of electronic media, including the
internet, extranets, intranets, and TV (Becker & Jokivirta, 2007). Others define it as learning via
web-based communication and interactive devices that makes education easily accessible
to learners despite time and place restrictions (Albusaidi & Alshihi, 2012; Zanjani et al., 2012;
Carvalho et al., 2011; Mott, 2010; Sclater, 2008; Malikowski et al., 2007). Still other
scholars define e-learning as electronic learning, which uses electronic devices for teaching and
learning purposes (Lykins, 2011; Anderson, 2008; Keegan, 2005). In a more restricted sense, e-learning
systems refer to the use of online systems that replicate conventional didactic features,
using e-learning technology and a web browser as the main interaction tool (Kirkwood, 2009;
Mason and Rennie, 2006). In this sense, e-learning systems are web platforms supporting the
teaching and learning process.

In response to the lack of consensus on the definition of e-learning, which has affected the
understanding of e-learning among various stakeholders, Sangra et al. (2012)
categorized definitions of e-learning into four major groupings after performing an
extensive literature review (of publications between 2005 and 2012) and conducting a Delphi
survey to validate a proposed definition that could represent the wide array of definitions of
e-learning. The four major categories of e-learning definitions are:

1. Technology-Driven Definitions: This category represents e-learning as the use of
technology for learning.

2. Delivery-System-Oriented Definitions: This group defines e-learning as a way of
accessing knowledge (through the teaching and learning process), emphasizing ease of
access to educational resources.

3. Communication-Oriented Definitions: This category considers e-learning to be a
communication, interaction, and collaboration tool and assigns secondary roles to its
other characteristics.

4. Educational-Paradigm-Oriented Definitions: This category defines e-learning as a new
way of learning or an improvement to the existing educational paradigm.

After reviewing these four categories of e-learning definitions from the extant literature
and validating them with e-learning experts (through the Delphi technique), Sangra et
al. (2012) proposed an inclusive definition representing all four categories: "E-learning is
an approach to teaching and learning, representing all or part of the educational model applied,
that is based on the use of electronic media and devices, as tools for improving access to
training, communication and interaction and that facilitates the adoption of new ways of
understanding and developing learning" (p. 152).

Accordingly, this study operationally adopts, in the broader sense, the meaning of e-learning from
Sangra et al. (2012) as the use of information technology tools and other technologies to improve
access to communication and interaction, facilitating the adoption of new ways of understanding
and developing learning, because this definition reflects the diverse views of e-learning
stakeholders. In the more restricted sense, e-learning systems are taken as the use of an online
system (a Learning Management System) as the main interaction tool replicating conventional
didactic features, following Kirkwood (2009) and Romiszowski (2004), in that employing
information technology tools directly or indirectly to support learning (such as collaborative
learning, online quizzes, and online exams) constitutes the very essence of e-learning. In this
sense, information technologies or online systems provided by HEIs to facilitate the teaching and
learning process are considered e-learning systems. This conceptualization is also in line with
Masrom (2007) and Rosenberg (2001), who note that even the pedagogical thinking around
e-learning is closely related to computer-based training.

If effectively utilized, e-learning systems offer a wide range of benefits: ease of access to
educational services; ease of creating, updating, and revising e-learning content via low-cost or
even free online e-learning software (i.e., Learning Management Systems (LMS)); improved
interaction between and among instructors and students; the ability to integrate course materials
through graphics, audio, and video; ease of access via different devices such as smart mobile
phones, computers, or tablets; collaborative learning; low-cost access for many groups of students
at any time; and a valuable collaboration platform for sharing tacit and explicit knowledge among
instructors and students (Bindhu and Manohar, 2015; Arkorful & Abaidoo, 2014; Khan, 2005).

2.3 Learning Theories and Their Implications for E-Learning

According to a review by Skinner (1974), people have been trying to understand learning for
over 2,000 years. Learning theorists have carried out a debate on how people learn that began at
least as far back as the Greek philosophers Socrates (469–399 B.C.), Plato (427–347 B.C.),
and Aristotle (384–322 B.C.). The debates that have occurred through the ages recur today in
a variety of viewpoints about the purposes of education and about how to encourage learning. To
a substantial extent, the most effective strategies for learning depend on what kind of learning is
desired and toward what ends (Craik & Tulving, 1975).

The nineteenth century brought about the scientific study of learning. Working from the thoughts
of Descartes and Kant, and especially the influence of Charles Darwin, psychologists began
conducting objective tests to study how people learn, and to discover the best approach to
teaching (Skinner, 1974). The 20th century debate on how people learn has focused largely on
behaviorist vs. cognitive psychology (Folden, 2012).

E-learning systems being multidimensional, learning theories have their own impact on the e-learning
realm. There are three major categories of learning theories with implications for
the design and development of e-learning courses (Ertmer and Newby, 1993):
behaviorism, cognitivism, and constructivism. A blend of these learning theories can provide useful
inputs for designing and developing e-learning courses (Alzaghoul, 2012).

2.3.1 Behaviorism

This school visualizes the mind as a "black box", meaning that a response to a stimulus can be
observed quantitatively as visible behavior, disregarding the thought processes
occurring in the mind. Early computer-mediated learning systems were designed mainly under the
guidance of this school of thought. It hypothesizes that learning is a change in visible
behavior caused by external stimuli. Skinner (1975) argues that since it is difficult to prove
inner mental processes with scientific methods, emphasis must be given to cause-and-effect
relationships that can be established by observation. Alzaghoul (2012) and Modritscher (2006)
summarize the implications of the behaviorist school for e-learning as follows:

1- Students need to be informed of the explicit outcomes of the learning in order to set their
expectations and judge for themselves the outcome of the online lesson.
2- E-learning course designers need to describe sequences of instruction, using
conditional or unconditional branching to other instructional units and pre-determining
choices within the course.
3- Students must be tested to evaluate whether or not they have achieved the learning
outcomes.
4- Students are expected to build proficiency through frequent review or revision, with check
tests at strategic points or repeated practice with feedback.

However, academics and instructors argue that not everything one learns is visible, and they view
learning as more than a change in behavior. This prompted the move from behaviorist to cognitive
learning approaches (Anderson, 2008).

2.3.2 Cognitivism

The cognitivist school of thought argues that the "black box" of the learner's
mind should be opened and understood, emphasizing that learning is an internal process. The
learner is conceptualized as an information processor (just like a computer), implying that
learning depends on the learner's processing capacity and effort (Craik & Tulving, 1975).
Cognitive psychology holds that learning involves memory, meta-cognition, thinking, and
motivation, and that reflection plays an important part in the learning process.
This learning theory sees learning as an internal process and argues that what someone learns depends
on the processing capacity of the learner, the degree of effort exerted in the learning process,
and the depth of the processing (Craik & Tulving, 1975; Craik & Lockhart, 1972). It also
argues for the significance of individual differences and for a diversity of learning strategies in online
instruction to recognize those differences. Learning style is about how a learner perceives,
interacts with, and responds to the learning environment; overall, it is a measure of individual
differences (Folden, 2012). In this view, learners do not simply acquire knowledge from
outside; they actively construct it by interpreting and processing the information they acquire
(Anderson, 2008). Connectivism is a more recently adopted learning approach in this vein
(Downes, 2006).

Implications of the cognitivist paradigm for e-learning, summarized from Alzaghoul
(2012), Anderson & Elloumi (2004), and Hung (2001), are as follows:

1- E-learning/online learning materials should include activities for different learning
and cognitive styles.
2- The instructor's teaching strategy should promote the learning process by engaging all
senses, focusing the learner's attention on relevant and critical information, providing a
rationale for each instruction, and matching the cognitive level of the learner.

17| P a g e
3- The instructional designer needs to tie new information to existing information
from long-term memory, using advance organizers to activate existing cognitive
structures.
4- The e-learning content needs to be properly segmented to prevent cognitive overload.

2.3.3 Constructivism

In the late nineties, there was a move towards the constructivist school of thought.
Some studies confirm that this school of thought, which emphasizes knowledge construction
based on the learner's prior experience, is in line with the very essence of e-learning, as it promotes
knowledge sharing among learners (Modritscher, 2006). This school places greater emphasis on
learning as contextual: learning activities that help learners contextualize information
need to be employed in e-learning/online instruction (Anderson & Elloumi, 2004). In most
pedagogies using constructivism, the instructor's role is not only to observe and evaluate but
also to engage with the learners while they are completing activities, posing questions to the
students to enhance inductive reasoning (Cooper, 1993; Wilson, 1997).

Experiences and social interactions play a vital role in the learning process (Anderson &
Elloumi, 2004). Based on Hung (2001), Modritscher (2006), and Alzaghoul (2012), the
implications of this school of thought for e-learning are as follows:

1. To enable learners to construct their own knowledge, e-learning instructors
need to provide adequate and easily interactive online instruction.
2. Learners should be given ownership of the learning process. Furthermore, there should be a
form of guided discovery in which learners make their own decisions on learning goals but
can also draw on guidance from the instructor.
3. E-learning instructors should emphasize interactive learning activities, such as
collaborative and cooperative learning, in order to facilitate constructivist learning.
4. Learning should be aided by examples and use cases for theoretical information in a way
that enhances e-learning course quality.

2.4 Review of Related Literature in E-learning Systems with the Case of EHEIs

Before presenting the review of literature related to e-learning systems in the case of Ethiopian
higher education institutions, it is worthwhile to briefly discuss Ethiopia and the current
status of Ethiopian public and private higher education institutions.

2.4.1 Country Profile

Ethiopia has an estimated population of 105 million, widely spread over a territory of about
1.1 million sq. km (ECSA, 2018). It is geographically located in the Horn of Africa, bordered by
Eritrea, Somalia, Kenya, South Sudan, Sudan, and Djibouti. Although Ethiopia is
one of the fastest growing countries in the world, it is also one of the poorest, with a per capita
income of $794 (ECSA, 2018). Ethiopia's government aims to reach lower-middle-income
status by 2025 through various strategies, such as investing in physical infrastructure (public
investment projects), transforming the country into a manufacturing hub, and making education
accessible to all citizens across all regions of the country (World Bank, 2018).

About 80% of the Ethiopian population is economically active (10 years and above), with a
fast-growing school-age population (5-24 years) accounting for about 50% of the total
population's age structure (Nega, 2017). In the past two decades, the Ethiopian government has
shown strong commitment to expanding equitable access to higher education. The number of
students (undergraduate and graduate) enrolled in public and private higher education
increased from 585,152 (553,848 undergraduate and 31,304 postgraduate) in 2004/2005 to
860,378 (788,033 undergraduate and 72,345 postgraduate) in 2016/17 (Educational Statistics,
2017). According to the Educational Statistics Report (Educational Statistics, 2017) published by
the Ethiopian Ministry of Education, increasing numbers of students are enrolled in both public
and private educational institutions across the country. Figures 2 and 3 show this increasing
trend for both types of higher education institutions (Source: Educational Statistics, 2017).

Fig. 2 Trends in Undergraduate Enrollment in Public and Private Higher Education Institutions
(Source: Educational Statistics, 2017)

Fig.3 Trends in Postgraduate Enrollment in Public and Private Higher Education Institutions
(Source: Educational Statistics, 2017)

Although the number of students in Ethiopian higher education institutions is increasing,
coverage remains very low compared to the demand for higher education in the country.

2.4.2 Application of E-learning in EHEIs

As part of improving the quality of the teaching and learning process in EHEIs, several
initiatives have been implemented, including harmonizing curricula for all undergraduate
programmes, adopting a modular approach to course delivery so as to enhance active learning,
instituting Quality Assurance Offices at each university, and equipping libraries and
laboratories. In spite of efforts to avail educational resources to EHEIs, universities still
report insufficient supplies of text and reference books, laboratory and workshop equipment, and
access to ICT facilities (Educational Statistics, 2017). To alleviate these pressing problems, the
application of e-learning in the teaching and learning process has been persistently reported as an
important strategy (Educational Statistics, 2017). However, e-learning initiatives in Ethiopian
higher education institutions are challenged by several multi-dimensional factors (Ketema and
Nirmala, 2015; Beyene, 2010).

One of the early initiatives in Ethiopian higher education institutions was the partnership of
Curtin University of Technology, the African Virtual University, and Addis Ababa University. In 2003,
Curtin University of Technology (Australia) launched an e-learning project in Ethiopia,
Kenya, Rwanda, and Tanzania with an investment of $5 million, in partnership with the African
Virtual University (AVU), offering undergraduate business degree programs customized to the
local needs of the African institutions' educational curricula. The basic objective of this project
was to improve access to education in these selected African countries. It aimed to address a
number of problems related to students' poor access to higher education; large numbers
of high school dropouts and an unskilled labor force; quality problems in private higher education
institutions; and Africa's isolation from the global knowledge society. The launch of this
e-learning project in African universities was intended to meet the growing demand for
quality education and access to education at the tertiary level.

This e-learning project, funded by Australian Aid (AusAID) and the World Bank Virtual
Colombo Plan, started at Addis Ababa University in Ethiopia in December 2003, marking the
inception of e-learning in Ethiopia. Addis Ababa University is one of the oldest universities in
Africa; Curtin University of Technology is Western Australia's largest university; and AVU is an
inter-governmental organization based in Nairobi with over 57 e-learning centers in 27 African
countries. AVU had launched e-learning degree programs with the Royal Melbourne Institute
of Technology in Australia and Université Laval in Canada, along with short-term certificate
courses from the New Jersey Institute of Technology and Indiana University of Pennsylvania
(UNDP, 2006). The aim was to set a road map, build the capacity of these institutions in
developing and managing e-learning, and help them continue the programs independently
afterwards. Learning materials were accessed through various e-learning technologies such as
WebCT, CD-ROMs, video-recorded lectures, e-learning computer laboratories, the AVU Digital
Library, Curtin's Online Library, local facilitators, and visiting professors from Curtin. Although
the program started with much enthusiasm and high expectations of success, the project failed
and was terminated in 2008, having graduated only 190 undergraduate students at Addis Ababa
University (Belwal et al., 2010).

According to Ketema & Nirmala (2015), Adama Science and Technology University (ASTU),
one of the pioneers of e-learning in Ethiopia, has used an e-learning system to support its
teaching and learning process since October 15, 2009. The University uses Moodle as its
Learning Management System. As of 2015, a total of 72 courses had been uploaded to the LMS
and 5,356 users were registered, of whom 3,461 were students, 108 were course instructors, 5 were
system administrators, and the rest were guests. The center delivered 32 online trial examinations,
20 online quizzes, 15 online final examinations, 57 forums, and 15 assignments (information
obtained from ASTU's e-learning center, 2018). However, e-learning implementation and its
sustained usage have mainly been challenged by poor ICT infrastructure and resistance by
traditionalists (those in favor of the traditional teaching and learning process). In 2015, the
University signed an agreement with the Korea International Cooperation Agency (KOICA) to
work on new e-learning initiatives by investing in e-learning content development for all
courses of the university's schools and colleges, building data centers, establishing dedicated e-learning
labs and e-studios, training all e-learning stakeholders, and availing relevant e-learning
infrastructure in order to successfully implement the university's LMS (Moodle) in support of
the teaching and learning process of the university. This entailed rework costs, as the former
e-learning project sponsored by GTZ was discarded. Even though a dedicated e-learning center
was established at the University, a study by Ketema & Nirmala (2015) showed that the
e-learning initiative started in 2015 at ASTU is still challenged by various factors in sustainably
implementing e-learning university-wide.

Other EHEIs, such as Jimma University, Hawassa University, Bahirdar University, Addis Ababa
University, Arbaminch University, Ambo University, Ethiopian Civil Service University,
Haramaya University, and PESC Information Systems College, have adopted Moodle as their
LMS to support their teaching and learning process with e-learning (Hagos et al., 2018; Ketema
& Nirmala, 2015; and information obtained from the Ethiopian Ministry of Education and the
concerned e-learning offices of the HEIs). However, the implementation of e-learning to support
the teaching and learning process still poses many challenges. Beyene (2010; 2006) explained
that the implementation of e-learning systems in EHEIs is characterized by various failure
factors, and that few studies have been conducted from which to infer the practices of e-learning
at EHEIs. He further stated that problems associated with e-learning in EHEIs are usually
perceived as technological factors. To this end, he recommended that further studies be
conducted to develop a holistic model based on the different views of e-learning stakeholders in
EHEIs.

Hagos & Negash (2014) conducted a study of students' adoption of e-learning
systems in EHEIs based on the Technology Acceptance Model (TAM) and found that
students' continued use of e-learning depends on acceptance factors such as the perceived
usefulness and perceived ease of use of the system.
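As background, TAM's core relationships can be written in a simplified regression form. The sketch below is a generic illustration of the model Hagos & Negash applied, not their estimated model; the coefficients and error terms are shown only symbolically:

\begin{align}
PU &= \beta_{1}\, PEOU + \varepsilon_{1} \\
BI &= \beta_{2}\, PU + \beta_{3}\, PEOU + \varepsilon_{2} \\
U  &= \beta_{4}\, BI + \varepsilon_{3}
\end{align}

where $PU$ is perceived usefulness, $PEOU$ is perceived ease of use, $BI$ is behavioral intention to use, and $U$ is actual use. Perceived ease of use thus influences use both directly through intention and indirectly through perceived usefulness.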

It can be understood that there is a growing demand for e-learning among EHEIs to
support the teaching and learning process. However, huge challenges remain that hinder the
successful implementation of e-learning systems and their sustained usage. This study attempted
to address this problem by identifying e-learning systems success factors and proposing a success
model.

2.5. Technology/ICT Mediated Educational/Learning Models and Concepts

This section describes pedagogically initiated models and concepts focusing on new educational
technologies (including ICT-mediated learning) to be integrated into the teaching and learning
process of HEIs.

2.5.1. Technological Pedagogical Content Knowledge (TPACK)

This framework was built on Lee Shulman‘s (1986, 1987) construct of pedagogical content
knowledge (PCK) to include technology knowledge. This framework is summarized based on
Mishra & Koehler (2006) and Koehler et al. (2013) as follows:

The development of TPACK by teachers is vital to integrating technology into teaching. As
educators emphasize, teaching is a complicated practice that involves interweaving many
kinds of specialized knowledge. Teaching with technology is further complicated when the
challenges newer technologies present to teachers are considered. The term technology in the
framework applies to both analog and digital technologies, older and newer.
Integrating technology into teaching involves three core components: content,
pedagogy, and technology, including the relationships among and between them; however, the
inter-relationships among the three components are context dependent. These three knowledge
bases (content, pedagogy, and technology) form the TPACK framework. Equally important to
the knowledge-base components of the model are the interactions between and among these
bodies of knowledge, represented as PCK (pedagogical content knowledge), TCK (technological
content knowledge), TPK (technological pedagogical knowledge), and TPACK (technology,
pedagogy, and content knowledge).

The following points further describe these model components and their interactions, based
on Angeli and Valanides (2009), Koehler & Mishra (2009), Koehler & Mishra (2008), and
Graham (2011):

1. Content knowledge (CK) is concerned with the knowledge of the teachers with respect
to the subject matter of the course to be learned or taught. Knowledge of content is of
critical importance for teachers. As Shulman (1986) noted, this knowledge includes
concepts, theories, ideas, organizational frameworks, evidence and proof, as well as
established practices and approaches toward developing such knowledge.
2. Pedagogical Knowledge (PK) is the knowledge of the teacher with regard to the
processes and methods of teaching and learning, which mainly includes understanding
how students learn, classroom management skills, lesson planning, and overall student
assessment. A teacher with this base of knowledge is in a better position to
understand how students form knowledge and acquire skills.
3. Pedagogical Content Knowledge (PCK) is related to Shulman's (1986, 1987) idea of
knowledge of pedagogy as applicable to teaching specific content. According to Shulman
(1986), the teacher interprets the subject matter, finds multiple ways to represent it, and
adapts instructional materials to alternative conceptions and students' prior
knowledge. PCK encompasses the main functions of teaching, curriculum, learning,
and assessment, and the links among curriculum, assessment, and pedagogy.
4. Technology Knowledge (TK) is always in a state of dynamic change, more so than the
other two core knowledge components of the TPACK framework (pedagogy and
content). This component is more difficult to define than the others because of its
dynamic and changing nature. The definition of TK provided in the TPACK framework
is similar to that of Fluency of Information Technology (FITness), as proposed by the
Committee on Information Technology Literacy of the National Research Council
(NRC, 1999). They argue that FITness goes beyond traditional notions of computer
literacy by requiring persons to understand IT deeply enough to apply it productively at
work and in their day-to-day lives. FITness, therefore, concerns a deeper, meaningful
understanding and mastery of IT for information processing, communication, and
problem solving, rather than mere computer literacy.
5. Technological Content Knowledge (TCK) reflects the profound historical relationship
between technology and content. For instance, the continuous advancement of the digital
computer meaningfully impacted the nature of physics and mathematics, emphasizing
the role of simulation in mastering phenomena. Technological advances have also
provided new metaphors for understanding the world. Viewing the heart as a pump, or
the brain as an information-processing machine, are just some of the ways in which
technologies have provided new perspectives for understanding phenomena. These
representational and metaphorical connections are not superficial; they have often led to
fundamental changes in the nature of the disciplines. Course instructors and subject-matter
experts need to understand thoroughly which specific technologies are best suited to
addressing subject-matter learning in their domains, and how the content impacts the
technology, or vice versa.
6. Technological Pedagogical Knowledge (TPK) is an understanding of how teaching and
learning can change when particular technologies are employed in particular ways or
situations. It requires knowing the pedagogical affordances and constraints of a range of
technological tools as they relate to disciplinarily and developmentally appropriate
pedagogical designs and strategies. Building TPK requires a deeper understanding of the
constraints and affordances of technologies and of the disciplinary contexts in which they
are applied. TPK is vital because most software programs are not designed for specific
educational purposes. Software programs such as the Microsoft Office suite (Word,
PowerPoint, Excel, Entourage, and MSN Messenger) are usually designed for business
environments, and web-based technologies such as blogs or podcasts are designed for
other purposes, such as entertainment and social networking. Course instructors need to
overcome functional fixedness (Duncker, 1945) and develop skills to fit technologies to
the context or customize them for pedagogical purposes. To this end, TPK requires a
forward-looking and creative approach to using technology for educational activities,
not for its own sake but for better use in the teaching and learning process to enhance
student understanding.

Technological Pedagogical Content Knowledge (TPACK) is an evolving form of knowledge
beyond the three "core" components (content, pedagogy, and technology); it is an
understanding of the interactions among content, pedagogy, and technology knowledge (Koehler
et al., 2013). In the context of understanding technology for teaching, TPACK goes beyond
knowledge of each of the three concepts individually. Most importantly, TPACK is
concerned with teaching with technology, requiring an understanding of the representation of
concepts using technologies; pedagogical techniques that use technologies in constructive ways
to teach content; knowledge of what makes concepts difficult or easy to learn and how
technology can help redress some of the problems that students face; knowledge of students'
prior knowledge and theories of epistemology; and knowledge of how technologies can be used
to build on existing knowledge to develop new epistemologies or strengthen old ones (Koehler
& Mishra, 2009). By simultaneously integrating knowledge of
technology, pedagogy, content, and the contexts within which they function, expert teachers
bring TPACK into play any time they teach (Koehler & Mishra, 2008). Each situation presented
to teachers is a unique combination of these three factors, and accordingly, there is no single
technological solution that applies for every teacher, every course, or every view of teaching
(Koehler et al., 2013). Rather, solutions lie in the ability of a teacher to flexibly navigate the
spaces defined by the three elements of content, pedagogy, and technology, and the complex
interactions among these elements in specific contexts (Koehler & Mishra, 2009). Ignoring the
complexity inherent in each knowledge component or the complexities of the relationships
among the components can lead to oversimplified solutions or failure (Koehler et al., 2013;
Mishra & Koehler, 2006).

Teaching with technology is difficult to do well. The TPACK framework suggests that content,
pedagogy, technology, and teaching/learning contexts have roles to play individually and
together (Koehler & Mishra, 2009). Teaching successfully with technology requires continually
creating, maintaining, and re-establishing a dynamic equilibrium among all components. It is
worth noting that a range of factors influences how this equilibrium is reached (Koehler &
Mishra, 2008). Figure 4 presents the TPACK framework.

Fig. 4 The TPACK Framework (Koehler et al., 2013)

Several studies stress that there is no "one best way" to integrate technology into the
curriculum (Hung, 2001). Rather, integration efforts should be creatively designed or structured
for particular subject-matter ideas in specific classroom contexts (Koehler et al., 2013).
Furthermore, the framework is criticized because TPACK lacks theoretical clarity and parsimony
(the balance between parsimony and complexity in the model is not clear), lacks operational
constructs, and provides little value for researchers interested in examining the effect of each
component on the others beyond describing the relationships (Graham, 2011). It is not even clear
which specific technologies are considered. To this end, other researchers such as Angeli and
Valanides (2009) used the term ICT-TPCK to signal a focus on the use of information and
communication technologies (ICT). However, ICT-TPCK replicates the same inherent
limitations observed in TPACK, apart from emphasizing the ICT component (Graham, 2011).
Although the TPACK framework, with its inherent limitations, is predominantly concerned with
how teachers can integrate technology into teaching, this study considers the TPACK concepts
highly significant when assessing e-learning systems success from the instructors' perspective.
However, since the objective of the study is to develop a holistic e-learning systems success
model beyond the concerns of instructors or teachers, TPACK only partially addresses the
study objective.

2.5.2 Computer Self-Efficacy Concept

Before discussing computer self-efficacy, it is important to first discuss the concept of
self-efficacy, an important construct in social psychology. According to Bandura
(1986), self-efficacy is defined as one's judgment of his or her capabilities to organize and
perform the courses of action required to achieve a desired goal (Eachus & Cassidy, 2006). This
definition implies that the concept of self-efficacy involves both distinguishing one's component
skills and the capability to organize and execute courses of action (Bandura, 1986). Self-efficacy,
or one's self-assessed perceptions, impacts individuals' decisions about the behaviors
they undertake, as well as their emotional responses and actual performance in certain courses of
action (Bandura, 1982). Self-efficacy depends on various factors, such as prior experience of
success or failure, vicarious or shocking experiences, lessons taken from others' successes or
failures, and anxiety or other emotional states related to a certain event or activity
(Bandura, 1986). Since self-efficacy concerns self-perceptions with respect to specific
behaviors, the self-efficacy construct is deemed to be case or situation dependent. The impact of
self-efficacy has been studied in a variety of situations, such as academic achievement (Cassidy &
Eachus, 2002), health issues (Schwarzer, 1992), selection of investment portfolios or stocks
(Eachus & Cassidy, 2006), and level of computer use (Cassidy & Eachus, 2002).

Derived from Bandura's (1986) social cognitive theory, in which the self-efficacy
construct is defined, computer self-efficacy can be defined as a judgment of one's ability to
apply computer skills to perform a given task, or one's emotional response to computers (Compeau
& Higgins, 1995; Venkatesh & Davis, 1996). From this conceptualization, it can be
understood that computer self-efficacy concerns what the individual can do in the future
with the specific computer skills he or she possesses; judgment is about what one can do based on
past experience (Bandura, 1986). In the computing context, the question is whether the
individual can apply his or her specific computer skills to a broader task: for example, can the
individual apply basic computer skills in the web environment, manage external devices with the
computer, handle other related broader computer tasks, or manage accounting formulas in a
spreadsheet? (Cassidy & Eachus, 2002). Computer self-efficacy, therefore, relates more to overall
computer management ability than to specific computer skills. Bandura (1986) emphasized that
the perception that one possesses the capability to perform a given task increases the likelihood
of accomplishing the task successfully. The same is true in the IT usage context (Compeau &
Higgins, 1995).

A study shows that as many as fifty percent of first-year students have some kind of computer-related
phobia (Saade & Kira, 2009). This indicates that although students possess some
basic computer skills, they tend to judge themselves incapable of extending those
skills to a broader computer task such as online learning. Another similar study reports
that when users acquire some form of computer experience, they may change their
beliefs, attitudes, and computer or IT usage, implying that one's previous computer usage
experience is a critical factor in further usage for broader IT tasks. Various studies,
such as Compeau et al. (1999), have found that the computer self-efficacy construct plays a critical
role in further IT use. Cassidy and Eachus (2002) also found that self-efficacy plays a
critical role in IT usage. In a study identifying success metrics for online learning systems,
Blocher et al. (2002) concluded that students who have confidence in their
computer skills are more likely to finish an online program than those with less confidence,
even when both have similar prior computer skills. Thus, it can be inferred that the concept of
computer self-efficacy is of paramount significance to the success of e-learning systems.

2.5.3 Badrul Khan’s Framework for Blended Learning

Singh (2003) emphasized that Badrul Khan's blended e-learning framework, also known
as Khan's Octagonal Framework, is a useful framework for blended learning. The framework is
shown in Figure 5. Singh (2003) also stresses that Khan's framework serves as a guide to plan,
develop, deliver, manage, and evaluate blended learning programs. He further indicated that
higher learning institutions requiring workable strategies for an effective teaching and
learning process employing e-learning have to consider the different factors that enhance
understanding of, and ease of access to, learning, which will ultimately result in a high return on
investment.

Fig. 5 Khan‘s Octagonal Framework (Khan, 2005)

Al-Huwail et al. (2007) argued that the major limitation of Khan's Octagonal Framework is that it
does not adequately include cultural and contextual factors. Although the framework addresses
the cultural aspect under its Ethics group, it is criticized because this covers only one element of
the critical cultural factors of successful blended e-learning. Al-Huwail et al. (2007) further
suggested that the framework needs to include other factors depending on the contextual issues
of the phenomena. It was also noted that the framework does not clearly indicate the impact
level of each factor and does not address the needs of HEIs intending to go purely online.

2.6 Information Systems Success Models

Information systems are developed using information technology in order to enhance the performance of individuals and organizations. According to Petter et al. (2008), the impacts of IT on organizational performance are often indirect and influenced by people, the organization, and other environmental factors. Measuring information systems success (i.e., at both individual and organizational levels) goes beyond conventional financial measures such as return on investment (Rubin, 2004). To measure the benefits of information systems, organizations have adopted different methodologies such as balanced scorecards and benchmarking (Seddon et al., 2002; Kaplan & Norton, 1996). However, measurement of information systems success is both complex and elusive (DeLone & McLean, 2003). To this end, researchers have tried to develop various models to explain metrics that can determine the success of information systems.

The technology acceptance model of Davis (1989), derived from the Theory of Reasoned Action (TRA), is one of the most robust models explaining the factors that contribute to the acceptance of information systems. Petter et al. (2013; 2008) argue that users' acceptance of a given information system does not imply the success of that system, even if acceptance is a necessary precondition for success (Petter et al., 2008). They further stated that "early attempts to define information systems success were ill-defined due to the complex, interdependent, and multi-dimensional nature of IS success" (p. 237). Figure 6 depicts the basic concepts underlying acceptance models.

[Figure: Individual reactions to using information technology lead to intentions to use information technology, which lead to actual use of information technology.]

Fig. 6 Basic Concepts Underlying Acceptance Models (Venkatesh et al., 2003)

Recognizing the gap in the literature and practice regarding measures of information systems success, DeLone and McLean (1992) reviewed theoretical and empirical IS research published during the 1970s and 1980s relating to measures of information systems success and came up with a taxonomy of IS success. In their 1992 ISS model, they identified six constructs of ISS, namely: system quality, information quality, use, user satisfaction, individual impact, and organizational impact. Figure 7 shows DeLone and McLean's original ISS model.

According to the original ISS model (i.e., DeLone & McLean, 1992), an information system is first created, containing various features depending on the type of system, which can be characterized by various degrees of system quality and information quality. Next, depending on their use of the system, users are satisfied or dissatisfied with the system or its information outputs. Use and satisfaction in turn influence the user in his or her task, and these individual impacts collectively result in organizational impacts.

[Figure: System Quality and Information Quality jointly influence Use and User Satisfaction, which affect each other and lead to Individual Impact and, in turn, Organizational Impact.]

Fig. 7 The Original ISS Model (DeLone and McLean, 1992)

After the DeLone and McLean 1992 model was published, some IS researchers began criticizing the model and proposed modifications to it (Pitt et al., 1995; Seddon & Kiew, 1996; Seddon, 1997; Jiang et al., 2002; Jennex & Olfman, 2002; Zhu & Kraemer, 2005; Kulkarni et al., 2006). Considering the modifications proposed by a variety of researchers and the ever-changing role of IS in impacting individuals and organizations, DeLone and McLean further reviewed the empirical research published by IS researchers in the years since 1992 and released an updated model in 2003, re-named the updated ISS model or the DeLone and McLean (2003) model. The updated ISS model is shown in Figure 8. This model incorporated the proposition of Pitt et al. (1995) to include Service Quality as a new independent variable.

A further change made in the updated model, based on criticisms from IS researchers, was to replace the individual impact and organizational impact constructs with a new construct, Net Benefits. The justification for this revision is that an information system can affect levels beyond individual and organizational impacts, such as workgroups, industries, consumers, and societies (DeLone & McLean, 2003; Seddon et al., 1999). A final revision was made to the Use construct, for which DeLone and McLean provided further explanation: the more satisfied the user is with the system, the higher the intention to use it, which will ultimately affect actual use. They further explained that the Use construct must be retained.

Accordingly, the ISS model remains one of the most robust models for organizing information systems success measures in general, and e-learning systems success measures in particular (Yakubu & Dasuki, 2018; Aparicio et al., 2016; Bindhu & Manohar, 2015; Wu & Zhang, 2014; Petter et al., 2013; Dorbat, 2014; Petter et al., 2008; Lin, 2007; Holsapple & Lee-Post, 2006). The dimensions of the updated ISS model are defined by its authors as follows:

System Quality: explains the desirable characteristics of an IS, which measure the technical success of the system. These characteristics are further explained in terms of ease of use, system flexibility, system reliability, ease of learning, sophistication, and response time.

Information Quality: explains the characteristics of the system outputs in terms of accuracy, conciseness, completeness, currency, and usability.

Service Quality: explains the support provided by the IT department or IT services to the users of the system. It is further explained in terms of the timely response, technical competence, and empathy of the IT personnel providing the support service.

System Use: explains the extent to which users of the system use it to perform their tasks effectively and efficiently. It is measured in terms of the frequency of use of the system.

User Satisfaction: explains the level of satisfaction with the use of the system, as influenced by the system quality, service quality, and information quality of the information system.

Net Benefits: this measure of information systems success is explained in terms of the success of individuals, organizations, or other concerned groups resulting from use of the system, for example better decision making, enhanced productivity, reduced time and costs, and improved profits. It is the cumulative system outcome, expressed as the positive outcomes/benefits of the system weighed against its negative outcomes or consequences.

[Figure: System Quality, Information Quality, and Service Quality influence Intention to Use / Use and User Satisfaction, which affect each other and jointly determine Net Benefits, which feed back to use and satisfaction.]

Fig. 8 The Updated IS Success Model (DeLone and McLean, 2003)
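To make the structure of Figure 8 concrete, the following sketch expresses the hypothesized paths as a structural equation model specification. It is an illustrative reading of the updated ISS model, not the analysis performed in this study: the construct names are placeholders, each construct is simplified to a single observed score, and the Python package semopy is assumed as the estimation tool.

import pandas as pd
import semopy

# Structural paths of the updated D&M ISS model (Fig. 8):
# the three quality constructs predict Use and User Satisfaction,
# which in turn predict Net Benefits.
ISS_SPEC = """
Use ~ SystemQuality + InformationQuality + ServiceQuality
UserSatisfaction ~ SystemQuality + InformationQuality + ServiceQuality + Use
NetBenefits ~ Use + UserSatisfaction
"""

def fit_iss_model(data: pd.DataFrame) -> pd.DataFrame:
    """Estimate path coefficients; `data` holds one column per construct."""
    model = semopy.Model(ISS_SPEC)
    model.fit(data)
    return model.inspect()  # estimates, standard errors, p-values

In a full application, each construct would be a latent variable measured by several survey items (for example, the measures in Tables 2 to 5 below), declared with measurement equations rather than single columns.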

2.6.1 Empirical Studies Validating the DeLone and McLean Models

DeLone & McLean pointed out that the practical application of the ISS model depends on the organizational context: researchers adopting the ISS model in their studies must closely understand the organization under study. They further explained that the model is applicable to a variety of information systems and organizational contexts, but that the limits of the model are not yet well known, which requires further testing and validation of all the constructs and their inter-relationships. Table 1 lists some of the scientific IS papers based on the original and updated ISS models.

Table 1: IS Empirical Studies Validating the DeLone & McLean Models

Scientific IS papers employing the original ISS model (1992-2002): Vlahos & Ferratt (1995); Yoon et al. (1995); Choe (1996); D'Ambra & Rice (2001); Hong et al. (2001); Bharati (2002); Caldeira & Ward (2002); Chau & Hu (2002); Devaraj et al. (2002); Gelderman (2002); Kim et al. (2002); Palmer (2002); Rai et al. (2002); Shaw et al. (2002)

Scientific IS papers employing the updated ISS model: Wu & Wang (2006); Hsieh & Wang (2007); Klein (2007); Leclercq (2007); Lin (2008); Lee (2009); Bhuasiri et al. (2012); Lwoga (2014); Bindhu & Manohar (2015); Aparicio (2016b); Tossy et al. (2017); Yakubu & Dasuki (2018)

DeLone and McLean (2003) emphasized that the relationships among the constructs of the model need to be continuously tested, stressing that the model remains open to further validation and extension. In response to this call for further research, many IS studies validated the model and came up with different results, finding both significant and non-significant relationships between the constructs of the ISS model. Among the empirical studies listed in Table 1 above, positive relationships were reported between the System Quality and User Satisfaction constructs (Chiu et al., 2007; Halawi et al., 2007; Hsieh & Wang, 2007; Leclercq, 2007; Kulkarni et al., 2006; Wu & Wang, 2006; Almutairi & Subramanian, 2005; Iivari, 2005; McGill & Klobas, 2005; Wixom & Todd, 2005; McGill et al., 2003; Bharati, 2002; Devaraj et al., 2002; Gelderman, 2002; Kim et al., 2002; Palmer, 2002; Rai et al., 2002; Guimaraes et al., 1996; Yoon et al., 1995; Seddon & Yip, 1992). Similarly, Seddon and Kiew (1996) studied 104 users of a university accounting information system and found a significant relationship between the two constructs. Rai et al. (2002) surveyed 274 users of a university student information system and found that the path coefficients between the two constructs were significant. In other IS studies, a positive relationship was observed between the System Quality and Use constructs, for example in Halawi et al. (2007), Hsieh & Wang (2007), Rai et al. (2002), Hong et al. (2002), Venkatesh & Morris (2000), and Igbaria et al. (1997). In contrast, non-significant relationships between the System Quality and Use constructs were reported in the empirical studies of Klein (2007), McGill et al. (2003), Lucas & Spitler (1999), Gefen & Keil (1998), Straub et al. (1995), Markus & Keil (1994), and Subramanian (1994).

A positive relationship between Information Quality and User Satisfaction was reported in studies by Chiu et al. (2007), Halawi et al. (2007), Leclercq (2007), Kulkarni et al. (2006), Wu & Wang (2006), Almutairi & Subramanian (2005), Iivari (2005), Wixom & Todd (2005), McGill et al. (2003), Bharati (2002), Kim et al. (2002), Palmer (2002), Rai et al. (2002), Seddon & Kiew (1996), and Seddon & Yip (1992). With regard to the relationship between the Information Quality and Use constructs, a positive relationship was reported in several studies, such as Halawi et al. (2007), Kositanurit et al. (2006), McGill et al. (2003), and Rai et al. (2002); in contrast, a non-significant relationship was reported in studies by Iivari (2005) and McGill et al. (2003).

Halawi et al. (2007), Leclercq (2007), Shaw et al. (2002), Yoon et al. (1995), Kettinger & Lee (1994), and Leonard & Sinha (1993) reported a positive relationship between the Service Quality and User Satisfaction constructs, whereas studies by Chiu et al. (2007), Marble (2003), Aladwani (2002), Palmer (2002), and Choe (1996) reported a non-significant relationship between these constructs. The relationship between the Service Quality and Use constructs was also examined and reported as non-significant in studies by Halawi et al. (2007), Kositanurit et al. (2006), and Choe (1996).

The relationship between the User Satisfaction and Use constructs was also examined by a number of studies, with positive relationships reported by Chiu et al. (2007), Halawi et al. (2007), Bharati & Chaudhury (2006), Kulkarni et al. (2006), Wu & Wang (2006), Iivari (2005), Wixom & Todd (2005), McGill et al. (2003), Kim et al. (2002), Rai et al. (2002), Torkzadeh & Doll (1999), Khalil & Elkordy (1999), Winter et al. (1998), Yuthas & Young (1998), Abdul (1997), Guimaraes & Igbaria (1997), and Igbaria & Tan (1997). Contrary to this, a non-significant relationship between these constructs was reported in studies by Vlahos et al. (2004) and Ang & Soh (1997).

Empirical studies by Halawi et al. (2007), Iivari (2005), McGill & Klobas (2005), Vlahos et al. (2004), and McGill et al. (2003) reported a positive relationship between the User Satisfaction and Net Benefits constructs.

A positive relationship was also reported between the Use and Net Benefits constructs in studies such as Halawi et al. (2007), Burton & Straub (2006), Kositanurit et al. (2006), Almutairi & Subramanian (2005), and Vlahos et al. (2004). Contrary to these studies, a non-significant relationship between these constructs was reported by Wu & Wang (2006), Iivari (2005), and McGill et al. (2003).

In response to the above empirical results, DeLone & McLean (2003) recommended that researchers challenge the model across various settings, implying that the model is context dependent.

2.7. E-learning Systems Success Models

E-learning systems are a special form of information system. To this end, information systems success models and theories are commonly used in studies that investigate e-learning systems success (Tossy et al., 2017; Aparicio et al., 2016a; Bindhu & Manohar, 2015; Dorbat, 2014; Wang et al., 2007).

One of the e-learning studies employing the updated ISS model was that of Holsapple and Lee-Post. The authors developed an e-learning success model in 2006 by adapting the updated D&M model to measure the success of e-learning systems across three components of the e-learning development process: system design, system delivery, and system outcome (see Figure 9). The study used action research to identify success factors in designing, developing, and delivering e-learning initiatives, taking 330 traditional students and 39 online students across two semesters as a case study. The authors recommended further research to validate the model in a variety of contexts employing survey research methods.

[Figure: the model links System Design, System Delivery, and System Outcome.
System Design: System Quality (easy-to-use, user friendly, stable, secure, fast, responsive); Information Quality (well organized, effectively presented, of the right length, clearly written, useful, up-to-date); Service Quality (prompt, responsive, fair, knowledgeable, available).
System Delivery: Use (PowerPoint slides, audio, script, discussion board, case studies, practice problems, Excel tutorials, assignments, practice exams); User Satisfaction (overall satisfaction, enjoyable experience, overall success, recommend to others).
System Outcome: Net Benefits, with positive aspects (enhanced learning, empowered, time savings, academic success) and negative aspects (lack of contact, isolation, quality concerns, technology dependence).]

Fig. 9 E-learning Success Model (Holsapple and Lee-Post, 2006)

In 2009, Lee-Post further extended the e-learning success model of Holsapple and Lee-Post (2006) through an action research and prototyping strategy, developing an online system for a quantitative undergraduate business course and measuring success factors across the three processes of the model adapted from the updated DeLone & McLean model: system design, system delivery, and system outcome (see Figure 10). The study found that students and instructors need strong institutional support in order to enhance e-learning success. However, the model is still limited in scope in that it assumes students and instructors have strong e-learning experience and that the e-learning environment is conducive to running online courses, which may not hold in HEIs characterized by less developed e-learning infrastructure, such as those in developing countries. Accordingly, the author recommended further research for model testing and validation in different contexts.

[Figure: spanning students, instructors, and the institution, System Design (system quality, information quality, service quality) feeds System Delivery (use), which leads to the Student Outcome (net benefits, satisfaction) and the Institutional Outcome (cost savings, higher rankings, increased enrollment).]

Fig. 10 Extended E-learning Success Model (Lee-Post, 2009)

Lin (2008) identified success factors for the e-learning systems success of an online community by adapting the updated DeLone & McLean model to the social context of online communities. The success constructs of the D&M model were replaced by social factors: the Use construct was replaced by a Sense of Belonging construct; the System Quality and Information Quality constructs were categorized under System Characteristics; and two new constructs, Trust and Social Usefulness, were introduced under Social Factors. The findings show that the System Characteristics constructs have a significant effect on Sense of Belonging, while among the social factors only Trust has a significant effect on Sense of Belonging (see Figure 11). In effect, the study was predominantly limited to the social-factors side of e-learning systems success and lacks a holistic view.

[Figure: System characteristics (system quality, information quality) and Social factors (trust, social usefulness) influence Sense of belonging, which leads to Member satisfaction and Member loyalty; non-significant paths are marked.]

Fig. 11 Virtual Communities' Model (Lin, 2008)

Hassanzadeh et al. (2012) adapted the updated D&M model and attempted to contextualize it for e-learning systems success. The System Quality construct of the ISS model was branched out into Technical System Quality and Educational System Quality, and Information Quality was replaced by Content and Information Quality; the Service Quality, User Satisfaction, Intention to Use, Use of System, and Benefits of System Use constructs were retained. They called their model the "Measuring E-learning Systems Success" model (see Figure 12). The study took the opinions of 33 experts to further validate the constructs of the model and then tested the relationships among the constructs in five universities (namely Amirkabir University, Tehran University, Shahid Beheshti University, the Iran University of Science & Technology, and Khaje Nasir Toosi University of Technology) by analyzing quantitative data collected from a total of 369 respondents (instructors and students). The findings show that the Technical System Quality, Content and Information Quality, and Educational System Quality constructs have significant effects on the User Satisfaction construct; Educational System Quality has a significant influence on Service Quality and vice versa; User Satisfaction has a significant influence on Intention to Use; and Intention to Use in turn has a significant effect on Use of System. In short, the solid arrows in Figure 12 indicate significant relationships between the respective constructs, whereas the Service Quality construct has no significant effect on Intention to Use (depicted with dotted lines). The researchers recommended that future research continuously test the interrelationships among the model constructs in different contexts in order to generalize the findings to a wider population and other research settings.

[Figure: Technical System Quality, Content and Information Quality, Educational System Quality, and System Service Quality influence User Satisfaction and Intention to Use; Intention to Use leads to Use of System, which yields Benefits of System Use, goal achievement, and loyalty.]

Fig. 12 Measuring E-learning Systems Success Model (Hassanzadeh et al., 2012)

In another study, Bhuasiri et al. (2012) assessed the critical factors determining e-learning systems success using motivational theories, social cognitive theory, the technology acceptance model (TAM), and the updated DeLone & McLean model. Learners' Characteristics, Instructors' Characteristics, Institution and Service Quality, Infrastructure and System Quality, Course and Information Quality, and Extrinsic Motivation were identified as determinant factors for e-learning systems success (see Figure 13). The study collected 76 usable responses from faculty and ICT experts, employing the Delphi method and the Analytic Hierarchy Process (AHP) approach. However, the study was limited to examining these critical success factors from the perspectives of only these two groups of e-learning stakeholders. The authors stated that other researchers need to take other groups of e-learning stakeholders into account to develop a holistic e-learning systems success model.
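Since AHP is central to that study's method, the sketch below illustrates its core computation: deriving priority weights for factors from a pairwise comparison matrix via the principal eigenvector, together with Saaty's consistency check. It is a generic illustration of AHP in Python, not Bhuasiri et al.'s data or code; the example matrix and its interpretation are invented for demonstration.

import numpy as np

def ahp_weights(pairwise: np.ndarray):
    """Priority weights and consistency ratio of a pairwise comparison matrix."""
    eigvals, eigvecs = np.linalg.eig(pairwise)
    k = int(np.argmax(eigvals.real))              # index of principal eigenvalue
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()                      # normalize weights to sum to 1
    n = pairwise.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)          # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]           # Saaty's random index (n = 3..5)
    return weights, ci / ri                       # CR < 0.10 is conventionally acceptable

# Hypothetical comparison of three dimensions (e.g. learner, instructor, system):
m = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, cr = ahp_weights(m)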

[Figure: a hierarchy from the objective (critical success factors of e-learning in developing countries) down to dimensions and factors.
Learners' characteristics: computer self-efficacy, Internet self-efficacy, attitude toward e-learning.
Instructors' characteristics: timely response, self-efficacy, technology control, focus on interaction, attitude toward students, interaction fairness.
Institution and service quality: computer training, program flexibility, Internet access quality.
Infrastructure and system quality: reliability, ease of use, system functionality, system interactivity, system response.
Course and information quality: course quality, relevant content, course flexibility.
Extrinsic motivation: perceived usefulness, clear direction.]

Fig. 13 E-learning Hierarchical Model (Bhuasiri et al., 2012)

Lwoga (2014) adapted the updated DeLone & McLean model to identify factors that predict students' continual usage intention of web-based learning content management systems in Tanzania, at the Muhimbili University of Health and Allied Sciences (MUHAS). The study collected 408 usable questionnaires from first-year undergraduate students and analyzed the data using Structural Equation Modeling (SEM). The findings show that the quality factors of the adapted model (Information Quality and System Quality) have significant effects on the Perceived Usefulness and User Satisfaction constructs; Perceived Usefulness has a significant effect on User Satisfaction, and User Satisfaction in turn has a significant effect on Continual Usage Intention. However, Service Quality did not have a significant effect on the Perceived Usefulness and User Satisfaction constructs. The study introduced a new construct, Instructor Quality, into the updated DeLone & McLean model and found that it has significant effects on the Perceived Usefulness and User Satisfaction constructs (see Figure 14). However, the study was criticized for measuring students' perceptions of the University's web-based learning management system rather than actual system use (Crossler et al., 2013; DeLone & McLean, 2003).

[Figure: Information Quality, System Quality, Service Quality, and Instructor Quality influence Perceived Usefulness and User Satisfaction, which lead to Continual Usage Intention.]

Fig. 14 Lwoga's Model (2014)

Aparicio et al. (2016b) extended the original DeLone & McLean model by adding a new cultural construct, individualism/collectivism, in order to study the impact of this cultural characteristic of students on the perceived outcomes of e-learning systems use. The study employed quantitative methods to achieve its objectives and reported that learners' perceived individual impact is positively influenced by their satisfaction and by e-learning system use. However, the study lacks a holistic character because of its sole focus on students, without taking the views of other key e-learning stakeholders into account when identifying determinants of e-learning systems success. Furthermore, culture as a determinant of e-learning systems success was the basic focus of the study; students' characteristics were not explored qualitatively (for example, through mixed methods) in order to understand the determinants of e-learning system outcomes in depth by triangulating the findings.

Tossy et al. (2017) developed a model to measure the impact of e-learning on students' achievements by adopting the DeLone and McLean (2003) model and the UTAUT model (Venkatesh et al., 2003). The study covered 30 public HEIs in Tanzania that had adopted e-learning systems, with data collected from 350 students across these institutions. It used quantitative methods and was limited to public HEIs in Tanzania, so it does not represent the context of Tanzanian private HEIs. The study was also criticized for its low internal and external validity. The researchers recommended that a holistic model be developed based on the views of other e-learning stakeholders.

Yakubu and Dasuki (2018) investigated the factors that determine students' adoption of e-learning in a Nigerian university based on the updated DeLone and McLean (2003) model. The study modified the D&M model to determine the factors affecting the acceptance of an e-learning system called "Canvas" by students of a Nigerian university. It was built on the assertion that system quality, service quality, and information quality are determinants of behavioral intention to use Canvas and of user satisfaction with Canvas, both of which in turn influence actual usage. Responses from 366 students were analyzed with AMOS 22 using structural equation modeling (SEM) to test the relationships between the constructs of the study's model. The results show that of the nine proposed relationships, only five were supported. The study is limited to the attitudes of students towards accepting e-learning systems at a single Nigerian university that adopts an American curriculum. Furthermore, the study considered only students in its sample population, without considering instructors and other key e-learning stakeholders. The Net Benefits construct of the DeLone & McLean model (2003) was also not considered, so the proposed model only partially adopts the full D&M model (2003). As a consequence, the authors recommended that further studies address the limitations of their model.

2.8. The Study’s Seed Constructs
As discussed in the preceding sections, there is a lack of consensus about what determines the success of information systems in general and of e-learning systems in particular, mainly due to contextual factors. However, the Information Systems Success model developed by DeLone and McLean (2003) remains one of the most robust models for measuring information systems success, as it organizes success metrics (i.e., a taxonomy of IS success) under one model (Dwivedi et al., 2015; Urbach & Müller, 2012). The model is still criticized for some limitations, one of which is its context dependence: the model exhibits different results depending on the objective of the study and the organizational context in which the research is conducted (DeLone & McLean, 2003). To this end, a lack of reliability of the model across various settings has been consistently reported by a number of studies (Yakubu and Dasuki, 2018; Dwivedi et al., 2015; Dorbat, 2014; Wu & Zhang, 2014; Halawi et al., 2007; Kositanurit et al., 2006; Iivari, 2005). Consequently, the authors of the updated ISS model call for further research to continuously validate the model, since the limits of its application and of its constructs across a variety of contexts are not yet well known (Urbach & Müller, 2012; Petter et al., 2008; DeLone & McLean, 2003). As a concluding remark, DeLone & McLean (2003) emphasized that "context should dictate the appropriate specification and application of the D&M IS Success Model" (p. 18).

Although the ISS model exhibits different results across different contexts, Yakubu & Dasuki (2018) and Aparicio et al. (2016) emphasized that it is still advisable to use the constructs of the D&M model (2003) to assist in identifying e-learning systems success factors and to come up with an extended or new model. Building on such studies, this study used the constructs of the updated ISS model (2003) as its seed constructs, because the model organizes success metrics that can serve as 'scaffolding' for the model-building process. This technique provides better grounding of construct measures for this study's model building (Eisenhardt, 1989). In the following sub-sections, the seed constructs of the study are discussed further in relation to empirical IS studies in general and e-learning systems in particular.

2.8.1. System Quality
In the context of this study, the System Quality construct measures the desired characteristics of an e-learning system. In response to DeLone & McLean's call to validate the constructs of the ISS model, several IS studies have developed different related measures for this construct. Table 2 summarizes IS studies validating this construct, showing that there are no agreed metrics across different settings.
Table 2: IS Empirical Studies Validating the System Quality Construct

Measures of system quality | IS study type | Reference
Ease of use, ease of access, user friendly, personalization, system speed | e-learning | Wang et al. (2007)
Interactivity, dependability, adaptability, response time | knowledge management system | Wu & Wang (2006)
Attractive design, ease of use, interactivity, system speed | e-learning | Wrzesien & Raya (2010)
Interactivity, ease of access, user friendly, security, flexibility, usability, ease of maintenance | e-learning | Ozkan & Koseler (2009)
Interactivity, personalization, ease of access | e-learning | Oztekin et al. (2010)
Maintenance, ease of use, ease of access, flexibility, interactivity | e-learning | Shee & Wang (2008)
Design, flexibility, response time, aesthetics | e-learning | Ho & Dzeng (2010)
Easy-to-use, user friendly, stable, secure, fast, responsive | e-learning | Holsapple and Lee-Post (2006); Lee-Post (2009)
Ease of learning, system accuracy, sophistication | e-learning | Alsabawy (2011); Sedera et al. (2004)
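Before items such as those in Table 2 are used to score a construct, studies of this kind typically check that the items hang together; a common first step is an internal-consistency estimate such as Cronbach's alpha. The sketch below is a generic illustration of that check, not a procedure reported by the studies in the table; the data layout (one column per questionnaire item, one row per respondent) is an assumption.

import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal consistency of one construct's items (rows = respondents)."""
    k = items.shape[1]                             # number of items
    item_vars = items.var(axis=0, ddof=1).sum()    # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars / total_var)

# A common rule of thumb: alpha of 0.7 or above is taken as acceptable reliability.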

2.8.2. Information Quality


In the context of this study, information quality measures the desired characteristics of the outputs of the e-learning system. A number of IS studies have validated this construct across different settings and have come up with several different measures of it, as presented in Table 3 below.

Table 3: IS Empirical Studies Validating the Information Quality Construct

Measures of information quality | IS study type | Reference
Mandatory information & content | e-learning | Hassanzadeh et al. (2012); Wang & Liao (2008); Wang et al. (2007)
Timely information & content | e-learning | Wang et al. (2007)
Complete information & content | e-learning | Ozkan & Koseler (2009); Wang et al. (2007)
Ease of understanding, relevance, usefulness | e-learning | Oztekin et al. (2010); Wang et al. (2007); Hassanzadeh et al. (2012)
Well-structured course content and information | e-learning | Shee et al. (2008)
Useful content & information, relevant information, up-to-date content & information | e-learning | Wrzesien & Raya (2010)
Well organized, effectively presented, of the right length, clearly written, useful, up-to-date | e-learning | Holsapple and Lee-Post (2006); Lee-Post (2009); Adeyinka and Mutula (2010); Hassanzadeh et al. (2012)
Availability, usability, understandability, relevance, format, conciseness, security | e-learning | Alsabawy (2011); Sedera et al. (2004)

2.8.3. Service Quality

According to DeLone & McLean (2003) and Pitt et al. (1995), the Information Quality and System Quality constructs deal with the products of the IS function rather than its services, and poor user support may result in losing customers and a subsequent loss of business. To this end, they recommend that IS researchers include the Service Quality construct in their IS success measures to make the model more complete. Other studies have also emphasized the significance of this construct as a component of e-learning systems success (Lwoga, 2014; Eom, 2011; Freeze et al., 2010). This construct is conceptualized in this study as the degree of technical support provided to the users of an e-learning system (i.e., students and instructors).

Several studies have used the 22-item SERVQUAL instrument from the marketing field to measure the Service Quality construct in the information systems context (DeLone and McLean, 2003). The SERVQUAL instrument comprises five dimensions (tangibles, assurance, empathy, responsiveness, and reliability) used to measure service quality (Zeithaml et al., 1990). An empirical study by Jiang et al. (2001) found the SERVQUAL measure to be an important measurement tool for information systems managers. Furthermore, DeLone and McLean (2004) identified the responsiveness, effectiveness, and availability of technical support personnel as important measures of the Service Quality construct.

Although the SERVQUAL scales are valuable measurement items for the Service Quality construct of the ISS model, further work is needed to validate the items of the construct in order to evaluate the functions of IS services (DeLone & McLean, 2003; Van et al., 1997).
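To illustrate how SERVQUAL is typically scored, the sketch below computes gap scores (perceptions minus expectations) per dimension. It is a generic rendering of the gap-scoring idea behind the 22-item instrument, not an instrument used in this study; the data layout and the item-to-dimension mapping are assumptions.

import pandas as pd

DIMENSIONS = ["tangibles", "reliability", "responsiveness", "assurance", "empathy"]

def servqual_gaps(expectations: pd.DataFrame, perceptions: pd.DataFrame,
                  item_dims: pd.Series) -> pd.Series:
    """Mean gap (perception minus expectation) per SERVQUAL dimension.

    expectations/perceptions: respondents x items (Likert scores);
    item_dims: maps each item column name to one of the five dimensions.
    """
    gaps = perceptions - expectations         # item-level gap per respondent
    item_means = gaps.mean(axis=0)            # average gap per item
    return item_means.groupby(item_dims).mean().reindex(DIMENSIONS)

A negative gap on a dimension indicates that perceived service falls short of expectations there, pointing to where IT support for an e-learning system needs attention.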

2.8.4. User Satisfaction

According to Petter & McLean (2009), this construct measures "the effectiveness success" of the IS (DeLone & McLean, 2003). Shannon and Weaver defined effectiveness success as the effect of the product of the IS on the user. The User Satisfaction construct is measured in terms of the user's satisfaction with the system's performance (Petter & McLean, 2009; DeLone & McLean, 2003). IS studies in general, and e-learning studies in particular, have validated this construct and come up with a number of metrics for measuring it, implying that the construct exhibits different metrics across different settings (see Table 4).
Table 4: IS Empirical Studies Validating the User Satisfaction Construct

Measures of user satisfaction | IS study type | Reference
Attaining the user's trust | e-commerce and e-learning | Molla & Licker (2001); Hutchins & Hutchison (2008)
Winning the users' needs | e-learning | Lee et al. (2009); Sheng et al. (2008)
Sustaining a high level of user satisfaction; enjoyable experience | e-learning | Lee (2010); Holsapple & Lee-Post (2006)
Overall satisfaction as derived from system performance | e-learning | Wu et al. (2010); Freeze et al. (2010)

2.8.5. System Use

This construct was described by Petter & McLean (2009) and DeLone and McLean (2003) as the consumption of the products or services of an IS, or the frequency of use of the IS, ranging from a visit to a web site, to navigation within the site, to information retrieval, to the execution of a transaction. This construct measures the effectiveness success of the IS. It is conceptualized in this study as the frequency of use of an e-learning system, from visiting the LMS, to navigating across all the features of the site, to fully executing the teaching and learning process via the LMS (Aparicio et al., 2016b; Hassanzadeh et al., 2012; Lee-Post, 2009; Holsapple and Lee-Post, 2006).

Several studies have attempted to validate the System Use construct; in effect, frequency of use of the system is used as the measure of this construct by a number of e-learning studies (Eom, 2011; Freeze et al., 2010; Wang & Liao, 2008; Wang et al., 2007). DeLone & McLean (2003) explained that, depending on the nature, extent, quality, and appropriateness of system use, the Use construct plays a significant role in the success of an IS: more use will produce more system benefits, while declining usage may signify that the expected benefits are not being realized.
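As an illustration of how such a frequency-of-use measure could be operationalized, the sketch below aggregates hypothetical LMS access logs into per-student usage indicators. The log format (columns student_id, timestamp, action) is an assumption made for demonstration, not data from this study.

import pandas as pd

def usage_frequency(log: pd.DataFrame) -> pd.DataFrame:
    """Per-student use indicators derived from an LMS access log.

    Assumed log columns: student_id, timestamp, action
    (e.g. 'login', 'view_content', 'post_forum', 'submit_assignment').
    """
    log = log.assign(week=pd.to_datetime(log["timestamp"]).dt.isocalendar().week)
    return log.groupby("student_id").agg(
        total_actions=("action", "size"),        # overall activity volume
        distinct_actions=("action", "nunique"),  # breadth of feature use
        active_weeks=("week", "nunique"),        # regularity of use
    )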

2.8.6. Net Benefits

According to Petter & McLean (2009), this construct is intended to represent "the effect an IS has on an individual, group, organization, industry, society, etc., which is often measured in terms of organizational performance, perceived usefulness, and effect on work practices" (p. 161). DeLone & McLean (2003) stated that because the impacts of an IS range from the immediate user through workgroups to national economic accounts, the type of impact to be measured depends on the purpose and type of the system being evaluated. Accordingly, they recommended that rather than complicating the model with different forms of impact measures, it is advisable to categorize all impact measures under this single construct. Seddon (1997) coined terms such as "consequences" and "net benefits" to describe system outcomes. DeLone & McLean (2003) adopted the term "net benefits" because the term "impacts" in their original model (DeLone & McLean, 1992) could connote either positive or negative effects; the word "net" acknowledges that no outcome is fully positive without any negative side. If the positive benefits outweigh the negative consequences of the system, the Net Benefits construct will be positive, and vice versa. The "net benefits" construct therefore replaced all the "impact" constructs of the 1992 D&M ISS model. Several e-learning studies have validated this construct and come up with different metrics, as summarized in Table 5 below:

Table 5: IS Empirical Studies Validating the Net Benefits Construct

Measures of net benefits | IS study type | Reference
Improved performance, academic success, time savings | e-learning | Lwoga (2014); Eom (2011); Parker & Martin (2010); Wang et al. (2007); Holsapple & Lee-Post (2006)
Gaining new knowledge | e-learning | Noesgaard & Orngreen (2015); Ssemugabi (2006)
Empowerment | e-learning | Chiu & Wang (2008); Holsapple & Lee-Post (2006); Ssemugabi (2006); Piccoli et al. (2001)
Cost reduction | e-learning | Parker & Martin (2010); Ho & Dzeng (2010); Wang et al. (2007)

The Net Benefits construct is regarded as having weak interrelationships with the other constructs in the DeLone and McLean (2004) model. The authors emphasize that further study is needed to investigate and incorporate net benefit measures into the model and thus validate this construct along with the other constructs of the model (Urbach & Müller, 2012).

Overall, studies have emphasized that the mixed results obtained across various IS empirical studies imply that the D&M model can assist in identifying success variables across different settings, such as in developing countries (Yakubu & Dasuki, 2018; Jagannathan et al., 2018; Iivari, 2005). To this end, this study took the constructs of the updated ISS model as its seed constructs in developing its model, for the following main reasons:

 Although the model was developed with the context of developed nations in mind, it has also shown mixed but fair support in developing-country contexts, indicating that it has stood up well to time and scrutiny across different settings (Tossy et al., 2017; Lwoga, 2014; Seddon & Kiew, 1994; Rai et al., 2002; Iivari, 2005; Jagannathan et al., 2018).
 It is the most cited article in the top IS journals during the 20-year period 1992–2012 (Petter et al., 2013; Dwivedi et al., 2015; Aparicio et al., 2016).
 It constitutes a taxonomy of information systems success constructs (Urbach & Müller, 2012; Bindhu & Manohar, 2015).

 Literature reviews support the use of these success constructs as seed constructs to build a new model (Holsapple & Lee-Post, 2006; Eom, 2011; Noesgaard & Orngreen, 2015; Yakubu et al., 2018).

Although much of the literature shows that the ISS model has had fair support across different contexts (i.e., in both developed and developing countries), it has been criticized for not accounting for the users' perspective (Dorbat, 2014). To fill this gap, this study also considers the learning theories and the computer/IT-mediated educational models (concepts) discussed under Sections 2.3 and 2.5, respectively, while developing the study model.

2.9. Chapter Summary

This chapter has presented the results of the literature review related to this study. The main emphasis of the chapter is on concepts, theories, measures, and models related to the success of e-learning systems. First, a general overview of e-learning was presented from the relevant literature, and it was discussed that different cohorts define e-learning differently; accordingly, operational definitions of e-learning in general and e-learning systems in particular were provided. Secondly, although the application of e-learning systems in Ethiopia is in its infancy, the relevant literature on the challenges and current status of e-learning systems was reviewed. The last sections of the chapter were devoted to theories related to information systems success in general and e-learning systems success in particular. The literature review shows that the issue of information systems success has received significant attention from IS researchers; however, there is no consensus on what accounts for IS success. DeLone and McLean synthesized research conducted during the 1970s and 1980s related to measures of information systems success and came up with a taxonomy of IS success in their 1992 model, later releasing an updated ISS model in 2003. Although the ISS model is one of the most commonly used models to measure information systems success, it is still subject to several limitations: researchers have criticized the model for being dependent on context, including user types, technology, culture, and organizational settings. The authors of the ISS model themselves recommended that the model and its constructs be challenged across different settings. In response to this research gap and the authors' call for further validation of the model, several ISS studies have come up with different results across different settings or contexts. E-learning systems, as a special form of IS, have also received the attention of several researchers and practitioners in higher education institutions. However, what determines the success of e-learning systems is still a debated research question. To this end, several IS researchers have adopted the ISS model to develop success models for e-learning systems and have found differing results. Furthermore, the e-learning systems success models developed from the ISS model were not holistic, because most of the studies were limited to a particular context and focused on limited types of e-learning stakeholders, and because different cohorts define e-learning differently. These issues make the evaluation of e-learning systems success more complicated. Accordingly, this study reviewed relevant IS models in general, and e-learning systems success models in particular, in order to develop an e-learning systems success model based on the views of various groups of e-learning stakeholders in the context of EHEIs. To this end, the constructs of the updated ISS model were used as seed constructs or scaffolds to develop the study's model. The next chapter presents the study's overall research methodology and model-building technique.

Chapter Three

Research Methodology

3.1 Introduction

The purpose of this study is to explore e-learning systems success factors and develop a holistic model that may help Ethiopian higher education institutions evaluate the success of their e-learning systems. To achieve this objective, both qualitative and quantitative research methods were employed. A qualitative case study method using multiple data sources and triangulation is used to identify e-learning systems success factors and develop a model, while a quantitative method employing partial least squares structural equation modeling (SEM-PLS) is used to test the model.
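For orientation, the sketch below shows a deliberately simplified version of the idea behind SEM-PLS as applied at the model-testing stage: each construct is formed as a composite of its survey items, and the structural paths are then estimated by ordinary least squares regressions among the composite scores. Real SEM-PLS software iteratively re-weights the indicators; this sketch fixes equal weights purely for illustration, and all construct and item names are placeholders.

import numpy as np
import pandas as pd

def composite_scores(items, blocks):
    """Equal-weight composites: standardize items, then average within blocks."""
    z = (items - items.mean()) / items.std()
    return pd.DataFrame({c: z[cols].mean(axis=1) for c, cols in blocks.items()})

def path_coefficients(scores, paths):
    """OLS estimate of each structural equation (endogenous ~ predictors)."""
    out = {}
    for dep, preds in paths.items():
        X = np.column_stack([np.ones(len(scores))] + [scores[p] for p in preds])
        beta, *_ = np.linalg.lstsq(X, scores[dep], rcond=None)
        out[dep] = dict(zip(["intercept"] + preds, beta))
    return out

# Hypothetical usage with placeholder blocks of questionnaire items:
# scores = composite_scores(survey, {"SQ": ["sq1", "sq2"], "US": ["us1", "us2"]})
# coefs = path_coefficients(scores, {"US": ["SQ"]})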

This chapter deals with the selection of appropriate research methods and model exploration and
extension techniques used to answer the research questions.

3.2 Research Design

A research design is the overall plan or blueprint that creates a logical structure for the empirical data in order to answer the research questions (Bhattacherjee, 2012; Muijs, 2004). Similarly, Yin (2003) defines a research design as 'a logical plan for getting from here to there', where 'here' may be defined as the initial set of research questions to be answered, and 'there' is some set of conclusions (answers) about these questions; between 'here' and 'there' lie a number of major steps, including the collection and analysis of relevant data. The study's research design is depicted in Figure 15 and serves as a general research framework in line with the research objective of the study.

The following sub-sections describe each element of the research design in light of answering the research questions.

[Figure: the study's research design 'onion': philosophical position (positivism); research approach (deductive and inductive); research strategies (survey and case study); research choice (mixed methods); time horizon (cross-sectional); and data collection and data analysis techniques/procedures.]

Fig. 15 The Study's Research Design (adapted from Saunders et al., 2009)

3.2.1 Philosophical Position

Research philosophy, or research paradigm, refers to the researcher's assumptions about the development of knowledge upon which the research is to be conducted (Myers, 2013; Shanks, 2002). These assumptions determine the research strategy and the methods chosen to achieve the research objective. According to Saunders et al. (2009), there are four research paradigms: positivism, realism, interpretivism, and pragmatism. These four paradigms, along with their underlying philosophical assumptions (ontology, epistemology, and axiology), are briefly discussed in Table 6.

Table 6: Research Philosophies (Bhattacherjee, 2012; Saunders et al., 2009)

Ontology (concerned with what is being and the nature of reality):
Positivism: the researcher is independent of the research process.
Realism: knowledge exists independent of human beliefs (direct realist) or is interpreted through social conditioning (critical realist).
Interpretivism: the nature of reality exists in socially constructed thoughts or beliefs.
Pragmatism: multiple/flexible views in line with the research question.

Epistemology (concerned with the methodological way of knowing, or what counts as acceptable knowledge):
Positivism: focuses on law-like generalization.
Realism: what we see or sense is credible fact (direct realist), or is described within the research context based on observable data (critical realist).
Interpretivism: acceptable knowledge is subject to socially constructed meanings.
Pragmatism: acceptable knowledge depends on observable data and/or subjective, socially constructed meanings, according to the nature of the research question; it is more concerned with what is practical.

Axiology (concerned with the study of values in research):
Positivism: research is not value laden; it is conducted in a value-free way.
Realism: research is value laden; the researcher is subject to several biases that may negatively impact the research findings.
Interpretivism: research is value bound; the researcher cannot be detached from what is being studied.
Pragmatism: whether an objective or subjective stance is taken, values have great importance when interpreting findings.

Data collection procedures usually used:
Positivism: a structured research approach to collecting data.
Realism: the data collection methods depend on the type of research.
Interpretivism: in-depth study employing small samples, usually qualitative, to provide a rich description of the phenomena studied.
Pragmatism: usually uses mixed methods.

Among these, positivism and interpretivism are the two most popular research paradigms in information systems research (Chen & Hirschheim, 2004). Positivist researchers are generally guided by constructs, hypotheses, or a model, mainly focusing on causality and law-like generalizations, or on developing or testing a model (Hevner et al., 2004). Employing Eisenhardt's (1989) theory development process, this study attempts to develop an e-learning systems success model based on empirical data assisted by seed constructs from the extant literature. Accordingly, the study fits the positivist research paradigm (Eisenhardt & Graebner, 2007; Eisenhardt, 1989).

3.2.2 Research Approach

The research paradigm informs whether the research approach, or logic, is deductive or inductive. According to Bhattacherjee (2012) and Neuman (2005), in a deductive approach the researcher is guided by a pre-defined model and develops and tests theory and hypotheses, whereas an inductive approach aims to develop theory from empirical data. This study aims to develop and test a model assisted by seed constructs extracted from the updated ISS model; this part of the research process (moving from prior constructs to data) is deductive. On the other hand, the researcher identified new constructs from the study's qualitative empirical data, which is inductive. This study therefore combines both deductive and inductive research approaches. Although rigid distinctions are often drawn between deduction and induction, combining these approaches within a study enhances the quality of the research process (Imenda, 2014).

3.2.3 Research Strategies

A research strategy is a guide to collecting data, or to investigating the research problem systematically, in such a way that the research question and objective of the study are answered (Yin, 2011). The most common research strategies are experiment, grounded theory, ethnography, archival research, action research, case study, and survey (Saunders et al., 2009). No research strategy is inherently better than another; the choice among them depends on the type of research, the research questions, and the objective of the study (Andrade, 2009; Yin, 2003). Accordingly, the following sub-sections evaluate each of these research strategies in light of the study's objective.

3.2.3.1 Experiment

An experiment involves measuring the effects of one variable on another by introducing planned interventions and controlling all other variables (Cooper & Schindler, 2006). Experiments are usually guided by research questions that answer "why?" questions (Neuman, 2005). Since this study neither introduces planned interventions nor involves control variables, the experiment strategy was not selected.

3.2.3.2 Grounded Theory

Grounded theory starts data collection without any prior constructs, hypotheses, or initial theoretical framework, that is, in situations where there are no existing studies, theories, or models in a given research domain to assist the theory development process (Corbin & Strauss, 2007; Suddaby, 2006); theory is grounded through successive iterations over the data (Collis & Hussey, 2003; Charmaz, 1996). Since this study started its data collection assisted by seed constructs from the extant literature, this research strategy was not appropriate.

3.2.3.3 Ethnography

Ethnography is another research strategy, one that focuses on studying culture or the social world (Zikmund, 2000). Neither before nor after developing the proposed model did this study arrive at any constructs related to culture as a factor determining e-learning systems success; accordingly, this strategy was not appropriate for this study.

3.2.3.4. Archival Research

Archival research as a research strategy focuses on administrative records and documents as the prime source of data (Yin, 2011). Hakim (2000) argues that the archival strategy is better suited to secondary data analysis than to serving as the main guide for collecting data on contemporary issues. Since this study primarily used qualitative and quantitative primary data collection methods to develop an e-learning systems success model, this strategy was not selected.

57| P a g e
3.2.3.5. Action Research

Action research differs from other research strategies in that it promotes change within the organization under study, focusing on action (Robson, 2002). This strategy is best suited to "how?" research questions (Saunders et al., 2009). Since this study does not aim to introduce change within the organizations under study, action research was eliminated as a choice of research strategy.

3.2.3.6. Case Study

Case study is a research strategy that is most suitable when the study aims to gain a deep understanding of the research context (Morris, 2003; Morris and Wood, 1991). Yin (2003) also emphasizes the paramount importance of an in-depth understanding of the research context, since within a case study there is no clear boundary between the phenomenon under investigation and its context. Case study is appropriate for this study because it is most often used in exploratory (i.e., where existing knowledge is limited) and explanatory research with the aim of theory development (i.e., to provide evidence for hypothesis generation) or theory testing (Eisenhardt & Graebner, 2007; Yin, 2003; Shanks, 2002; Eisenhardt, 1989); it triangulates multiple sources of data such as interviews, observation, documentary analysis, and questionnaires (Yin, 2003); and it investigates a given context in depth and can combine qualitative and quantitative methods within a study, enhancing the quality of the research process (Andrade, 2009; Creswell, 2003). Since the objective of this study involves an in-depth understanding of the EHEI context to identify e-learning systems success factors and later develop an appropriate model, case study was selected as the study's research strategy. Wagner et al. (2008) also recommend the suitability of case study research for this type of study. A brief discussion of how the case study is used to meet the study's objective is presented in Section 3.3.

3.2.3.7. Survey

Survey, as another research strategy, is primarily used to generalize to a given population based on a representative sample of that population (Bhattacherjee, 2012). Moreover, the data collected with this strategy can be analyzed using descriptive and inferential statistics in order to explain cause-and-effect relationships (Williamson, 2000). However, the data collected through a survey are unlikely to cover as wide a range as those collected through other research strategies (Creswell, 2003). The survey strategy is most appropriate for answering 'who', 'why', 'what', 'how much', and 'how many' questions (Yin, 2003). According to Yin (2003), Saunders et al. (2009), and Gable (1994), it is common to use a survey as part of a case study. To this end, this study used the case study strategy to develop the study model and the survey strategy to test the causal relationships among the constructs of the model and its hypotheses.

3.2.4 Research Choices

Prior to data collection and analysis, a researcher needs to choose among qualitative, quantitative, and mixed research methods. A qualitative research method relies on qualitative measures using non-numerical data (Yin, 2011; Strauss & Corbin, 2008), whereas a quantitative research method relies on quantitative measures using numerical data (Strangor, 2011). Qualitative methods are most recommended for gaining an in-depth understanding of specific phenomena, generating rich description, developing theory, and trading generalization for detail (Imenda, 2014; Yin, 2009); quantitative methods are most suitable for testing theory and generalizing findings (Muijs, 2004; Yin, 2003). To this end, there is great value in mixing the two research methods (Myers, 2013; Muijs, 2004).

In this study, a qualitative research method adopting multiple data collection methods was employed to identify e-learning systems success factors and develop the proposed model; a quantitative research method was then employed to test the model and its constructs.

3.2.5 Time Horizons

With respect to the time horizon, research can be cross-sectional or longitudinal. Cross-sectional studies examine a particular phenomenon at a particular point in time, while longitudinal studies examine a specific phenomenon or person over an extended period, in which case respondent attrition is usually high, requiring a well-structured setting (Sekaran, 2006).

This study adopted a cross-sectional design because conducting the different data collection methods at a particular point in time was both feasible and suitable for addressing the research questions. The first phase of the study used the qualitative research method. It was conducted between June 2016 and January 2018, employing multiple data collection sources: focus group discussions (students' focus groups, instructors' focus groups and e-learning experts' focus groups); in-depth personal interviews with ICT directors; observation; and document reviews/archival evidence in nine purposefully selected Ethiopian higher education institutions, in order to deeply understand and identify factors that determine e-learning systems success and to develop a model. The quantitative data collection process (the use of questionnaires to collect data from samples of students and instructors), which constitutes the second phase of the study and was employed to test the model, was undertaken between February 2018 and April 2018.

3.3. Model Exploration and Extension Methods

Qualitative researchers develop a model based on either empirical data or existing theory (Yin, 2011). In line with the objective of this study, Pan & Tan (2011), Baxter & Jack (2008), Walsham (2006), Shanks (2002), Eisenhardt (1989), Lee et al. (1999), and Darke et al. (1998) provide a way forward for building theory from extant literature using case study research by emphasizing the following benefits:

1. Prior theories help the researcher decide who should be involved in the research.
2. Relevant constructs can be categorized as 'scaffolds' before entering the field, giving the researcher a strong grounding in the literature.
3. Prior theories help the researcher readily understand the meaning of each piece of data being collected.

The study considered several model building techniques capable of answering the research questions. Accordingly, the model building techniques proposed by Shanks (2002), Eisenhardt (1989), Van Maanen (1988), Strauss (1987), Kirk & Miller (1986), Miles & Huberman (1984), Miles (1979), and Glaser & Strauss (1967) were taken into account as potential candidates for the study's model development process. However, except for Eisenhardt (1989), these techniques do not provide a road map for theory building from cases; they focus instead on qualitative data analysis and construct building techniques as a way toward theory building. Furthermore, unlike the other theory development techniques, Eisenhardt's (1989) theory building technique is an ideal candidate for this type of study for the following main reasons:

o It is most suited to situations where there is limited knowledge about a phenomenon or where conflict exists among existing studies. With respect to this study, it cannot be claimed that there are no studies on e-learning systems success; there are ample studies, but their findings conflict due to the context-dependent nature of e-learning systems success, which makes this theory development technique appropriate.
o It is most suitable in the early stages of research work, or for affording freshness to an already researched topic. The theory building process involves specifying seed constructs (whenever possible); specifying the population, including thresholds for selecting the number of cases; flexible instrumentation for studies where accessing adequate data might be difficult; cross-case analysis tactics; and substantiating findings with literature. The application of e-learning systems in EHEIs is in its infancy, which makes accessing adequate and relevant data difficult and calls for flexible data collection instrumentation and cross-case analysis tactics.
o It is most appropriate for developing testable hypotheses and theory that can be generalized across settings. This feature makes it an ideal candidate for this study, as the basic research objective is to identify determinants of e-learning systems success and place them in a model that can be generalized to EHEIs and other HEIs exhibiting similar features.
In contrast, other theory development techniques, such as the grounded theory approaches of Van Maanen (1988), Strauss (1987), and Glaser & Strauss (1967), are more concerned with rich, complex description of specific cases and appear less concerned with the development of generalizable theory.

Although the study predominantly adopted Eisenhardt's theory development process, it also adopted relevant concepts from other sources: theoretical sampling, pattern matching, case study design, replication logic and a framework for judging the quality of case study research from Yin (2011; 2009; 2003); theoretical saturation from Glaser and Strauss (1967); case study qualitative data analysis techniques from Yin (2011; 2009); and the open coding process, qualitative data analysis techniques and reporting from Miles & Huberman (1994; 1984).

Accordingly, Table 7 provides a general framework of the study's model development process, mainly adopted from Eisenhardt (1989), and the subsequent sub-sections detail each step of the model development process.

Table 7: Framework for the Study's Model Building Process (adapted from Eisenhardt, 1989)

Step | Activity | Reason
Getting Started | Definition of research question | Focuses efforts
Getting Started | Possibly a priori constructs | Provides better grounding of construct measures and gives the researcher the capability to understand each bit of data being collected
Selecting Cases | Specified population | Constrains extraneous variation and sharpens external validity
Selecting Cases | Theoretical, not random, sampling | Focuses efforts on theoretically useful cases, i.e., those that replicate or extend theory by filling conceptual categories
Crafting Instruments and Protocols | Multiple data collection methods | Strengthens grounding of theory by triangulation of evidence
Entering the Field | Overlap data collection and analysis, including field notes | Speeds analyses and reveals helpful adjustments to data collection
Entering the Field | Flexible and opportunistic data collection methods | Allows investigators to take advantage of emergent themes and unique case features
Analyzing Data | Within-case analysis | Gains familiarity with data and preliminary theory generation
Analyzing Data | Cross-case pattern search using divergent techniques | Forces investigators to look beyond initial impressions and see evidence through multiple lenses
Shaping Hypotheses | Iterative tabulation of evidence for each construct | Sharpens construct definition, validity, and measurability
Shaping Hypotheses | Replication, not sampling, logic across cases | Confirms, extends, and sharpens construct theory
Shaping Hypotheses | Search evidence for "why" behind relationships | Builds internal validity
Enfolding Literature | Comparison with conflicting literature | Builds internal validity, raises theoretical level, and sharpens construct definitions
Enfolding Literature | Comparison with similar literature | Sharpens generalizability, improves construct definition, and raises theoretical level
Reaching Closure | Theoretical saturation when possible | Ends process when marginal improvement becomes small
Model Testing | Validation of measurement and structural components of the model employing Structural Equation Modeling | Further proves the predictive power of the model and its parsimonious feature

3.3.1 Getting Started

Eisenhardt (1989) recommends that the theory building process begin with an initial definition of the research questions and problems of the study, guided by important variables or prior constructs specified from extant literature. Accordingly, this study identified six seed constructs from the updated ISS model (DeLone & McLean, 2003).

3.3.2 Selecting Cases

A case can be a 'decision', 'individual', 'organization', 'process', 'program', 'neighborhood', 'institution' or even an 'event', studied at a specific point in time or longitudinally over an extended period (Yin, 2003). Unlike in survey methods, there is no standard or agreed number of cases in case study research (Yin, 2003; Darke et al., 1998). However, Eisenhardt (1989) recommends selecting 4 to 10 cases for building theory. With fewer than four cases it may be difficult to develop a theory, unless each case contains many embedded mini-cases; on the other hand, selecting more than 10 cases may yield voluminous data that is highly complex to manage while developing a clear-cut theory. Furthermore, the selection of cases and of research participants within the case study is based on theoretical sampling, not random sampling. Sampling of cases from the chosen population in case studies differs from sampling in survey-type studies: cases are drawn from the chosen population for theoretical generalization (i.e. cases/samples are purposefully selected to confirm or disconfirm a given theoretical framework or proposition), not statistical generalization (Eisenhardt & Graebner, 2007; Suddaby, 2006; Yin, 2003). Eisenhardt (1989) described it as follows: 'while the cases may be chosen randomly, random selection is neither necessary nor even preferable' (p. 537). Theoretical sampling is defined as the purposeful selection of cases in which the researcher identifies cases/participants based on the rich information they have in relation to the objective of the study (Suddaby, 2006; Yin, 2003). Moreover, the principle of theoretical saturation is adopted when deciding the number of cases (Yin, 2003; Morgan, 1997; Glaser & Strauss, 1967). The researcher took this advice in selecting an appropriate number of cases and research participants in relation to the objective of the study.

Accordingly, the researcher purposefully selected eight public universities and one private college that have adopted e-learning systems to support the teaching and learning process. These higher education institutions were selected in line with the objective of the study and the principle of theoretical saturation. The basic inclusion criteria for selecting the institutions follow the study's operational definition of an e-learning system (discussed in Chapter 2): the selected institution must have a Learning Management System (LMS) through which e-learning is applied in the teaching and learning process, and an operational e-learning unit responsible for institution-wide e-learning implementation. These criteria take into account Khan's (2005) advice for the evaluation of e-learning implementation. Table 8 lists the selected case institutions and the type of Learning Management System (LMS) they use. The following sub-sections briefly describe the case institutions.

Table 8: Selected Cases / Ethiopian Higher Education Institutions (EHEIs) Implementing E-learning Systems

No. | Name of higher education institution | Type of ownership | Type of Learning Management System (LMS) used
1. | Adama Science and Technology University (ASTU) | Public (under Ministry of Science and Technology) | Moodle
2. | Addis Ababa University (AAU) | Public (under Ministry of Education) | Moodle
3. | Arba Minch University (AMU) | Public (under Ministry of Education) | Moodle
4. | Bahir Dar University (BDU) | Public (under Ministry of Education) | Moodle
5. | Ethiopian Civil Service University (ECSU) | Public (under Ministry of Civil Service) | Moodle
6. | Haramaya University (HRU) | Public (under Ministry of Education) | Moodle
7. | Jimma University (JU) | Public (under Ministry of Education) | Moodle
8. | Mekelle University (MU) | Public (under Ministry of Education) | Moodle
9. | PESC Information Systems College (PESC) | Private | Moodle

3.3.2.1 Addis Ababa University (AAU)

Addis Ababa University (AAU), established in 1950, is the oldest and largest university in Ethiopia. Beginning with an enrollment of 33 students in 1950, AAU now has an enrollment capacity of 48,673 students (33,940 undergraduate, 13,000 Master's and 1,733 PhD students) and 6,043 staff (2,408 academic and 3,635 support staff). Across its 14 campuses, the University runs 70 undergraduate and 293 graduate programs (72 PhD and 221 Master's), as well as various specializations in Health Sciences. Although not on a mandatory basis, the University uses Moodle as its Learning Management System (LMS). The University has made e-learning facilities such as computer labs available and has networked its different campuses. It has also attempted to make relevant e-learning technologies and facilities available by assigning IT and e-learning experts to support the teaching and learning process (http://www.aait.edu.et/ict-service/e-learning-facility and information obtained from the ICT Director of the University in September 2017).

3.3.2.2 Adama Science and Technology University (ASTU)

Adama Science and Technology University (ASTU) was first established in 1993 as Nazareth Technical College (NTC) in Adama City, 95 km from the capital. Following its renaming by the Council of Ministers as Adama Science and Technology University in May 2011, the University is striving to become a center of excellence in technology. The faculties at the main campus include the School of Business, School of Engineering and Information Technologies, School of Humanities and Law, School of Natural Sciences, and School of Educational Science and Technology Teachers Education; the two schools in Asella are the School of Agriculture and the School of Health and Hospital.

ASTU established an e-learning center to support teaching and learning through e-learning, providing e-learning infrastructure such as dedicated e-learning computer labs and an e-studio for recording video and audio lectures, developing e-learning content for 72 courses, and hiring permanent e-learning experts to oversee the implementation of e-learning across all the schools of the University. Furthermore, as one of the pioneers in introducing e-learning to Ethiopia, ASTU took the lead in providing a road map for other Ethiopian universities, such as Hawassa University and Mekelle University, that were in the planning phase of applying e-learning in their teaching and learning processes (www.astu.edu.et/; Ketema and Nirma, 2015; and information obtained from the manager of the e-learning center of the University in November 2017).

3.3.2.3 Arba Minch University (AMU)

Arba Minch University, situated at the foot of the Gamo Gofa mountain ranges facing the vast Abaya and Chamo Lakes in the city of Arba Minch, was established in September 1986 with an enrollment of 181 students in two degree, two advanced diploma and two diploma programs. Currently, the University consists of a water technology institute and six colleges spread across the University's six campuses: Main Campus, Abaya, Chamo, Kulfo, Nech Sar and Sawla. It has a total enrollment capacity of 32,977 students (30,767 undergraduate, 2,186 Master's and 24 PhD students) and a total of 4,867 staff (1,270 local faculty, 3,427 administrative staff and 170 expatriate staff). The University has attempted to support its entire teaching and learning process via an e-learning system. A responsible e-learning office with e-learning staff is in place under the ICT directorate of the University. The University uses Moodle as its Learning Management System (www.amu.edu.et and information obtained from the ICT director of the University in August 2017).

3.3.2.4 Bahir Dar University (BDU)

Bahir Dar University was established on May 6, 2000 by merging two former higher education institutions: Bahir Dar Polytechnic, established in 1963 under the technical cooperation between the Government of the USSR and the Imperial Government of Ethiopia, and Bahir Dar Teachers' College, established in 1972 by a tripartite agreement of the Imperial Government of Ethiopia, UNESCO and UNDP. The University is now among the largest universities in the country, with an enrollment capacity of 52,830 students in 219 academic programs (69 undergraduate, 118 Master's, and 32 PhD programs). Bahir Dar University has five colleges, four institutes, two faculties and one school. The academic units of the University include the College of Science, College of Agriculture and Environmental Sciences, College of Medical and Health Sciences, College of Business and Economics, College of Education and Behavioral Sciences, Bahir Dar Institute of Technology, Ethiopian Institute of Textile and Fashion Technology, Institute of Land Administration, Institute of Disaster Risk Management and Food Security Studies, Faculty of Humanities, Faculty of Social Sciences, School of Law, Sport Academy and Maritime Academy. The research centers of the University are the Blue Nile Water Institute; Biotechnology Research Institute; Pedagogy and Education Research Institute; Energy Research Institute; Textile, Garment and Fashion Design; Abay Culture and Development Research Center; Geospatial Data and Technology Center; and the Institute of Economics Research and Demographic Surveillance. The University implements e-learning to support the teaching and learning process, investing substantial funds and assigning e-learning experts and personnel to run the e-learning system (see Figure 16 for a sample online exam session). Among others, the University stipulates the duties and responsibilities of the e-learning staff as: customizing, configuring, maintaining and administering the LMS (Moodle) software in all the teaching and learning process; setting up and configuring the digital library resources and providing training and support for end users; configuring the LMS servers; uploading the curriculum; and providing training for instructors and students enrolled in e-learning programs (http://bdu.edu.et/ict/ and information obtained from the head of the e-learning unit in December 2017).

Fig.16 BDU Students Taking Online Exams via the E-learning System

3.3.2.5 Ethiopian Civil Service University (ECSU)


Ethiopian Civil Service University (ECSU) was established in 1995 to enhance the capacity of Ethiopian civil servants within the context of the country's development policies and strategies. More specifically, the University aims to build the capacity of the civil service at both federal and regional levels. The University has set up e-learning infrastructure to support the teaching and learning process (see Fig. 17 for a sample view of the University's e-learning portal). In 2005, the University launched e-learning in two common modules: Public Service Delivery, Ethics and Change Management, developed and hosted by the Institute of Public Development and Management Studies; and Ethiopian Public Administration and Governance, developed and hosted by the Institute of Leadership and Good Governance. These two common modules were delivered to 16 Master's programmes (more than 500 students) of the University. The modules cover common subjects such as ethics, professionalism, corruption, policies, human rights, the history of public service, Ethiopian administration, public management, law and federalism, which are considered essential for all civil servants pursuing a Master's degree. By applying the e-learning system in the teaching and learning process, class time was reduced by 50%. The University established an e-learning center and assigned e-learning staff to run the e-learning system in its teaching and learning process. The center is responsible for developing and implementing e-learning technologies and also supports the University in developing e-learning materials in collaboration with all the concerned academic and administrative staff. Reducing class time by adopting e-learning was very important for ECSU, as the modules are delivered to the new students of the 16 Master's programmes (www.ecsu.edu.et/article/ecsu-introduces-e-learning-masters-programmes and information obtained from the manager of the e-learning unit in July 2017).

Fig. 17 ECSU Online Campus View

3.3.2.6 Haramaya University (HRU)

Haramaya University was established in 2006. The University encompasses nine colleges, one institute and one directorate, namely the College of Agriculture and Environmental Science (CAES), College of Business and Economics (CBE), College of Computing and Informatics (CCI), College of Medical and Health Sciences (CMHS), College of Social Sciences and Humanities (CSSH), College of Law (COL), College of Veterinary Medicine (CVM), College of Natural and Computational Sciences (CNCS), College of Education and Behavioral Sciences (CEBS), Directorate of Continuing and Distance Education (CCDE) and the Haramaya Institute of Technology (HIT). Under these divisions, the University offers a total of 224 programs, of which 106 are undergraduate programs, 104 are second-degree programs (M.Sc./M.Ed./MPH) and 14 are PhD-level training programs. In addition to academic programs, the University has launched institutes that cater for research and outreach programs, and it has been actively involved in research, primarily in the field of agriculture, since its inception. The main campus is situated in Haramaya town, about 510 km east of Addis Ababa, between the towns of Dire Dawa and Harar. The University has invested considerably in implementing e-learning to support the teaching and learning process. An e-learning unit coordinates all the e-learning functions for its implementation, and the University has e-learning infrastructure in place to make e-learning operational across its teaching and learning process (www.hu.edu.et and information obtained from the head of the e-learning unit in October 2017).

3.3.2.7 Jimma University (JU)

Jimma University, located in Jimma town, was established in the 1980s by merging the Jimma Institute of Health Sciences and the Jimma College of Agriculture. The University was ranked the top public institution in the nation for four consecutive years by the Ethiopian Ministry of Education. It comprises various colleges, including Engineering, Natural Sciences, Social Sciences and Humanities, Law and Governance, Business and Economics, Education and Behavioral Sciences, Health Sciences, and Agriculture. Considerable investment has been made to implement e-learning and support the teaching and learning process in all of the colleges. The Information Communication Technology and E-learning coordinating office under the College of Education and Behavioral Sciences was established in 2014 to: support the use of ICT equipment and services that enable outstanding teaching, learning and college administration; maintain technology hardware (desktop PCs, laptop/tablet PCs, interactive whiteboards, projectors and associated peripherals) and install and maintain software; ensure efficient use of computer labs by students and keep the labs fully equipped and functional; arrange awareness-raising orientations for college students to use ICT resources mainly for academic purposes; and provide technology integration training for teachers and train instructors and college students on how to work with the Moodle Learning Management System (in collaboration with the University's E-Learning coordinating office and the ICT director's office). Starting in March 2018, the University entered into an agreement with Lucy Academy Services P.L.C. to run purely online undergraduate and postgraduate programs, making it the first university in the country to offer purely online programs at the higher education level (www.ju.edu.et and information obtained from the ICT director of the University in March 2018).

3.3.2.8 Mekelle University (MU)

Mekelle University is located in the town of Mekelle in the Tigray region of northern Ethiopia, 783 kilometers from the capital. The University was established in May 2000 by the Government of Ethiopia (Council of Ministers, Regulations No. 61/1999, Article 3) as an autonomous higher education institution through the merger of two former colleges, Mekelle Business College and Mekelle University College. Both colleges grew from scratch, each with its own history, and experienced numerous relocations from place to place. At present, MU has an enrollment capacity of over 31,000 students in regular, continuing education, summer, evening, distance and in-service programmes at both undergraduate and graduate levels. Mekelle University is now a government-funded higher education institution with an international reputation for teaching and research and with collaborative agreements with national and international sister institutions. Since its establishment, it has proved to be one of the fastest growing universities in Ethiopia. The University has put e-learning infrastructure in place to implement the e-learning system in its teaching and learning process, and an e-learning office has been established to coordinate the integration of the e-learning system, the IT infrastructure and end users, in line with the University's objective of becoming a model technology hub in the nation (http://www.mu.edu.et and information obtained from the head of the e-learning unit in October 2017).

3.3.2.9 PESC Information Systems College

PESC Information Systems College is the first privately owned e-learning based college in Ethiopia. It was established in September 2004 and comprises postgraduate and undergraduate schools, with about 2,018 students in undergraduate and postgraduate programs (information received from the college's dean in August 2017). The distance wing of the college has 15 regional campuses across the nation, and the college, through its e-learning unit, supports the teaching and learning process in these fifteen outstation campuses, which are equipped with e-learning facilities and computer labs. The college has an e-learning studio in which instructors' lectures are recorded on video (using Camtasia) and then uploaded to the Learning Management System (Moodle). With their login credentials, students can access the video lectures, online assignments, online exams, and various educational resources and administrative announcements via the college's e-learning portal (www.pesccollege.com; information obtained from the head of the e-learning unit in September 2017).

3.3.3 Sampling Method

Theoretical sampling was used as the study's sampling method in both the qualitative and quantitative research phases. Theoretical sampling is one of the most common sampling techniques in case study research (Yin, 2003), in which samples or cases are selected for theoretical, not statistical, reasons, and generalization is therefore made to theories or constructs (Eisenhardt & Graebner, 2007; Yin, 2003; Eisenhardt, 1989). The goal of theoretical sampling is to select informants who have richer information than others owing to factors such as their status, role, and past or present exposure to the research under investigation (Yin, 2003; Eisenhardt, 1989). The study applied this concept of theoretical sampling in selecting its participants, which is also in line with Wagner et al. (2008).

3.3.4 Crafting Instruments & Protocols, and Entering the Field

One of the qualities of case study research is that it involves multiple data collection methods to triangulate findings. Triangulation is defined as the use of two or more independent sources of data collection methods to complement research findings within a study (Myers, 2013; Yin, 2011). The use of triangulation in case study research provides stronger substantiation of constructs and hypotheses (Eisenhardt & Graebner, 2007; Yin, 2003). Eisenhardt (1989) emphasized the importance of multiple data collection methods as follows:

Theory-building researchers typically combine multiple data collection methods. While interviews, observations, and archival sources are particularly common, inductive researchers are not confined to these choices... Of special note is the combining of qualitative with quantitative evidence. Qualitative data provides rich description about the research context and quantitative evidence can indicate relationships which may not be salient to the researcher. The combination of these data types can be highly synergistic. (p. 538)

Furthermore, Mintzberg (1979) stressed that while researchers uncover all kinds of relationships in hard (quantitative) data, it is only through soft (qualitative) data that they are able to explain them; qualitative description thus provides the foundation for theory building, and the two kinds of data together bridge the gap.

Accordingly, the study used in-depth personal interviews, multiple focus group discussions, document reviews and the researcher's personal observations as qualitative data collection instruments to meet the first research objective, 'To identify factors that determine e-learning systems success in the Ethiopian higher education context'. To achieve the second research objective, 'To place the identified factors that determine e-learning systems success in a model that may help Ethiopian higher education institutions evaluate their e-learning systems success', questionnaires were used as the data collection instrument for the quantitative research method. Each of these data collection instruments is discussed below.

a. Interview

An interview is a discussion between two or more people that provides data pertinent to a given research question, and it is one of the most common techniques in case study research (Myers, 2013; Yin, 2011).

In-depth personal interviews were conducted with each of the nine ICT directors of the institutions under study. ICT directors were purposefully selected because they are considered key e-learning stakeholders from whom relevant data for the study could be obtained (Khan, 2005). Open-ended interview questions derived from the seed constructs were used to collect data in depth (see Appendix 1 for the interview checklist). A digital voice recorder was used to record the interviews with the ICT directors; Yin (2011) recommends that interviews aided by a voice recorder help the researcher focus on the main theme of the research. Following this advice, the researcher took only brief notes while all the interviews were recorded. Each interview was conducted at the informant's respective research setting (university), and before proceeding to the next ICT director/informant, the interview audio recordings and notes were transcribed and analyzed in order to determine the saturation point (Yin, 2011; Strauss & Corbin, 2008). Simply put, saturation is reached when no new ideas emerge and coding more transcripts would only produce repetition of themes, i.e. when the marginal improvement becomes small (Morgan, 1997). According to Yin (2003), Eisenhardt (1989) and Glaser and Strauss (1967), there is no agreed number of cases or respondents at which the saturation point is reached; since the point differs depending on the type of research and the research question, the researcher needs to judge it as the point at which asking additional informants yields no new data. The researcher took this advice and also adopted the technique of verifying that each data instance was confirmed by at least two data sources to determine the point of saturation, in line with the advice of Pan & Tan (2011) and Miles & Huberman (1984).

b. Focus group discussions

Focus group discussions are used to generate knowledge from group interactions, with each group comprising homogeneous members (Morgan & Bottorff, 2010). The method focuses on group dynamics, providing researchers with elaborated perspectives on the topic under discussion (Kidd & Parshal, 2000), and has been suggested as a suitable method for explorative studies (Wilkinson, 2004; Morgan, 1997). Unlike in individual interviews, group members are more likely to challenge each other's views, argue for or change their own views, and bring forward issues that are important to them, thus helping the researcher capture shared meanings (Bryman and Bell, 2003). In effect, the method reflects the process through which meaning is constructed in everyday life and is deemed especially informative for the purposes of model development (Bryman and Bell, 2003).

Based on the recommendations of Wilkinson (2004) and Morgan (1997), the researcher formed three types of focus groups (students, instructors, and e-learning/ICT experts), each composed of homogeneous members to promote smooth dialogue. Accordingly, student focus groups, instructor focus groups and e-learning expert focus groups were formed based on the theoretical sampling method. These groups represent the key e-learning stakeholders in the implementation of e-learning in their respective institutions (Wagner et al., 2008; Khan, 2005; Thompson and Strickland, 2001). However, one of the great challenges in accessing the target respondents and forming homogeneous groups was the country's unstable political situation during the study's data collection period (February 2018 - April 2018). Many of the universities lacked internet access, which hindered the application of e-learning systems in their teaching and learning processes; accessing actual e-learning users was therefore a major challenge of the study. To cope with this challenge, the researcher set inclusion criteria for selecting respondents with the help of the e-learning experts and ICT directors of the institutions under study. Study subjects were thus purposefully selected on the following basis: a student respondent (irrespective of study discipline, year of study, or undergraduate/graduate level) needed to have been exposed to at least one e-learning course (either supporting the main teaching and learning process or purely online) during his or her study time at the university; such respondents are treated as homogeneous groups under the label 'students' focus group'. Eom (2011), Lin (2007) and Khan (2005) used similar inclusion criteria. The same selection logic was adopted for the instructors' focus groups: instructors were purposefully selected on the criterion that the instructor had run at least one course using the e-learning system in a supportive and/or purely online modality. With regard to the formation of the e-learning experts' focus groups, since most of the IT and e-learning experts in their respective institutions perform similar duties, serving as e-learning technical support teams, the researcher treated e-learning experts and IT personnel as homogeneous members rather than categorizing them into separate focus group discussions. One critique of focus group discussions is that one or two people may dominate the discussion, overwhelming other members (Morgan, 1997). To this end, the researcher used Torrington's (1991) techniques of encouraging passive members to participate actively and intentionally restraining the contributions of dominant members in order to maintain equal contributions among all members of the focus groups. By adopting this technique, the researcher did not notice any group variability within the different types of focus group discussions. Table 9 presents the overall qualitative data sources (Appendix 5 presents details on the numbers of focus group discussion and in-depth personal interview participants).

Open-ended questions were used to probe in-depth data from the focus group discussions and to motivate the focus group members to express their views freely, so that the emergent constructs would be strongly grounded (see Appendices 2a-2c for the focus group discussion questions). When participants in a group tended to discuss matters outside the intended topic of the study, the researcher re-directed them to the focus group discussion protocols; beyond this role, the researcher served only as facilitator (moderator), in line with the recommendation of Morgan (1997). A digital voice recorder was used to capture data from all focus group discussions. A typical focus group consisted of four to eight homogeneous members, and each discussion session was kept to no more than 90 minutes, as a session exceeding 90 minutes risks losing the attention of the participants (Morgan & Bottorff, 2010; Morgan, 1997). Although the researcher followed this advice, some focus group participants did not come on time and others were absent, despite persistent reminders to participate in the scheduled sessions. The researcher did not allow latecomers to join in the middle of a focus group discussion, as latecomers may disrupt the smooth flow of a discussion already in progress.

All the students' focus group discussions were held in classrooms, after permission to use them was granted by the respective universities. For the instructors' focus groups, free staff rooms were used; when the staff rooms were occupied or inconvenient for administering the discussions, restaurants and mini-conference rooms were used instead. For the e-learning experts' focus groups, since the experts were few in number, their offices and computer labs were used, with the seating layout kept suitable for focus group discussions.

The principle of theoretical saturation was applied while running the focus group discussions: the saturation point was deemed reached when conducting further discussions yielded no new themes. Accordingly, to reach the saturation point, nineteen focus group discussions with students, twenty with instructors and eighteen with e-learning experts, involving a total of 264 key informants, together with nine in-depth personal interviews with the nine ICT directors across the nine institutions, constituted the qualitative phase of the study (see Table 9, Table 12 and Appendix 5).
Table 9: Qualitative Data Sources

EHEI | In-depth interviews with ICT directors | Observation | Archival records | Student focus groups | Instructor focus groups | E-learning expert focus groups | Students invited | Students participated | Instructors invited | Instructors participated | E-learning experts invited | E-learning experts participated
AMU | 1 | √ | √ | 2 | 3 | 3 | 15 | 11 | 15 | 12 | 13 | 10
ASTU | 1 | √ | √ | 4 | 3 | 2 | 25 | 22 | 26 | 17 | 12 | 7
ECSU | 1 | √ | √ | 4 | 3 | 2 | 24 | 20 | 20 | 10 | 9 | 8
MU | 1 | √ | √ | 2 | 3 | 2 | 12 | 9 | 21 | 12 | 12 | 7
HRU | 1 | √ | √ | 3 | 3 | 2 | 26 | 18 | 23 | 13 | 12 | 7
PESC | 1 | √ | √ | 4 | 3 | 3 | 34 | 26 | 29 | 18 | 20 | 14
AAU | 1 | √ | √ | - | 1 | 2 | - | - | 9 | 5 | 10 | 8
JU | 1 | √ | √ | - | 1 | 1 | - | - | 8 | 3 | 6 | 3
BDU | 1 | √ | √ | - | - | 1 | - | - | - | - | 5 | 4
Total | 9 | | | 19 | 20 | 18 | 136 | 106 | 151 | 90 | 99 | 68

c. Archival Evidence
Archival evidence consists of secondary data sources, such as documents, that provide useful input for answering the research questions (Creswell, 2007). These types of data are used as part of a case study research strategy to triangulate findings together with other types of data sources (Yin, 2011). One challenge of the study with regard to archival evidence, or formal documents, was that the e-learning units in the institutions under study do not maintain well-organized documentation. Despite this, the researcher attempted to collect relevant documents in line with the study's research questions. The document collection and review process was undertaken in parallel with the in-depth personal interviews and focus group discussions (see Appendix 3 for the document checklist).

d. Observation

As one form of qualitative data collection instrument, observation involves the researcher watching what happens, noting the feelings and expressions of the respondents and the available infrastructure, which adds richness to the data and complements other forms of data collection (Bhattacherjee, 2012). Guided by an observation checklist (see Appendix 4), the researcher kept diary notes recording what was observed during the overall data collection process, such as the kind of Learning Management System (LMS) used by each institution, students' and instructors' degree of access to computers, internet access, and the operational status of the e-learning unit. This observation guide is in line with the recommendation of Khan (2005).

3.3.5 Data Analysis Technique

Unlike in quantitative research, there is no general consensus on a standardized qualitative data analysis technique (Strauss and Corbin, 2008; Miles and Huberman, 1984). Case study researchers usually face challenges in disclosing their data analysis procedures and tools, because with voluminous data it is difficult to document every step of how each piece of data was handled on the way to the conclusions (Yin, 2011; Yin, 2003; Miles and Huberman, 1984). To this end, Strauss & Corbin (2008) and Miles & Huberman (1984) stress that qualitative data analysis is a creative process, in most cases an art, requiring skill in performing the analysis that leads to the conclusion.

Data collected through in-depth personal interviews, focus group discussions, observation, and archival documents were transcribed and translated into English. A qualitative data analysis tool, NVivo software (Pro version 11), was used to assist the analysis process. Open coding was used as the coding scheme to categorize similar texts, sentences and paragraphs based on concepts, in line with the recommendations of Strauss & Corbin (2008) and Miles & Huberman (1984). To simplify coding and analysis, the participants of the focus group discussions and in-depth personal interviews, as well as the relevant documents, were assigned unique labels, making it possible to maintain a chain of evidence tracing where each text, sentence and paragraph was extracted from (see Appendix 6a and Appendix 6b for the labels assigned to the focus group participants, and to the in-depth personal interview participants and archival documents, respectively). Assigning labels to represent the study participants also protects their confidentiality and privacy (Yin, 2009); in effect, the study attempted to maintain ethical standards.

Cross-case analysis employing pattern matching was also used as a data analysis technique while developing the proposed model (Yin, 2003). A pattern is a notion that appears recurrently across all cases and multiple data sources; pattern matching is a technique in which patterns that emerge from the empirical data are compared against prior/seed constructs, extant literature, or dimensions and constructs assigned by the researcher on the basis of the empirical data (Strauss & Corbin, 2008; Yin, 2003; Eisenhardt, 1989; Miles & Huberman, 1984). Eisenhardt (1989) described this data analysis technique as follows:

'When a pattern from one data source is corroborated by the evidence from another, the finding is stronger and better grounded... The idea behind these cross-case searching tactics is to force investigators to go beyond initial impressions, especially through the use of structured and diverse lenses on the data. Dimensions or constructs can be suggested by the research problem or by existing literature or the researcher can simply choose some dimensions. These tactics improve the likelihood of accurate and reliable theory, that is, a theory with close fit with the empirical data.' (p. 541)

The study's seed constructs were used as the initial basis for categorizing and coding the empirical data, and new constructs that emerged from the data were defined based on the empirical data and related literature. Texts revealing similar concepts and patterns were categorized: the researcher re-read the transcribed words and sentences several times and assigned them to lower-level categories, or child nodes. This data reduction process was then used to generate higher-level categories, or theoretical constructs, representing their respective lower-level categories. This coding technique was also adopted by Tajul (2014) and Assefa (2014). In qualitative research, data collection and analysis is usually an iterative process that continues until no new codes emerge and coding more transcripts would only generate repetition of themes (Yin, 2011; Strauss & Corbin, 2008; Morgan, 1997; Eisenhardt, 1989; Glaser and Strauss, 1967). Accordingly, the researcher iterated in this way until no new codes emerged.
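As an illustration of this stopping rule, the following minimal sketch (Python, with invented transcript contents) processes transcripts one at a time and stops once an additional transcript contributes no previously unseen codes:

# Each transcript is summarized here as the set of codes found in it.
transcripts = [
    {"system availability", "response time"},
    {"response time", "instructor support"},
    {"instructor support"},  # contributes nothing new -> saturation
]

seen_codes = set()
for number, codes in enumerate(transcripts, start=1):
    new_codes = codes - seen_codes   # codes not encountered before
    seen_codes |= codes
    print("Transcript %d: %d new code(s)" % (number, len(new_codes)))
    if not new_codes:
        print("No new codes emerged; saturation point reached.")
        break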

3.3.6 Shaping Hypotheses

From the cross-case analysis and overall impressions, themes (constructs) and relationships between the constructs emerged from the empirical data. Eisenhardt (1989) describes: 'A close fit is important to build a good theory because it takes advantage of the new insights possible from the data and yields an empirically valid theory' (p. 541). Accordingly, in shaping the study's hypotheses, the researcher further refined the construct definitions based on the literature and the empirical data. This was done through constant comparison between data and constructs across multiple data sources and case sites, ultimately generating single, well-defined constructs, each distinguished from the others. In effect, the researcher attempted to establish construct validity, in line with the recommendations of Yin (2011), Yin (2003), Shanks (2002), and Eisenhardt (1989). Eisenhardt (1989) emphasized that no technique like factor analysis is available to collapse several indicators into a single construct measure, because the indicators may vary across cases (i.e. not all cases may have all measures); qualitative evidence in theory-building research is therefore difficult to collapse, and researchers are advised to rely on tables that summarize and tabulate the evidence underlying each construct. In line with this advice, the study used NVivo as its database management tool to manage the qualitative data.

After defining and building evidence for each of the study's constructs, the researcher verified that the emergent relationships between constructs fit the evidence in each case. The underlying logic is replication: a series of cases is treated as a series of experiments, with each case serving to support or reject the hypotheses. This process is in line with the recommendation of Yin (2003). Each case is analogous to an experiment, and multiple cases are analogous to multiple experiments; in replication logic, cases that confirm emergent relationships enhance confidence in the validity of those relationships, while cases that disconfirm the relationships can often lead to extending the theory (Eisenhardt, 1989). Eisenhardt further explained that the logic of defining and building evidence for constructs in theory building is just as in traditional, hypothesis-testing work. Overall, the researcher also took Eisenhardt's advice while formulating the study's hypotheses.

3.3.7 Enfolding Literature and Reaching Closure

This last step of the study's model building process brought the entire model building process to a close. Yin (2003) and Eisenhardt (1989) advise that researchers should stop adding cases when theoretical saturation is reached. Eisenhardt (1989) describes it as follows: 'reaching closure is similar to the idea of ending the revision of a manuscript when the incremental improvement in its quality is minimal... The final product of building theory from case studies may be concepts, a conceptual framework, or propositions or possibly mid-range theory or on the downside, the final product may be disappointing by simply replicating prior theory' (p. 545).

As part of this step, the researcher linked the findings of the study with extant literature by comparing the emergent concepts and relationships between constructs with the extant literature (enfolding literature). This involves asking what is similar, what is different, and why. Eisenhardt (1989) describes the importance of this last step: 'Overall, tying the emergent theory to existing literature enhances the internal validity, generalizability and theoretical level of theory building from case study research' (p. 545).

Adopting Eisenhardt's theory development process, the result was the study's proposed model (presented in Chapter 4, Figure 48).

3.3.8 Model Testing

Eisenhardt (1989) describes the strengths of theory building from cases as the likelihood of generating novel theory; the development of emergent theory with testable constructs, measurable items and hypotheses; and the likelihood that the resultant theory will be empirically valid, being consistent with empirical observation, since 'intimate interaction with actual evidence often produces theory which closely mirrors reality' (p. 547). Furthermore, reconciling evidence across cases, multiple sources of data and literature increases the likelihood of creative reframing into a new theoretical vision.

However, a research output needs to be evaluated against pre-defined criteria. Accordingly, Yin (2003) proposed quality criteria for qualitative case study research design, presented in Table 10 and discussed below.

Table 10: Criteria for Judging the Quality of Qualitative Case Study Research (Yin, 2003)

Tests | Case Study Tactic | Phase of research in which tactic occurs
Construct validity | Use multiple sources of evidence | Data collection
Construct validity | Establish chain of evidence | Data collection
Construct validity | Have key informants review draft case study report | Composition
Internal validity | Do pattern matching | Data analysis
External validity | Use replication logic in multiple case studies | Research design
Reliability | Use case study protocol | Data collection
Reliability | Develop case study database | Data collection

Construct validity is concerned with whether empirical data in multiple situations lead to the same conclusions, and it is improved by using multiple sources of evidence (Shanks, 2002; Yin, 1994). Accordingly, to maintain construct validity, the researcher used multiple sources of evidence, from personal interviews to focus group discussions and from archival evidence to observation, in order to triangulate the study's findings; established a chain of evidence describing how the conclusions were reached, supported by the case study protocols and case study database (the study used NVivo as its database management tool); and had the draft case study report reviewed by key informants of the study, namely nine e-learning experts of the institutions under study and the ICT directors of Addis Ababa University, Ethiopian Civil Service University, and Adama Science and Technology University. Their comments were incorporated in the final case study report.

The researcher used pattern matching as the qualitative data analysis technique in order to enhance internal validity. With regard to external validity, the researcher used replication logic in multiple case studies (nine higher education institutions adopting e-learning were purposefully selected until the saturation point was reached). Finally, reliability was addressed by using case study protocols for each of the multiple data sources (annexed in Appendices 1, 2, 3 and 4) and by developing a case study database with the aid of the qualitative data analysis software; these data sources remain readily available so that others can verify the results.

Even though the researcher attempted to meet the empirical data quality criteria proposed by Yin (2003), the intensive use of qualitative empirical evidence in case study research can result in a theory that is rich in detail but lacks the simplicity of an overall perspective (Weber, 2012; Yin, 2003; Eisenhardt, 1989). Since qualitative findings lack quantitative gauges such as statistical regression techniques, it may be difficult to assess which relationships are the most important and which are simply idiosyncratic to a particular case, so the resulting theory may lack one of the prime quality criteria of a good theory: parsimony (Eisenhardt, 1989).

Eisenhardt (1989) acknowledges this weakness of theory development from case study research: 'Some of the features that lead to strengths in theory building from case study research can lead to weaknesses... For example, the intensive use of empirical evidence can yield theory which is overly complex... A hallmark of good theory is parsimony.' (p. 547). Furthermore, Bhattacherjee (2012) emphasizes that 'Parsimony prevents scientists from pursuing overly complex or outlandish theories with endless number of concepts and relationships that may explain everything but nothing in particular.' (p. 8).

To address this weakness of building theory from case study research, the researcher employed Structural Equation Modeling (discussed further in the following section) to further validate the proposed model and enhance the quality of the theory building process. Mueller & Urbach (2013) and Weber (2012) also advise that a good theory should be clearly presented to foster empirical tests and should demonstrate a sufficient balance between being too narrow and too general in its coverage.
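As a brief illustration of what this validation step looks like computationally, the following minimal sketch uses the open-source semopy package in Python (not necessarily the software used in this study). The latent constructs and item names are illustrative placeholders loosely based on the updated ISS model, not the study's final measurement model.

import pandas as pd
from semopy import Model, calc_stats

# The measurement model (=~) links each latent construct to its
# questionnaire items; the structural model (~) encodes the
# hypothesized causal paths between constructs.
MODEL_DESC = """
SystemQuality =~ sq1 + sq2 + sq3
UserSatisfaction =~ us1 + us2 + us3
NetBenefits =~ nb1 + nb2 + nb3
UserSatisfaction ~ SystemQuality
NetBenefits ~ UserSatisfaction
"""

data = pd.read_csv("survey_responses.csv")  # hypothetical cleaned item scores
model = Model(MODEL_DESC)
model.fit(data)
print(model.inspect())    # factor loadings, path estimates and p-values
print(calc_stats(model))  # model fit indices (e.g. CFI, RMSEA)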

Therefore, in order to validate the proposed model quantitatively using Structural Equation Modeling (SEM), the researcher developed questionnaire items based on the findings of the qualitative phase of the study and relevant literature (see Appendices 7a & 7b). The questionnaire items were measured on a five-point Likert scale: 1 = Strongly Disagree (SD), 2 = Disagree (D), 3 = Neutral, 4 = Agree (A), 5 = Strongly Agree (SA). To minimize data outliers and suspicious response patterns, two further options, 'Not Applicable' and 'Don't know', were added: the 'Not Applicable' option covers cases where the respondent feels the question does not apply to his or her situation, and the 'Don't know' option captures responses from respondents who do not hold a firm opinion on a specific question. Accordingly, two types of questionnaires (students' and instructors' questionnaires) were prepared to measure the constructs of the proposed model via the items developed in the questionnaires.
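When such responses are later prepared for statistical analysis, the 'Not Applicable' and 'Don't know' answers must not be treated as points on the 1-5 agreement scale. The following minimal sketch (Python/pandas, with a hypothetical file name and column naming convention) recodes them as missing values before analysis:

import pandas as pd

responses = pd.read_csv("student_questionnaire.csv")  # hypothetical raw data

# Map the five Likert labels to 1-5; any other answer ("Not Applicable",
# "Don't know") is left unmapped and therefore becomes missing (NaN).
scale = {
    "Strongly Disagree": 1, "Disagree": 2, "Neutral": 3,
    "Agree": 4, "Strongly Agree": 5,
}

item_columns = [c for c in responses.columns if c.startswith("item_")]
responses[item_columns] = responses[item_columns].apply(
    lambda col: col.map(scale))

print(responses[item_columns].isna().sum())  # missing responses per item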

In developing the study's questionnaire items for model testing, a pilot study is an important step before the actual data collection process (Neuman, 2005). According to Bryman (2012), in-depth personal interviews are a valuable way to pilot test and gain insight into data collection instruments, especially when the main study employs closed questions. To this end, the draft questionnaires were pilot tested qualitatively, via in-depth personal interviews, before being distributed to the study participants in the final model testing phase. The questionnaires were provided to thirteen instructors, fifteen students, nine e-learning experts, seventeen ICT staff, seven ICT directors and fourteen IT PhD students with substantial experience of e-learning systems in order to capture their feedback on the questionnaire items, and the pilot participants were interviewed about the overall contents of the questionnaires. Based on the pilot study results, some items were modified and others deleted to arrive at the final items of the two questionnaires (see Table 11 for a summary of the pilot study results; Appendices 7a & 7b for the lists of validated items, the construct measures they belong to, and the item codes assigned for ease of statistical analysis; and Appendices 8a & 8b for the final distributed questionnaires).

Table 11: Results of Pilot Study for the Questionnaire Items

Questionnaire type | Items in draft questionnaire | Modified items | Deleted items | Added items | Items in final questionnaire
Instructors        | 44                           | 10             | 3             | -           | 41
Students           | 46                           | 8              | 4             | -           | 42

Since more can be learned by measuring actual behaviors/scenarios with real systems than
intentions to use (Crossler et al., 2013), the proposed model was tested on actual users (students
and instructors) of the e-learning systems of the institutions under study. However, the study
period during which the data collection was conducted (February 2018 - April 2018) was
characterized by political instability in the country, resulting in frequent declarations of states of
emergency, which negatively influenced the application of e-learning systems in the teaching
and learning process due to frequent internet network interruptions in most of the institutions
under study. To this end, accessing actual e-learning users was a major challenge of the study.
To cope with this challenge, the researcher set similar inclusion criteria to those employed in the
first phase of the study, adopting a theoretical sampling method. Therefore, the study subjects
were purposefully selected taking into account that a student respondent (irrespective of the
respondent's discipline, year of study, or undergraduate/graduate level) needed to have been
exposed to at least one e-learning course (whether supportive to the main teaching and learning
process or purely online) during his or her study time at the University. Similarly, an instructor
respondent needed to have run at least one e-learning course while teaching in his/her respective
institution (see Table 12 for the overall list of study participants across the different research
phases). Furthermore, respondents who were involved in the first phase of the research were
excluded from the model testing phase in order to avoid any possible biases. This selection
criterion is similar to other e-learning systems success studies such as Lwoga (2014), Eom
(2011), Ozkan & Koseler (2009), Lin (2007), and Khan (2005).

Table 12: Overall Study Participants

Research phase | Type of study participants | Invited | Actual | Response rate
Qualitative study (first phase) | Students (focus group discussion participants) | 136 | 106 | 77.9%
Qualitative study (first phase) | Instructors (focus group discussion participants) | 151 | 90 | 59.6%
Qualitative study (first phase) | E-learning experts (focus group discussion participants) | 99 | 68 | 68.6%
Qualitative study (first phase) | ICT directors (in-depth personal interviewees) | 9 | 9 | 100%
Pilot study | Students | 21 | 15 | 71%
Pilot study | Instructors | 15 | 13 | 87%
Pilot study | IT PhD students | 16 | 14 | 87.5%
Pilot study | E-learning experts | 11 | 9 | 82%
Pilot study | ICT staff | 22 | 17 | 77.27%
Pilot study | ICT directors | 11 | 7 | 64%
Quantitative study (validation/second phase) | Students | 115 | 96 | 83.4%
Quantitative study (validation/second phase) | Instructors | 96 | 85 | 88.5%
Total number of study participants | | 702 | 529 | 75.3%

3.3.8.1 Model Testing Quantitative Data Analysis tools

Statistical Package for the Social Sciences (SPSS) version 20 was used to code the collected
questionnaire data and perform descriptive statistics. Structural Equation Modeling (SEM)
employing SmartPLS version 3.0 software was used to validate the proposed model in terms of
both structural and measurement model tests. SEM-PLS is regarded as a powerful modeling
approach that offers great benefits to studies emphasizing model testing by providing a high
degree of computational efficiency (Hair et al., 2012). Similarly, according to Awang et al.
(2015): "SEM-PLS manages to execute data analysis using small sample size (less than 100)
rather than Covariance Based-SEM method which requires a minimum of 100" (p. 60).
Furthermore, Urbach and Ahlemann (2010) and Boulesteix & Strimmer (2007) identified three
advantages for researchers using SEM-PLS:

1. Efficient in handling a large number of indicators per construct.

2. High computational and statistical efficiency.

3. Works effectively and efficiently even with small sample sizes.

Moreover, Hair et al. (2012) reviewed 204 high-impact scientific journal papers employing
SmartPLS and classified them into two categories:

1. To handle non-normal data (102 studies, which is about 50% of the total reviewed
journal papers)

2. To handle small sample sizes (94 studies, which is about 46.8% of the total reviewed
journal papers)

Hsu et al. (2006) also reviewed several journal papers which employed SEM-PLS with different
sample sizes. As a result, they found that SEM-PLS is statistically efficient irrespective of the
sample size of a given study.

Overall, SEM-PLS is best suited to test an emergent theory or an extension of an existing theory;
to test non-parametric distributions; for studies with small sample sizes; when the analytical
focus is on identifying relationships between constructs; when the number of indicators per
construct is one or more; as a method for dimension reduction (similar to principal components
analysis, where the aim is to condense the measures so that they adequately cover a conceptual
variable's salient features); for handling complex models with high statistical efficiency; and
when the focus is on prediction and generalization (Hair et al., 2017; Shmueli et al., 2016; Hair
et al., 2013; Hair et al., 2012; Hair et al., 2011; Urbach and Ahlemann, 2010). Due to these
distinct benefits, Hair et al. (2011) referred to SEM-PLS as "a Silver Bullet".

Accordingly, the study employed SEM-PLS to validate the proposed model on two levels,
students and instructors, with a total sample size of one hundred eighty-one respondents, which
is above the minimum sample size requirement for SEM-PLS.

Several research scholars have identified two types of constructs: reflective and formative. These
construct types explain the relationship between an indicator and its associated latent construct
(Hair et al., 2013; Gefen et al., 2011; Urbach and Ahlemann, 2010; Straub et al., 2001). The
classification of a construct depends on the researcher's theoretical expectation and the
construct's definition (Mackenzie et al., 2011). In this study, the proposed model's constructs are
assumed to cause variance in their respective indicators and are therefore modeled as reflective
rather than formative. This is in line with the recommendations of Henseler et al. (2016), Hair et
al. (2013), Urbach & Ahlemann (2010), and Cenfetelli & Bassellier (2009).

According to Hair et al. (2013) and Urbach & Ahlemann (2010), the validation process in
SmartPLS involves the two components of Structural Equation Modeling (SEM): the
measurement model and the structural model. The measurement model assesses the relationships
between the constructs and their respective indicator variables. Table 13 describes the
measurement model tests and the respective threshold values used while validating the study's
proposed model.

Table 13: SEM-PLS Measurement Model Test Threshold Values

Unidimensionality
Definition: the extent to which a measurement item relates to its associated latent construct better than to any other construct to which it does not belong (Hair et al., 2011; Urbach and Ahlemann, 2010; Awang, 2010; Henseler et al., 2009).
Threshold: an item loading is usually considered high if the loading coefficient is above 0.6; in order to ensure unidimensionality of a measurement model, any item with a low factor loading (less than 0.6) should be deleted (Asyraf & Afthanorhan, 2013; Awang et al., 2010; Urbach and Ahlemann, 2010; Henseler et al., 2009).

Internal Consistency (Construct Reliability)
Definition: reliability refers to the dependability or consistency of measurement; Cronbach's alpha and composite reliability are the most common measures of internal consistency (Hair et al., 2012; Saunders et al., 2009; Nunally, 1994).
Threshold: the most recommended threshold value is greater than 0.7 for both Cronbach's alpha and composite reliability (Hair et al., 2011; Nunally, 1994).

Convergent Validity
Definition: the degree to which individual items reflecting a construct converge in comparison to items measuring different constructs (Urbach & Ahlemann, 2010).
Threshold: the most widely used criterion of convergent validity is the Average Variance Extracted (AVE), with a threshold value greater than 0.50 (Fornell & Larcker, 1981).

Discriminant Validity
Definition: measures the degree to which measures of different constructs differ from one another (Urbach & Ahlemann, 2010).
Threshold: a latent variable (LV) should share more variance with its assigned indicators than with any other latent variable. Accordingly, the AVE of each LV needs to be greater than the LV's highest squared correlation with any other LV (Urbach & Ahlemann, 2010; Fornell & Larcker, 1981).
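
As an illustration of how the Table 13 statistics relate to the underlying data, the following
minimal sketch (in Python; the loading values and item scores are hypothetical, not the study's
results) computes Cronbach's alpha from item scores, and composite reliability and AVE from
standardized indicator loadings, using the standard formulas (Fornell & Larcker, 1981; Nunally,
1994) rather than SmartPLS's internal routines:

import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

def composite_reliability(loadings: np.ndarray) -> float:
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    # where each error variance is 1 - loading^2 for standardized loadings.
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + (1 - loadings ** 2).sum())

def ave(loadings: np.ndarray) -> float:
    # AVE = mean of the squared standardized loadings.
    return (loadings ** 2).mean()

# Hypothetical standardized loadings for one reflective construct.
lam = np.array([0.82, 0.77, 0.69, 0.74])
print(f"CR  = {composite_reliability(lam):.3f}  (threshold > 0.70)")
print(f"AVE = {ave(lam):.3f}  (threshold > 0.50)")

# Hypothetical item scores for the same construct (100 respondents, 4 items).
rng = np.random.default_rng(0)
common = rng.normal(size=(100, 1))               # shared construct signal
items = pd.DataFrame(common + rng.normal(scale=0.6, size=(100, 4)))
print(f"alpha = {cronbach_alpha(items):.3f}  (threshold > 0.70)")

# Fornell-Larcker discriminant validity check: each construct's AVE must
# exceed its highest squared correlation with any other latent variable.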

On the other hand, the structural model assesses the relationships between the proposed model's
constructs. Table 14 describes the structural model tests and the recommended threshold values
used while validating the study's proposed model.

Table 14: SEM-PLS Structural Model Test Threshold Values

Predictive Power / Coefficient of Determination (R2)
Definition: attempts to measure the explained variance of an LV relative to its total variance.
Threshold: values of approximately 0.670 are considered substantial, values around 0.333 moderate, and values around 0.190 weak (Ringle, 2012; Urbach and Ahlemann, 2010).

Path Coefficients
Definition: explains the strength of relationships among latent constructs (Hair et al., 2013; Urbach and Ahlemann, 2010).
Threshold: three levels of cut-off were adopted to assess the strength of path coefficients: below 0.2 weak; between 0.2 and 0.5 moderate; and more than 0.5 strong (Ringle, 2012; Cohen, 1988).

Effect Size
Definition: measures whether an independent latent variable has a substantial impact on a dependent latent variable (Hair et al., 2013; Urbach and Ahlemann, 2010).
Threshold: measured by Cohen's f2 (Cohen, 1988). f2 values between 0.02 and 0.15, between 0.15 and 0.35, and greater than 0.35 signify that an exogenous latent variable has a small, medium, and large effect, respectively, on an endogenous latent variable (Gefen et al., 2000).

Predictive Relevance
Definition: measures the predictive relevance of a block of manifest variables (Hair et al., 2013).
Threshold: uses the Q2 statistic; the recommended threshold value is Q2 greater than 0 (Fornell & Cha, 1994).
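
As a small worked illustration of the effect size row in Table 14 (in Python; the R2 values are
hypothetical, not the study's results), Cohen's f2 is computed from the R2 of an endogenous
construct when a given predictor construct is included versus omitted:

def f_squared(r2_included: float, r2_excluded: float) -> float:
    # Cohen's f^2: change in explained variance when a predictor is omitted,
    # scaled by the unexplained variance of the full model.
    return (r2_included - r2_excluded) / (1 - r2_included)

# Hypothetical R^2 values with and without one exogenous construct.
f2 = f_squared(r2_included=0.55, r2_excluded=0.48)

for cutoff, label in [(0.35, "large"), (0.15, "medium"), (0.02, "small")]:
    if f2 >= cutoff:
        print(f"f^2 = {f2:.3f}: {label} effect")   # here: medium (~0.156)
        break
else:
    print(f"f^2 = {f2:.3f}: negligible effect")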

3.4 Chapter Summary

This chapter presented the research methodologies employed in this study. As described in
section 1.4, the study objectives are to identify factors that determine e-learning systems success
and place them in a model. Although there are quite a large number of studies on e-learning
systems success, considerable debates remain with regard to contextual and holistic issues,
resulting in a lack of consensus on what determines e-learning systems success across various
settings. Adopting a positivist research paradigm, Eisenhardt's theory building process was
followed to develop the study's proposed model. Accordingly, seed constructs were identified
from the updated ISS model in order to categorize relevant constructs as 'scaffolds', grounded in
extant literature, prior to entering the field. A qualitative case study research method was
adopted to further explore factors that determine e-learning systems success. Nine EHEIs were
selected based on a theoretical sampling method. A quantitative research method, employing
Structural Equation Modeling (SEM-PLS), was selected to validate or test the proposed model.
Overall, this chapter presented the research design, model exploration and extension methods,
data collection instruments and data analysis techniques related to the development and
validation of the proposed model. The model that emerged from the empirical data is the subject
of discussion in the next chapter.

Chapter Four
Qualitative Data Analysis and Findings
4.1 Introduction

This chapter presents the findings from the qualitative phase of the study. It describes factors that
determine e-learning systems success in EHEIs and places them in a holistic model for further
quantitative validation. Hypotheses were formulated based on the qualitative empirical findings,
grounded in extant literature, in order to further test the relationships between the
categories/constructs.

The chapter is organized into three sections: Section 4.2 presents the qualitative data analysis
and findings, Section 4.3 presents the study's hypotheses and Section 4.4 presents the proposed
model subject to further quantitative validation.

4.2. Data Presentation, Analysis and Findings

The qualitative data, which were collected through in-depth personal interviews, focus group
discussions, observation, and document reviews, were clustered into relevant concepts. Groups
of related concepts were first clustered into lower-level categories and then into higher-level
categories/themes in a manner that would answer the research questions of the study. This open
coding approach to analyzing the qualitative data is in line with the advice of Yin (2011),
Eisenhardt (1989) and Miles & Huberman (1984). NVivo Pro software version 11 was used as
the qualitative data analysis tool. The researcher re-read the transcribed notes, identified
concepts, and then grouped them into their respective categories. Definitions for the lower-level
categories were taken from the empirical data and relevant literature of the study. Texts which
belong to the definitions or concepts of the lower-level categories were clustered together under
their respective lower-level categories. Then, similar lower-level categories were further grouped
into higher-level categories. Similarly, these higher-level categories were defined from the
empirical data and relevant literature. Concepts were used as bases to define the lower- and
higher-level categories of the model as they emerged from the empirical data. This is also in line
with Lewis's (1970) recommendation for defining theoretical constructs. Furthermore, definitions
of the lower-level and higher-level categories were validated by a group of international e-
learning experts at an IEEE conference in Portugal and later published (see appendix 9).

References (as reported by NVivo) represent the frequency with which concepts/categories were
mentioned across the data sources, such as in-depth personal interview and focus group
discussion respondents and data collected through document reviews and observation notes.
References also indicate the relative importance of one category over other categories.
Accordingly, eight higher-level categories (constructs) and 30 lower-level categories were
identified. The result is a proposed e-learning systems success model subject to further model
testing.
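
For transparency about what these counts mean, the following minimal sketch (in Python with
pandas; the category names and source codes are illustrative, not the study's actual coding log)
shows how reference counts and data-source counts can be tallied from a flat log of coded
segments, mirroring the kind of matrix NVivo reports:

import pandas as pd

# Hypothetical coding log: one row per coded text segment.
coding = pd.DataFrame({
    "category": ["Technical support", "Motivation", "Technical support",
                 "E-learning policy", "Motivation", "Technical support"],
    "source":   ["AAU-SFG-I", "AAU-ICTD", "MU-ISFG-II",
                 "ECSU-EEFG-I", "AAU-SFG-I", "HRU-ADC1"],
})

summary = coding.groupby("category").agg(
    references=("source", "size"),       # total coded segments (references)
    data_sources=("source", "nunique"),  # distinct sources citing the category
)
print(summary.sort_values("references", ascending=False))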

4.2.1. Institutional Support Quality

Seven lower-level categories were identified from the empirical data: Technical support, Quality
assurance mechanism, Motivation, Computer training, Intellectual property protection,
E-learning policy and Availability of e-learning infrastructure. These lower-level categories
were further clustered into a higher-level category termed Institutional Support Quality. Table 15
presents a summary of the definitions of the lower-level categories and their respective
higher-level category, as well as representative logs taken from the in-depth personal interviews,
focus group discussions and other data sources of the study.

Table 15: Institutional Support Quality

Construct (higher-level category): Institutional Support Quality, defined as the overall support of the institution towards sustained implementation of e-learning systems (Khan, 2005 and empirical data). Its lower-level categories, representative logs and total numbers of similar logs follow.

Technical support: the level of support provided by e-learning/IT technical personnel to users of the e-learning system (Selim, 2007; Khan, 2005 and empirical data).
Representative log: "The University must avail technical staff in e-learning in order to provide us prompt assistance while we face technical and usability problems related with the use of the e-learning system."
Total number of similar logs: 117

Quality assurance mechanism: the availability and development of quality assurance parameters with respect to evaluating the e-learning system's success (Khan, 2005 and empirical data).
Representative log: "Periodic evaluations of the e-learning system implementation are vital to sustained usages and satisfaction of users of the system."
Total number of similar logs: 29

Motivation: financial and non-financial incentives provided to relevant e-learning system providers and users of the system (Khan, 2005 and empirical data).
Representative logs: "Because it is difficult to prepare e-learning contents, record video lecture, respond to online students' questions/chat, there must be financial and non-financial incentives be provided to instructors." "Technical staff and other relevant e-learning stakeholders must be provided some form of recognition as motivation in relation to their commitment while successfully implementing the system."
Total number of similar logs: 100

Computer training: the amount of specialized instruction and practice afforded to the learner to increase the learner's proficiency in utilizing the computer capability that is available (Bhuasiri et al., 2012; empirical data).
Representative log: "There must be regular computer/IT training in order to help us be more familiar with the system and understand the usefulness of e-learning that would motivate us to frequently use the system."
Total number of similar logs: 96

Intellectual property protection: the protection of intellectual outputs, such as the e-learning contents of instructors and other stakeholders, from unintended users (Selim, 2007; empirical data).
Representative logs: "The issue of protecting our e-learning contents or teaching materials (copy right issues) is critical." "Everything on the web is highly vulnerable to be copied and used publicly."
Total number of similar logs: 23

E-learning policy: setting overall guidance for e-learning procedures, implementation, control, and decision making (Khan, 2005).
Representative logs: "Unless or otherwise e-learning policy in the planning, preparation, implementation and other related aspects are not well articulated and communicated to all concerned e-learning stakeholders, it would be difficult to assure its smooth implementations." "The policy should clearly involve motivational elements to be provided (support to be provided to all instructors, students and related academic staff even through IT training); accountabilities and responsibilities; quality assurance mechanisms; intellectual property rights and so on."
Total number of similar logs: 102

Availability of adequate e-learning infrastructure: the availability of ready-made e-learning hardware, software, and other support infrastructure such as internet, electric power and other utilities to support e-learning systems implementation (Ozkan & Koseler, 2009; Khan, 2005; empirical data).
Representative log: "The e-learning system by itself does not have any value if all the required hardware and software resources are not well organized and ready to support the system."
Total number of similar logs: 187

This higher-level category received a total of 654 references within and across case analyses
from various data sources: several focus group discussions, in-depth personal interviews,
observational notes and document reviews. It is one of the most cited constructs among all the
factors identified as determining the success of e-learning systems (see Figure 18). Analysis of
its lower-level categories shows that a total of 187 references were made to E-learning
infrastructure from 27 data sources; Technical support received 117 references from 27 data
sources; E-learning policy received 102 references from 26 data sources; Motivation received
100 references from 25 data sources; Computer training received 96 references from 27 data
sources; Quality assurance mechanism received 29 references from 18 data sources; and
Intellectual property protection received 23 references from 14 data sources.

Fig.18 NVivo‘s Data Analysis Results of Institutional Support Quality (Source: NVivo‘s output, 2018)

4.2.1.1. Technical Support

This lower-level category is one of the most cited categories under institutional support quality,
with 117 references made from 27 data sources within and across cases (see Fig. 19).

Fig.19 NVivo‘s Data Analysis Results of Institutional Support Quality: Technical Support (Source:
NVivo‘s Output, 2018)

Prompt assistance from technical departments of the institutions under study was considered as
one of the success factors of e-learning systems. Students and instructors want prompt support

from the concerned technical department. Students and instructors with low computer
self-efficacy usually feel that use of the e-learning system is a difficult process. To this end, minor
technical/usability problems which could not be fixed on time by the technical department
negatively influenced the successful implementation of e-learning systems. Accordingly, several
discussion participants from the focus group students across the institutions under study
mentioned (i.e. represented by their respective codes/labels at the end of each of the students‘
quotations): The information technology department/services must always be responsive to provide
all the technical assistance with regard to the learning management systems and other e- learning
technologies available to help us (AMU-SFG-I-1). The University must be committed enough to
fulfill all the infrastructural and technical support whenever we faced problems while we use the
online system (from registration to learning to accessing our grades online) (ECSU-SFG-II-1). I do
not know even to whom to contact whenever I am faced with technical problems while I use the
system (HRU-SFG-I-1). The technical team needs to provide us all the solution while we use the LMS
and even when some hardware and software failures occur (PESC-SFG-I-). The IT department is
working only during office hours, when we faced such problems with use of the system or its
application; we do not usually get prompt response that really discouraged us to frequently use the
system (MU-SFG-I-1). From the students‘ focus group discussions, it can be understood that e-
learning system by itself is not an end to the system success but the availability and prompt
responsiveness of technical support unit is also vital to the success of e-learning systems. Prompt
technical support to students increases the likelihood of use of the system.

Similar views were forwarded by instructors from several types of instructors‘ focus group
discussions within and across-cases: The IT technical team (e-learning team) needs to provide
prompt support to instructors whenever they need technical assistance (PESC-ISFG-I-1).
Instructors want a responsive top management in all aspects of supports of e-learning user
requirements including a responsive IT service (MU-ISFG-II-1).Availability of dedicated
technical service personnel to provide prompt technical support to instructors while they develop
e-learning contents or running e-learning courses is critical to the success of the e-learning
system (AAU-ISFG-I-1). Strong communication of the IT department with students and
Instructors is vital for the positive perception and use of e-learning systems (AMU-ISFG-I-1).
The more the IT service is responsive, the more we will be satisfied with the system (ECSU-
ISFG-III-1).

In-depth personal interviews with ICT directors across-case analysis also show: Lack of adequate
numbers of technical personnel to promptly support users of the e-learning system is an ever
challenging problem to the success of e-learning systems (AMU-ICTD). There is high rate of
labor turnover in the ICT department due to unattractive salary to promptly assist students and
instructors on demand (PESC-ICTD).

Focus group e-learning experts also indicated: The e-learning systems must be available and
serviceable upon demand (ECSU-EEFG-I-2). Availability of dedicated technical service
personnel to assist instructors and students has paramount importance in the effective
implementation and realization of the benefits of e-learning systems (AAU-EEFG-II-4).

Document review sources (as represented by codes) also show: Improving prompt service from
all e-learning service providers, attracting and retaining e-learning experts are vital to
sustainably implement e-learning systems across the University (AMU-ADC1). Timely response
from all e-learning technical personnel to users‟ enquiries and immediate feedback when facing
problems in the e-learning system is vital to e-learning systems success (ECSU-ADC1).
Technical support staff-turnover is a threat to the e-learning operation and it is recommended to
develop systems for retaining a sufficient number of competent technical e-learning staff
members at the e-learning competence center of the University at all times (HRU-ADC1).

Summarizing all the data sources (document reviews, in-depth personal interviews with ICT
directors, and focus group discussions with students, instructors and e-learning experts) shows
that dedicated technical support from e-learning staff is essential for the smooth implementation
of e-learning systems. Top management needs to provide attractive salaries to retain a highly
qualified technical team that can provide prompt technical assistance to e-learning users in the
teaching and learning process.

4.2.1.2. E-learning Infrastructure

This lower-level category is the most cited factor under institutional support quality determining
the success of e-learning systems (see Figure 18). It received a total of 187 references from 27
data sources.

Fig. 20 NVivo‘s Data Analysis Results of Institutional Support Quality: Availability of E-Learning
Infrastructure (Source: NVivo‘s Output, 2018)

Successful implementation and sustained usage of e-learning systems requires dedicated
hardware and software so that the system can be used at any time and place. Without sufficient
e-learning infrastructure, the smooth implementation of e-learning systems would be out of reach.
To this end, students under the focus group discussions made the following remarks: it gives me
lots of pleasure to explore more about a given course through the University‟s Learning
management System (LMS). However, use of the system is not reliable due to electric power
interruption which discourages us from using the system (HRU-SFG-III-5). If there is no internet
network, we will be unable to submit our assignments on time or even contact our advisors
(AMU-SFG-II-1). The University should make available all the hardware and software issues to
use the system, if the system is to be used frequently (ECSU-SFG-IV-4). I remember that when
we were having exams online, a network fails and the exam was, finally, postponed which was
really annoying wishing that we had the exam in the traditional exam delivery modality (ASTU-
SFG-III-5). Me too…I remember that electric power was interrupted and we also had the same

problem in the middle of chatting with our instructors and also when we download relevant
learning materials , electric power was gone ,we did it again which consumes our time and
become discouraged from frequently using the system (ASTU-SFG-III-2).

Students also mentioned problem of computer access: We do not have sufficient numbers of
computers (PCs) in the computer lab (PESC-SFG-I-4). Even among the PCs in the University‟s
computer lab, some of them are completely not working or they do not have reliable internet
connection (HRU-SFG-II-5). Most of our friends do not have laptops, Tablets and Smart mobile
phones and they become always feel isolated for not having these electronic devices whenever
the computer lab is closed (AMU-SFG-I-6).

Similarly, instructors under the focus group discussions mentioned: Lack of adequate e-learning
lab unit and dedicated IT technicians make e-learning difficult to be sustainably used (ASTU-
ISFG-III-4). Uninterrupted internet connection, guaranteed supply of electric power, availability
of computers to students and instructors are all important factors for the system to be useful to
be used so often ((PESC-ISFG-I1-3). The e-learning system by itself does not have any value if
all the required hardware and software resources are not readily available to be used at any
time (JU-ISFG-I-2).

Focus group e-learning experts forwarded similar remarks: The availability of Content
development center and e-lecture halls are vital for the use of e-learning systems for teaching
(JU-EEFG-I-1). The availability of e-learning technological infrastructure and making it
available to students and instructors has a positive impact in the sustainability of the systems use
among students and instructors (MU-EEFG-II-2). Management of server, ICT Pools and
adjustment of class schedules are also the most recurrent problems as ICT pools are not under
the direct authority line of the e-learning center of the University (ECSU-EEFG-I-1).

In-depth personal interviews with ICT directors also indicated: assume that students are
downloading their assignments and network fails consistently with poor internet connection, in
this case students will be in a difficult position to understand the usefulness of e-learning system
in spite of all the attractive interfaces and well-designed learning management system available
in the e-learning systems modality (PESC-ICTD). E-learning support resources such as ease of
access to computers lab and internet access would make the e-learning systems valuable among

its users (ECSU-ICTD). I remember guests from one of our foreign affiliated universities were
suppose to visit our e-learning status and I was really worried in case electric power fails before
we demonstrate the system which could have damaged the reputation of the University (ASTU-
ICTD). The LMS must always be updated with recent versions (JU-ICTD).

Document reviews (archival evidence) and observation notes also show that the limited ICT
pools, which are even centralized (5 ICT pools with nearly 80 PCs for about 17,000 students),
are too few to support the e-learning systems even at one of the pioneer institutions in Ethiopia,
Adama Science and Technology University (ASTU-ADC1).

The overall data analysis results from all the data sources show that even though there is great
ambition to implement e-learning systems in the institutions under study, the readiness of all the
required e-learning infrastructure, such as computers and other related e-learning technologies,
customized software, electric power, and other support infrastructure, is the most cited
determinant factor for e-learning systems success.

4.2.1.3. E-learning Policy


This lower-level category is also one of the most cited factors receiving 102 references from 26
data sources (see Figure 21).

Fig. 21 NVivo‘s Data Analysis Results of Institutional Support Quality: E-Learning Policy (Source:
NVivo‘s Output, 2018)

The availability of an e-learning policy plays a significant role in enforcing rules and regulations
towards integrating e-learning courses with the overall course curriculum. It also promotes
efficient coordination among various e-learning units. This was evident from the interview logs,

focus group discussions and texts from the archival evidences and observation notes. Students
under the focus group discussions made the following remarks: Learning via e-learning system
is optional not mandatory (ECSU-SFG-IV-2). Our instructors do not usually encourage us to
use the LMS (HRU-SFG-II-3).

Similarly, instructors under the focus group discussions mentioned: The policy should clearly
involve incentives to be provided in financial or non-financial terms; accountabilities and
responsibilities; quality assurance mechanisms; Intellectual property rights and so on (AAU-
ISFG-I-5). E-learning policy that clearly shows its implementation guidelines plays great role in
sustainably using the system by all e-learning stakeholders; in effect, everyone would be
dedicated to enhance the academic performance of the students via e-learning (ECSU-ISFG-I-
1).Getting the e-learning technology or system in to the teaching and learning process is very
important that should be part of any University‟s e-learning policy which can be termed as one
of the variable of top management commitment by making it more institutionalized (HRU-ISFG-
III-2).

Focus group e-learning experts forwarded similar views: Unless or otherwise; e-learning policy
on the implementation aspect is not well articulated through policy, it would be difficult to
assure its smooth implementations (ASTU-EEFG-I-2). Depending on the course, e-learning must
be included in the course curriculums (ECSU-EEFG-III-1). E-learning must be designed in a
more supportive system with other educational and IT services because e-learning is a teaching
and learning support tool (HRU-EEFG-I-3). E-learning cannot stand alone because it requires
support from other academic and administrative departments (BDU-EEFG-I-2).

In-depth personal interviews with ICT directors also indicate: The required budget is there,
resources are there. So, we do not think that it is lack of financial issues but the top management
need to be committed enough to allocate the budget for the effective and efficient implementation
of e-learning systems (AAU-ICTD). The concepts of BPR and Kaisen as ways of improving
efficient business processes were relatively successful in most of Ethiopian public universities
due to top management commitments of the universities…The same case needs to be
implemented by University management to sustainably implement e-learning across the teaching
and learning process of the University (AMU-ICTD). Universities should enforce rules and
regulations to integrate e-learning in the teaching and learning process depending on the course

to sustainably implement e-learning (JU-ICTD). E-learning, just like mobile banking in the
banking sector, is another business model in higher education sector which requires strong
institutional support backed up by e-learning policy (BDU-ICTD).

Our document reviews from the concerned offices of the institutions under study also show that
some of the institutions have developed an e-learning policy and others are in the process of
developing one, but we could not come across an EHEI with an operational e-learning policy to
support the sustained implementation of e-learning systems.

Our analysis results from all the relevant data sources show that developing and implementing an
e-learning policy, as an element of institutional support quality that structures the effective and
efficient implementation of e-learning systems, is one of the critical factors determining the
success of e-learning systems.

4.2.1.4. Motivation

This lower-level category is one of the most cited factors determining the success of e-learning
systems, receiving 100 references from 25 data sources (see Fig. 22).

Fig. 22 NVivo‘s Data Analysis Results of Institutional Support Quality: Motivation (Source:
NVivo‘s output, 2018)

Financial and non-financial incentives provided to instructors in order to get them committed to
the e-learning development and implementation processes play a vital role in the success of
e-learning systems. Other e-learning support staff also need to be motivated (i.e. financially or
non-financially) in order to create a sense of ownership among the e-learning stakeholders. This
is evidenced by 25 data sources and 100 references within and across cases. Several students from
the focus group discussions mentioned: Instructors do not usually motivate us to attend our courses
via e-learning systems (MU-SFG-II-2). Instructors do not have a positive attitude towards getting us use
the system (HRU-SFG-III-2). The IT technicians are not willing to support us while we ask them
for any technical assistance (ECSU-SFG-I-3).

Similarly, instructors under the focus group discussions made the same remark: availability of
motivational schemes for Subject Matter Experts (SMEs) who teach via the portal in a blended
learning approach or purely online is vital to the success of e-learning systems (AAU-ISFG-I-2).
If there is no incentive for e-learning instructors, the degree of success of e-learning systems will
be low (HRU-ISFG-II-4). The University needs to create motivational system for instructors
(either financially or reducing their required course loads) to enhance their sustained systems
usage (ECSU-ISFG-I-2).

Focus group e-learning experts forwarded similar views: Lecturers should be provided incentives
for their e-learning content development efforts (ECSU-EEFG-I-3). Competitive salary and
other benefit packages must also be in place for IT technical staff and e-learning experts while
supporting students and instructors during out of regular office hours... This is very important to
get them dedicated to the implementation of e-learning systems success without any limitations
as there is high labor turnover when it comes to the technical staff involved in supporting the e-
learning system (AAU-EEFG-I-1).

In-depth personal interviews with ICT directors also indicated: The e-learning policy should
clearly involve incentives to be provided in financial or non-financial forms (AAU-ICTD). I do
understand that developing e-learning content would take lots of efforts and consumes
instructors teaching and research time that is why incentive is critical to bring about
commitment with the development effort of e-learning contents (JU-ICTD). Considering the
positive impact of motivation in e-learning systems success, the University‟s top management is
trying to introduce incentives for instructors while developing e-learning contents in teaching
their respective courses (ECSU-ICTD).

Our document reviews obtained from the concerned offices of the institutions under study also
indicate the importance of motivational schemes for relevant e-learning professionals involved in
the development and implementation of e-learning. There are several document proposals
forwarded by the concerned e-learning stakeholders (instructors, e-learning experts and others) to
the institutions‘ top management (i.e. EHEIs under study) to consider this critical factor that
determines the success of e-learning systems. We have found that PESC Information Systems
College set an incentive system to instructors involved in e-learning course developments being
paid some lump-sum amount and a monthly specified amount while running the e-learning
course. According to a report by the e-learning center of the college, this motivational scheme
has proven to be one of the determinant factors enabling the smooth implementation of e-learning
systems, expressed in terms of sustained usage of and instructors' satisfaction with the
LMS while delivering their respective courses (PESC-ADC1). Similarly, Adama Science and
Technology University‘s e-learning center‘s quarterly report shows that motivation (financial
and non-financial terms) to instructors is one of the determinant factors towards the successful
implementation of e-learning systems success (ASTU-ADC1). The same incentive plan is
planned to be introduced at ECSU to instructors supporting their courses via e-learning systems;
however, the policy document is not yet approved for implementation (ECSU-EDC1).

Data analysis results from within and across cases show that introducing motivational schemes
plays a significant role in the success of e-learning systems.

4.2.1.5. Computer Training


This lower-level category is also one of the most cited determinant factors of e-learning
systems success, receiving 96 references from 27 data sources (see Fig. 23).

Fig. 23 NVivo‘s Data Analysis Results of Institutional Support Quality: Computer Training
(Source: NVivo‘s Output, 2018)

E-learning is dependent on a set of computer and IT literacy skills. This is evident from various
data sources from within and across cases. Students from the focus group discussions made the
following remarks: Most of us do not have strong computer knowledge to use the system; due to
this problem we tend not to appreciate the usefulness of the system (ECSU-SFG-III-1). Every
one of us wants to be familiar with variety of computer applications, especially those of us
coming from non-IT disciplines (HRU-SFG-I-3). There must be regular training in the use of e-
learning software for learning (MU-SFG-I-2).

Instructors under the focus group discussions mentioned: I have sent an e-mail to my colleagues
to upload their materials on Google drive and so that we can share whatsoever materials we
have in common but no one was able to respond...the reason was not because they were not
willing to share but it is because they were not familiar on how to upload and share through
Google drive and they kept silent…You can imagine how these instructors would perceive the
application of e-learning system with such low computer self-efficacy related issues….Therefore,
I recommend that there must be periodic University-wide faculty training on relevant e-learning
software programs (ASTU-ISFG-II-1). Some of us have techno phobias which makes e-learning
to be impractical…for this reason, there must be consistent training on computer and other e-
learning software technologies (ECSU-ISFG-I-3).

Focus group e-learning experts also made the following remarks: ICT training specific to the
application of e-learning systems is vital to both students and instructors (HRU-EEFG-I-1). The
University needs to provide computer training to students in their first semesters before use of e-
learning systems and so that use of e-learning systems would not be as such difficult (JU-EEFG-
I-1).

In-depth personal interviews with ICT directors also indicated: Trainings related to e-learning
policies and strong top management commitment and direction of the University to create a link
with the performances of the instructor in terms of integrating e-learning systems in teaching
would make the instructor to believe that e-learning systems is a useful teaching tool (JU-ICTD).
Learning and teaching culture via e-learning as well as lack of e-literacy and commitment is the
biggest threat for the implementation of e-learning in the university (AAU-ICTD).

Document reviews also show that the low level of IT skills and IT naivety requires creating
awareness and consistent training in e-learning; universities should not just invest in
technologies alone (HRU-ADC1). The University must also invest in ongoing staff professional
development, such as online technologies integrated with the course curriculum, and in training
and support services provided to students and instructors (ASTU-ADC1). Instructors have more
resources available through technology than ever before, but some have not yet received
sufficient training in the effective use of the technology (i.e. e-learning) to enhance learning;
improving the preparation of instructors in the use of technology and providing regular
awareness and training programs to close the perception and skill gap in using IT applications in
the teaching and learning process are determinant factors to the success of e-learning systems
(MU-ADC1).

Accordingly, the data analysis results from within and across cases indicate that providing
computer training is one of the most important factors to which EHEIs need to give due attention
in order to sustainably use e-learning systems in the teaching and learning process.

4.2.1.6. Quality Assurance Mechanisms

This lower level category received 29 references from 18 data sources (see Fig. 24).

Fig. 24 NVivo‘s Data Analysis Results of Institutional Support Quality: Quality Assurance Mechanisms
(Source: NVivo‘s Output, 2018)

As with any academic activity of higher education institutions, the performance of e-learning
systems in the teaching and learning process must be evaluated by comparing planned
performance against actual e-learning systems outcomes. Collecting feedback from users of the
e-learning systems about the effectiveness of the system on a periodic basis is vital in order to
take corrective actions for any deviations and, ultimately, maintain sustained usage of the
system. This has been evident from multiple data sources of the empirical findings of the study
as being one of the determinant factors of e-learning systems success. Students from the focus
group discussions made the following remarks: There must be an ongoing evaluation to measure
whether the implementation of e-learning systems has brought the desired benefits in relation to
its intended benefits (MU-SFG-II-1). Just like evaluation is done for instructors‟ performances
in the traditional teaching and learning modality, there must be a system to evaluate courses
conducted via e-learning systems (HRU-SFG-II-1). In our University, it is a good thing to have
e-learning systems but we do not see the University as evaluating the current status of the system
in relation to the intended e-learning systems outcome (ECSU-SFG-III-2). Improving the quality
of e-learning systems by continually measuring and evaluating the performances of e-learning

system is critical in order to sustainably use e-learning systems in the teaching and learning
process (AMU-SFG-I-2).

Similarly, instructors under the focus group discussions mentioned: Unless or otherwise e-
learning policy on the implementation aspect, serving as a standard of measurement and
evaluation, is not well developed and approved by the concerned office, it would be difficult to
assure its smooth implementations (HRU-ISFG-I-3). The e-learning policy should clearly state
quality assurance mechanisms, implementations, accountabilities and responsibilities by
organizing the responsible office (JU-ISFG-I-2).

Focus group e-learning experts also mentioned: Periodic evaluation of the degree of IT support
and training with regard to the e-learning system whether it achieves the desired objective is the
most significant factor for e-learning systems success (HRU-EEFG-II-4). Even no one asks
report about the progress of e-learning systems and its implementation….it is assumed that the
availability of the technology by itself is enough for the successful implementation and sustained
usages of e-learning systems….(MU-EEFG-I-1).

In-depth personal interviews with ICT directors also show: a committed University‟s top
management that follows-up and controls by developing appropriate e-learning policy as is the
case with the BPR implementation is vital to its continued use and success (AMU-ICTD).

Evidences from document reviews such as e-learning annual reports from the institutions under
study also shows: The availability of regular quality assurance mechanisms to evaluate the
effectiveness of e-learning for teaching and learning is useful to collect feedback about the status
of e-learning and take corrective actions that would make e-learning systems to be sustainably
used. These factors will have positive effect in achieving the goals of all e-learning stakeholders.
The bottle neck is lack of institutional leadership commitment (ASTU-EDC1).

The summary of the data analysis shows that measuring e-learning systems by setting quality
assurance mechanisms is vital to e-learning systems success, in order to maintain the quality of
e-learning, ensure it meets its desired benefits, and assure the realization of the investment
outlaid on the system.

4.2.1.7. Intellectual Property Protection

This lower level category received 23 references from 14 data sources (See Fig.25).

Fig. 25 NVivo‘s Data Analysis Results of Institutional Support Quality: Intellectual Property Protection
(Source: NVivo‘s output, 2018)

Developing teaching materials requires investing a lot of time and professional commitment. To
this end, instructors require their teaching materials to be protected from being copied and used
by unintended users. This is evidenced by various data sources. Instructors under the focus group
discussions made the following remarks: The issue of protecting our course lecture materials
(copy right issues) is critical… First of all, we are not motivated enough to develop and upload
our e-learning contents. Second of all, there is no system that could safeguard our digital
contents (i.e. having only passwords is not sufficient enough to protect our e-learning contents);
everything on the web is highly vulnerable to be copied (ASTU-ISFG-I-2). There must be a
system to protect our e-content materials (AAU-ISFG-I-2). It takes much time to prepare e-
learning lectures...but what if someone downloads it and uses it without permission; my
ownership of the e-course content would be at risk (ECSU-ISFG-III-2). The institution should
devise a clear system to protect our copyright issues with regard to our e-learning lecture
materials (HRU-ISFG-III-1).

Focus group e-learning experts also mentioned: availability of IPRs (Intellectual property rights)
of online courses designed by subject matter experts and maintaining copy right issues of the
instructor for his/her e-contents are vital issues for the sustained implementation of the e-
learning system (ASTU-EEFG-I-1). Protecting instructors‟ lecture materials and all other

facilities in terms of administration and technology issues must be fulfilled in advance (AAU-
EEFG-I-1).

Similarly, In-depth personal interviews with ICT directors show: Copy right issues of instructors
are one of the most important factors for instructors to get the system to be sustainably used for
teaching. The issue of ownership to e-learning contents needs to be paid due attention.
Instructors have risk concerns of losing their course materials while their teaching materials are
uploaded on the learning management system (AAU-ICTD). If instructors are not secured
enough with regard to the ownership title of their teaching materials, they tend to abstain
themselves from uploading materials that would have helped students understand the course
easily (BDU-ICTD).

Document reviews also show the significance of protecting instructors' e-learning contents,
either by making the LMS more secure regarding the use of the e-learning contents or by
providing incentives (such as transferring ownership titles to the University backed up by
financial incentives) to encourage instructors to upload their teaching materials without any
reservations (ASTU-ADC1; PESC-ADC2; and HRU-ADC1).

The summary of the overall data analysis shows that devising a system for protecting instructors'
teaching materials determines the success of e-learning systems. Together with quality assurance
mechanisms to control the system's implementation and performance, instructors' and e-learning
experts' motivation, developing and implementing an e-learning policy, availing e-learning
infrastructure, providing consistent technical support, and computer training, these are elements
that users of the system expect from the institution for successful implementation and sustained
system usage. Accordingly, the empirical data analysis shows that institutional support quality
(i.e. being a higher-level category) is one of the determinants of e-learning systems success.

4.2.2. User Satisfaction

The study analysis resulted in the generation of two lower-level categories: Enjoyable
Experience and Overall Satisfaction. These sub-categories were further clustered into a
higher-level category termed User Satisfaction. Table 16 presents a summary of the definitions
of the lower-level categories and their respective higher-level category, as well as representative
logs taken from in-depth personal interviews, focus group discussions, archival evidence and
observation notes.

Table 16: User Satisfaction Factor

Construct (higher-level category): User Satisfaction, defined as the overall satisfaction or likeability of an IS and its output (Petter & McLean, 2009; DeLone and McLean, 2003). Its lower-level categories, representative logs and total numbers of similar logs follow.

Enjoyable experience: the user's emotional experience of the system (DeLone & McLean, 2003; empirical data).
Representative log: "When we get our exam results and register anytime and anywhere even if we are away from the University, it is really an enjoyable experience."
Total number of similar logs: 365

Overall user satisfaction: the user's overall judgment of using the system in relation to overall expectations (DeLone and McLean, 2003; and empirical data).
Representative log: "Whenever we get prompt response from our instructors for questions and concerns we have over the course, get instant feedback for our exams, get all relevant learning materials as supported by diagrams, videos and other learning aids and interact with our instructors and also with other students via the LMS, it really gets us satisfied for using the system."
Total number of similar logs: 807

This higher-level category received a total of 1172 references within and across case analyses
from various data sources: several focus group discussions, in-depth personal interviews,
observation notes and document reviews. It is one of the most cited constructs/higher-level
categories among all the factors identified as determining the success of e-learning systems
(see Figure 26). Analysis of its lower-level categories shows that a total of 807 references (out of
the 1172 total references of the higher-level category) were made to Overall user satisfaction
from 26 data sources. The other lower-level category, Enjoyable experience, received 365
references from 15 data sources.

Fig. 26 NVivo‘s Data Analysis Results of User Satisfaction Determining E-learning Systems Success
(Source: NVivo‘s Output, 2018)

4.2.2.1. Enjoyable Experience


This lower level category received 365 references from 26 data sources (see Figure 27).

Fig. 27 NVivo‘s Data Analysis Results of User Satisfaction: Enjoyable Experience (Source: NVivo‘s
Output, 2018)

Users' feeling of enjoyment or positive emotional experience towards the use of the e-learning
system is vital to its continual use. This is evidenced from various data sources. Students
from the focus group discussions made the following remarks: The values of e-learning systems
are very great to broaden our adaptation to IT applications which will have positive impact to
our future career as well (HRU-SFG-III-2). Learning through this type of modern learning
method gives us lots of pleasure (ECSU-SFG-IV-2). When we got our exam results and register

anytime and anywhere even if we are away from the University, it really gets us enjoy using the
system (AMU-SFG-II-3). I remember one of our instructors was committed to get us online and
respond to our questions on time, post learning materials, assignments, and get us immediate
feedback over the Learning Management System (LMS)...That course was very interesting in
spite of problems such as poor internet connections and some of the students did not have even
laptops or smart phones and also adequate IT skills to use the system (PESC-SFG-II-3).

Instructors under the focus group discussions mentioned: Teaching through e-learning is a
modern way of accessing your students anytime from anywhere, which is a very enjoyable way of
teaching (PESC-ISFG-I-2). You do not need to travel and be in the office to consult your students...
it is a very enjoyable experience (MU-ISFG-I-2). Getting your exams marked instantly gives you
an enjoyable teaching experience, as marking exams consumes lots of instructors' time in the
traditional way of teaching (HRU-ISFG-II-1). Announcing course-related information to
students in an instant way gives you another enjoyable teaching experience (ECSU-ISFG-I-3).

Focus group e-learning experts/ICT experts also mentioned: When students use the e-learning
system by themselves after you give them some orientation and training, it gives you a sense of
enjoyment to help students acquire what they want via the LMS in relation to their
respective courses (AMU-EEFG-I-3).

In-depth personal interviews with ICT directors also showed: Students would be happy to use the
system as far as they find it useful and are satisfied with it for their learning, in a way that will
result in them continually using the system (AAU-ICTD). If the support service is good, both
instructors and students would be encouraged to use the e-learning system, thereby leading to
satisfaction (AMU-ICTD).

Document reviews similarly indicated: e-Instructors teaching via the e-learning portal have
nothing to bother with and nothing to carry (no piles of papers such as duplicated course outlines,
duplicated examinations and books), and nothing to worry about regarding the security of
examinations, the elapsed time for checking examinations, or taking attendance of students. All
such activities and many more are provided online, the LMS automatically corrects the
examinations, and instructors tend to enjoy the resulting free time, which determines the success
of e-learning systems (ASTU-EDC1; PESC-EDC1).

Summary of the data analysis from all the data sources shows that students' and instructors'
feeling of happiness or enjoyable experience with the use of the e-learning system drives them to
continually use the system. However, such an enjoyable experience is a function of various factors
such as consistent institutional support, instructors' and students' attitudes and motivation
towards using the system, e-learning systems quality and e-learning contents. Consequently,
it can be inferred that students' and instructors' enjoyable experiences determine the success of
e-learning systems.

4.2.2.2. Overall Satisfaction


This lower-level category received 807 references from 26 data sources (see Figure 28).

Fig. 28 NVivo‘s Data Analysis Results of User Satisfaction: Overall User Satisfaction (Source: NVivo‘s
output, 2018)

Students' overall satisfaction level determines the level of use of the e-learning system. This is
evidenced from all the data sources. Students from the focus group discussions mentioned: Some
of us are not comfortable asking in class due to shyness, and we do not ask our
instructors even if we didn't understand the course... In most cases, it appears in the exam, but
if we have an e-learning system which promotes discussion and live chat with our instructors for
concepts we didn't understand in class, we will be highly contented to continually use the system
(AMU-SFG-II-1). When we ask and get answers promptly from our instructors via the e-learning
systems, there is no question about getting us satisfied and continuing to use the system
(ECSU-SFG-IV-2). If we get all the e-learning contents including assignments, exams,
registration and all e-learning features in place, we would be much happier and
satisfied with the use of the system (ASTU-SFG-III-2). If instructors and technical personnel
support us whenever we encounter problems, we will be encouraged to use the system and its
usefulness will get us satisfied to continually use it (PESC-SFG-III-1). I remember that when
we were having exams online, the network failed and the exam was interrupted; as a result we
took the exam again, which was really annoying, and we cursed the system wishing that we had
taken the exam in the traditional modality (MU-SFG-II-3). Our instructors do not usually give us
prompt feedback on our assignments or tests, but through e-learning we can immediately know our
result, which has great value in knowing the feedback on time so that we can learn from our
mistakes for better academic performance (AMU-SFG-I-1). If it delays in responding to our
requests, like in downloading and accessing learning materials, it would be discouraging to use
and its importance would be diminished, as a result dissatisfying us from using the system further
(PESC-SFG-III-3). When instructors promptly respond and share their knowledge through various
ways such as online videos, online quizzes, additional learning materials and so on... at any time
from any place, it gives us lots of pleasure and satisfaction for continual use of the system
(ASTU-SFG-II).

Similarly, instructors under the focus group discussions mentioned: We need to have dedicated
IT equipment, IT support staff and an LMS to easily implement e-learning courses at any time,
which results in finding the system useful and being satisfied with it (MU-ISFG-III-3). We will use
the system more frequently if, by use of the system, we save our time, better understand the needs
of the students and reply to the students' requests in a timely manner; and we get satisfied with the
system with all its support and useful features (HRU-ISFG-II-4). All the e-learning
infrastructural issues must be ready to be used without any difficulties: making computer
labs (dedicated to e-learning) accessible to students and instructors, having all the facilities such
as video recording studios, setting standard e-learning templates for instructors, and recruiting
and assigning Instructional Designers to help instructors integrate the e-learning
contents from the pedagogical point of view are useful and satisfying factors for the successful
implementation of e-learning systems (ASTU-ISFG-III-7). The availability of high-speed internet
and a smooth network with high compatibility of the Learning Management System would
motivate us to feel that the system is useful in delivering our courses and ultimately to be satisfied
with it (PESC-ISFG-II-5). University support with respect to all other technical and
pedagogical issues is a satisfying factor apart from the technical features of the system (AAU-ISFG-I-4).

Focus group e-learning experts also mentioned the level of the overall satisfaction of students'
and instructors' experience with the system as a determinant factor for e-learning systems success:
There must be an attractive Learning Management System (LMS) interface that supports various
interactive teaching and learning modalities for the system to be useful in the perception of users
of e-learning (PESC-EEFG-I-5). If instructors are providing all the required course support to
their students through the e-learning system, students will feel satisfied with use of the system
(MU-EEFG-II-2). All the necessary institutional support must be in place in order for the
students to feel that the e-learning system is useful, driving them to be satisfied with it
(ECSU-EEFG-II-3). E-learning contents must be clear and easy to understand as per the course
syllabus; this will result in improving the performance of the users and enhance satisfaction
(ASTU-EEFG-II-1). There must be coordinated activities that make e-learning useful in a
way that would enhance users' satisfaction in continually using the system. Timely responsive IT
staff to help students interact with the system add value in enhancing the satisfaction level of
students and instructors (AMU-EEFG-II-1). Factors related to the availability of computer labs
whenever students want to use the system, positive attitudes of instructors and regular
instructor support to students to use the system determine the overall users' satisfaction with
the system (JU-EEFG-I-2). Availability of various plug-in features such as links to social
networking, the registrar, online grade access and access to IT resources would make students
satisfied with the system (PESC-EEFG-II-3).

In-depth personal interviews with ICT directors also show: Instructors' unreserved support to
their students is one of the satisfying factors for students and ultimately e-learning systems
success (JU-ICTD). The satisfaction of instructors and students with e-learning systems is a
function of institutional support quality, which can be expressed in terms of ICT staff support,
assigning the right ICT staff, making available all required ICT facilities, and training
instructors and students (AMU-ICTD). Once instructors are satisfied with the system, having all
the institutional support, technological support, training provided to instructors, and
pedagogical support with regard to the e-learning course, instructors will be in a better position to
continually use the system, as it will improve their teaching performance, which will ultimately
increase the academic performance of students, which might even be better than under the
traditional learning system (BDU-ICTD).

Document reviews from the institutions under the study also indicated: students and instructors
are dissatisfied by the level of support services, such as inadequate internet connection,
unresponsive instructors, inconsistent IT training and outdated e-learning contents (ASTU-EDC1).
Lack of instructors' motivation to use the system resulted in a low level of satisfaction and use of
e-learning systems to support the teaching and learning process (ECSU-EDC1).

Summary of the overall data analysis result shows that Enjoyable Experience and Overall User
Satisfaction with the system are a function of various factors such as institutional support, e-learning
systems quality, e-learning content quality, the learners' e-learning factor, and the instructors'
e-learning factor. User Satisfaction is dependent on these determinant factors. To this end, it can
be understood from the analysis that user satisfaction is one of the determinant factors of e-learning
systems success.

4.2.3. Systems Usage

The study analysis resulted in the generation of two lower-level categories: Frequency
of use and Dependability. These sub-categories were further conceptualized into a higher-level
category termed Systems Usage. Table 17 presents summaries of the definitions of the
lower-level categories and the respective higher-level category, along with representative logs from the
multiple data sources of the study.
Table 17: Systems Usage Factor

Construct (higher-level category):
Systems Usage: the degree of use of the e-learning system's output (Petter & McLean, 2009; DeLone and McLean, 2003; empirical data).

Lower-level category 1:
Frequency of use: the usage rate of the e-learning system (DeLone & McLean, 2003; empirical data).
Representative log: "If computers are available to use, all our course materials can be found online, we can take exams online and get instant feedback with our exam results, download learning materials instantly, and get our assignments online, e-learning would be used continuously as a result of satisfaction with the system."
Total number of similar logs: 888

Lower-level category 2:
Dependability: users' feeling of confidence to use the e-learning system as one of the prime teaching and learning modalities (Wang et al., 2007).
Representative logs: "The availability of high-speed internet and a smooth network with high compatibility of the learning management system, getting all the institutional support and having incentives depending on the use of the system would get instructors to be happy with the use of the system and continually use it." "Students will depend on the system for learning whenever the system is always available and responsive, and they get relevant e-learning contents, up-to-date learning materials and responsive technical staff upon service request while facing some difficulties with the system."
Total number of similar logs: 303
The Systems Usage higher-level category received a total of 1191 references from within and
across cases analysis from 30 various data sources. It is the most cited construct among all the
factors identified as determining the success of e-learning systems (see Figure 29).

Fig. 29 NVivo‘s Data Analysis Results of Systems Usage Factor as Determining E-learning
Systems Success (Source: NVivo‘s output, 2018)
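
Because several sections of this chapter compare constructs by how often they were cited, it may help to see that such rankings follow mechanically from the per-construct totals. The sketch below, illustrative in the same spirit as the earlier one, ranks the six higher-level categories using the totals reported in Sections 4.2.2 through 4.2.7 of this chapter.

    # Higher-level category totals as reported in this chapter (Sections 4.2.2 - 4.2.7).
    construct_totals = {
        "Systems Usage": 1191,
        "User Satisfaction": 1172,
        "E-learning Systems Quality": 304,
        "E-learning Systems Outcome": 243,
        "Learners' E-learning Factor": 168,
        "E-learning Content Quality": 118,
    }

    # Rank constructs from most to least cited.
    for construct, total in sorted(construct_totals.items(), key=lambda kv: kv[1], reverse=True):
        print(construct, total)

Sorting the totals in descending order puts Systems Usage (1191 references) first, consistent with the claim above that it is the most cited construct.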

4.2.3.1. Frequency of Use and Dependability

Analysis of the lower-level categories shows that a total of 888 references were made to
Frequency of use from 30 data sources and 303 references were made to Dependability from
14 different data sources (see Figure 30).

Fig. 30 NVivo‘s Data Analysis Results of Systems Usage: Frequency of Use and Dependability Lower-
Level Categories (Source: NVivo‘s Output, 2018)

Frequent use of e-learning systems by students and instructors in the teaching and learning
process determines the success of e-learning systems. The more users are satisfied with use of
the systems, the more frequently they use and depend on the system. This is evidenced from
various data sources of the study. Students from the focus group discussions made the following
remarks, indicating that both frequency of use and dependability on the system determine e-learning
systems success: We want our technical problems associated with the use of the system
to be solved in a prompt manner for continual use of the system (ECSU-SFG-IV-6). The
availability of useful system features which can facilitate our learning enhances our frequency
of use of the system (ASTU-SFG-III-3). If there is no network, we will be unable to submit our
assignments on time or even contact our advisors, which discourages our level of use of and
dependence on the system (AMU-SFG-II-2). The University should resolve all the hardware and
software issues if the system is intended to be used more frequently and depended on in the
teaching and learning process (HRU-SFG-III-6). If all instructors use e-learning while they
teach us, it will motivate us to use the system more frequently, but this is not the case in our
University (AMU-SFG-II-1). When we get what we want from the course over the LMS in a very
short period of time without any network and internet problems, it will make us satisfied to use
and depend on the system because it saves our time better than the normal class time; it also
improves our academic performance by saving our time compared to searching over other search
engines such as Google or physically accessing the learning materials from the library (as in
most cases, the books that are available in the library are few in number), which will drive
us to use the system more frequently (MU-SFG-II-4). Getting instant answers to questions we
posed to our instructors over the e-learning systems will also drive us to use the system in a more
frequent manner (ECSU-SFG-I-2). We will continually use e-learning systems with great
satisfaction as far as we have all the support from the instructors to use it (ASTU-SFG-III-4).
Whenever adequate computers are available to use, having all our course materials online,
taking exams online, downloading learning materials instantly, and getting our assignments
online would make e-learning be used continuously (AMU-SFG-I-3). Full access of the system to
learning materials related to our courses supported with text, diagrams, videos and other
learning aids are useful factors in order to continuously use the system (PESC-SFG-III-6).
Instructors under the focus group discussions mentioned: The issue of protecting our course
lecture materials (copyright issues) is critical for our frequent use of the system in updating our
teaching materials (HRU-ISFG-I-2). Ease of use of the system, a user-friendly LMS, and all the
other plug-in functions of the e-learning system make e-learning important enough to frequently
use it for teaching (ECSU-ISFG-III-2). Technical support to instructors, uninterrupted internet
connection, guaranteed supply of electric power, and availability of computers to students and
instructors are all important factors for the system to be frequently used and depended on (ASTU-ISFG-II-1).
Availability of responsive ICT staff for e-learning systems use, availability of labs for
instructors and students, incentives to instructors, compatibility of the LMS with many other devices,
availability of intellectual property rights over online courses designed by instructors, availability of an
e-studio, and regular ICT trainings for all incoming instructors all contribute to instructors'
satisfaction with the e-learning system and more frequent use of the system (HRU-ISFG-III-4). The
availability of all hardware and software support resources, providing relevant training programs,
institutional support, and an ICT support team in all areas of e-learning systems, including the usability
of the system, would make e-learning useful for both students and instructors for frequent use of
the system (JU-ISFG-I-2).

Focus group e-learning experts also indicated: For frequent use of the system, ICT staff and other
technical personnel should always be ready to help while students and instructors
encounter problems with use of the system (MU-EEFG-I-2). The quality of e-course content
delivered through the learning management system is critical to getting students to understand the
course online and is associated with course achievement, in order to get them to continually use it
(PESC-EEFG-II-2). The system must have valuable content concerning the defined needs of the
students for frequent use of the system (JU-EEFG-I-3). The course contents delivered through the
learning management system must include a feeling of personalization so that students will not
feel that they are alone. This will also greatly enhance the frequent use of the system (MU-EEFG-II-3).

In-depth personal interviews with ICT directors also mentioned: The LMS must always be
updated to recent versions in order to make it compatible with other devices, even with students'
smart phones, in a way that will motivate them to frequently use the system wherever they are;
students and instructors have problems adapting to computer use (AMU-ICTD). Lack
of technical support, insufficient availability of hardware and software including access to
high-bandwidth internet, and lack of an adequate e-learning lab unit and dedicated IT technicians
make e-learning difficult to implement sustainably and inhibit its frequent use among
faculty and students (JU-ICTD).

Document reviews also show that one of the great challenges for students in frequently using e-learning
is their low level of computer skills. This could be solved through consistent training
and by availing all IT resources that would support the smooth implementation and sustained
usage of e-learning systems. For instance, the e-learning platform provides students with various
alternative teaching materials for better conceptualization of theories and principles, which would
motivate students to use the system frequently (ASTU-EDC1). Furthermore, additional
materials such as e-journals, e-articles, e-books, videos, and audio on various topics of the
subject matter also enhanced the frequent use of the system. Students who are formally
registered for a course delivered via the e-Teaching portal can access all the possible teaching
materials 24/7 from anywhere within the premises of ASTU. In other words, they will not wait
until instructors give them a handout or wait in queues in the library to get books. All possible
hectic hurdles are eradicated, and they can even download such materials to external drives.
Whatever the student does can be seen and verified at any point in time, and there is no
room for instructors to reflect their subjectivity towards students, as tests, assignments and
examinations are there online. These factors contribute towards frequent use of the University's
e-learning system (ASTU-EDC1).

Summary of the analysis of the empirical findings shows that instructors and students would
frequently use e-learning systems and depend on them as far as they are satisfied with the level of
institutional support quality, e-learning systems quality, e-learning content quality, students' and
instructors' e-learning factors and their prior knowledge of computers. In general, systems
usage is one of the determinant factors of e-learning systems success, being dependent on the
overall satisfaction of students and instructors. The University's top management commitment, as
expressed in terms of availing all the required e-learning resources and consistent computer
training programs, together with ease of use of the LMS and its useful system features as it satisfies
students' and instructors' needs, all determine the degree of systems usage.

4.2.4. E-learning Systems Quality

This higher-level category is also one of the most cited factors determining students' and
instructors' satisfaction and sustained systems usage. The study analysis resulted in the
generation of seven lower-level categories: User friendly, Systems useful features, Systems ease of use,
Systems availability, Systems response, Secure and Systems interactivity. These sub-categories
were further clustered into a higher-level category termed E-learning systems
quality. Table 18 presents a summary of the definitions of the lower-level categories and their
respective higher-level category, as well as representative logs across multiple data sources of the
study.
Table 18: E-learning Systems Quality

Construct (higher-level category):
E-learning Systems Quality: refers to user judgment about the e-learning system's performance characteristics (Chiu et al., 2007; Khan, 2005; empirical data).

Lower-level category 1:
User friendly: a system feature that allows students and instructors to easily and freely interact as if they were in the traditional classroom setting (Selim, 2007; Khan, 2005; empirical data).
Representative log: "The course contents delivered through the learning management system must include a feeling of personalization."
Total number of similar logs: 23

Lower-level category 2:
Systems useful features: system features containing all the useful plug-in functions to enhance the usability of a system (Khan, 2005; empirical data).
Representative log: "Students want an e-learning system that provides online exams, discussion forums, available links with social media, calendars and so on, and this must always be in use whenever we want to use it, anytime from anywhere."
Total number of similar logs: 96

Lower-level category 3:
Systems availability: functionality of the system 24/7 whenever instructors and students want to interact or use it for their specific needs (Bhuasiri et al., 2012; Wang et al., 2007; Khan, 2005; empirical data).
Representative log: "The system needs to be available all the time without difficulties, even to run online classes at any time from anywhere."
Total number of similar logs: 45

Lower-level category 4:
Systems ease of use: the degree to which the user expects the e-learning system to be free of effort (Bhuasiri et al., 2012; empirical data).
Representative log: "Make the LMS easily usable and understandable by both students and instructors."
Total number of similar logs: 42

Lower-level category 5:
Systems response: the time that elapses from a user action until feedback is received from the system (Bhuasiri et al., 2012; empirical data).
Representative log: "We need to instantly download and also attend online lectures, and get instant responses to questions posed to our instructors, in order to use the system continually."
Total number of similar logs: 46

Lower-level category 6:
Systems interactivity: the degree or level to which a system allows for interaction (Bhuasiri et al., 2012).
Representative log: "Communicate online with others and interact in the discussion board of the e-learning systems and engage in group discussions in a way that would enhance my understanding of the course."
Total number of similar logs: 35

Lower-level category 7:
Secure: the characteristics of the system that protect the user's personal information and e-learning contents from unauthorized users (Khan, 2005; empirical data).
Representative logs: "There must be a system to protect our e-content materials. It takes much time to prepare e-learning lectures. But what if someone downloads and uses them? My ownership of the e-course content would be at risk." "The system needs to protect our grades and online submitted assignments."
Total number of similar logs: 17
This higher-level category received a total of 304 references from within and across cases analysis from
27 data sources of several focus group discussions, in-depth personal interviews, document reviews and
observation notes. As can be seen in Figures 31 and 32, analysis of its lower-level categories
shows that a total of 96 references were made to Systems useful features from 26 data sources; 46
references were made to Systems response from 19 data sources; 45 references were made to Systems
availability from 17 data sources; 42 references were made to Systems ease of use from 18 data sources;
35 references were made to Systems interactivity from 16 data sources; 23 references were made to User
friendly from 11 data sources; and 17 references were made to Secure from 10 data sources
(96 + 46 + 45 + 42 + 35 + 23 + 17 = 304).

Fig. 31 NVivo‘s Data Analysis Results of E-learning Systems Quality Factor as Determining E-learning
Systems Success (Source: NVivo‘s output, 2018)

4.2.4.1. User Friendly


This lower-level category received 23 references from 11 data sources from within and across
cases analysis (see Figure 32).

Fig. 32 NVivo‘s Data Analysis Results of E-learning Systems Quality Factor: User Friendly (Source:
NVivo‘s Output, 2018)

The user-friendly feature of e-learning systems is one of the determinant factors of e-learning
systems success. This is evident from various data sources. Students from the focus group
discussions made the following remarks indicating that the user-friendly feature of the system is one
of the determinant factors of e-learning systems success: We would see e-learning as useful if
it gives a personalized service to the student while interacting with his/her instructor and
classmates over the e-learning portal (PESC-SFG-IV-3).

Similarly, instructors under the focus group discussions also mentioned: A user-friendly LMS
motivates instructors for further use of the LMS as one of the teaching modalities, enhancing
instructors' satisfaction for frequent use of the system (HRU-ISFG-II-2).

Focus group e-learning experts also indicated: The course contents delivered through the
learning management system must include a feeling of personalization (AAU-EEFG-I-1). Poor
usability (navigation) of e-learning interfaces is one of the problems influencing the
success of e-learning systems (PESC-EEFG-I-4). If the interface of the e-learning system/LMS
is not user friendly, it will lead users towards frustration and a lack of interest in using the
system (MU-EEFG-II-3). When users are using the system smoothly with no difficulty and their
experience with use of the system is satisfactory, the continual use of the system, as a result of
their satisfaction with it, would be enhanced (JU-EEFG-I-1).

Similar views were forwarded from in-depth personal interviews with ICT directors of the
institutions under study.

4.2.4.2. Systems Useful Feature

This lower-level category received 96 references from 26 data sources from within and across
cases analysis (see Figure 33).

Fig. 33 NVivo‘s Data Analysis Results of E-learning Systems Quality: Systems Useful Feature (Source:
NVivo‘s output, 2018)

One of the determinant factors of e-learning systems quality is the set of useful system features that
students and instructors perceive it to possess as it meets their expectations. This is evidenced
from various data sources. Students from the focus group discussions mentioned: The LMS helps
me to get all the course materials and other academic announcements or events, such as
registration and exam dates and additional materials, in one place (i.e., in the LMS) rather than
physically going here and there to the library, or even whenever I missed important information
because I could not attend my class (MU-SFG-II-1). Had there been video lectures on my course
work, that would have made the system much more important to use all the time, saving my time and
enabling me to use it more often (ECSU-SFG-I-1). The e-learning system with its interactivity feature and
other links to social media makes you socialized, as it gives you the opportunity to discuss with a
larger group of people even if we do not know each other and to have access to a wide range of
ideas and opinions (AMU-SFG-I-3). Communicating online with others, interacting in the
discussion board of the e-learning systems and engaging in group discussions would enhance
my understanding of the course, and therefore I will frequently use it as a result of being
satisfied with the system (HRU-SFG-III-1). When it increases interaction between students and
instructors through the discussion board, chat room, and use of e-mails, these are all useful
features of the system determining the frequent use of the system as a result of my
satisfaction with it (PESC-SFG-I-2).

Similarly, instructors under the focus group discussions made the following remarks: When we
easily access our students for class announcements, take online quizzes and online examinations,
receive instant grade notifications, upload our teaching materials and assignments, conduct
live chats with our students, have discussion forums, and have social networking links, it enhances
our satisfaction with the system, which determines our continual use of the system (MU-ISFG-I-1).
Augmenting the course with various useful functions such as videos, audio, text, simulations, etc., and
providing continuous assessments, quizzes and sample exams via the e-learning system would
enhance our satisfaction with the system, which will ultimately determine our
continual systems usage (JU-ISFG-I-2).

Focus group e-learning experts also mentioned: The system integrates various useful
teaching tools; for example, in our University we have an e-Teaching portal that keeps
recording every activity undertaken by every formally registered student. This type of useful
system feature reduces instructors' hurdles in monitoring students' regular use of the system
(ASTU-EEFG-II-1). Who is doing what, from which location and at what time are all recorded,
and e-instructors have ready-made records for further analysis and decisions (AMU-EEFG-I-1).

In-depth personal interviews with ICT directors also indicated: Full access (24/7) to all learning
materials, with links provided to navigate through the learning management system, are useful
factors for the system's success (AMU-ICTD). Some system limitations, such as the learning
management system limiting the file sizes that can be uploaded, make it difficult to integrate
multimedia files; consequently, this negatively affects the systems usage of e-learning users (HRU-ICTD).

4.2.4.3. Ease of Use

This lower-level category received 42 references from 18 data sources from within and across
cases analysis (see Figure 34).

Fig. 34 NVivo‘s Data Analysis Results of E-learning Systems Quality: Systems Ease of Use (Source:
NVivo‘s output, 2018)

Ease of use of the system is regarded as one of the measures of e-learning systems quality. Users'
feeling of ease of use of the system motivates further use of the system. This has been
evidenced by various data sources. Students from the focus group discussions pointed out: If we
are able to easily access what we want from the LMS, such as videos, having online chats, taking
online exams, practicing with some sample exercise questions and so on, it will be very important
to use it all the time, even better than or preferred to the traditional classroom setting (AMU-SFG-I-2).
If the system is easy to use and has all the materials we need as per our instructors' course
contents, we would be lucky to have it (HRU-SFG-III-2).

Similarly, instructors under the focus group discussions mentioned: Poor usability (navigation)
of e-learning interfaces is one of the challenges to the success of e-learning; a learning
management system which is easy to use is always preferred to one with many
pages to go through to reach the desired page (ASTU-SFG-III-4).

Focus group e-learning experts also indicated: Making the learning management system
easily usable and understandable by both students and instructors is one of the critical factors
for frequent use of the system (PESC-EEFG-I-2).

In-depth personal interviews with ICT directors also pointed out: Ease of using the LMS, ease of
learning the LMS, having all the useful e-learning features available on the system, the
accuracy of the system in performing its intended e-learning features, and flexibility in the user
interface to easily adapt to instructors'/students' personal learning approaches are critical to its
success (MU-ICTD). An e-learning system that is clear, concise, free from confusion and easy to
understand, an e-learning content that is updated to reflect the ongoing course syllabus,
and an e-learning system that fulfills the quality of e-learning contents in terms of writing, images,
videos, style, grammar and knowledge... all these factors play a great role in the usefulness and
satisfaction of e-learning systems for students and instructors and in its successful
implementation University-wide as well (PESC-ICTD).

4.2.4.4. Systems Availability

This lower-level category received 45 references from 17 data sources from within and across
cases analysis (see Figure 35).

Fig. 35 NVivo‘s Data Analysis Results of E-learning Systems Quality: System Availability (Source:
NVivo‘s output, 2018)

The availability of the system whenever students and instructors want to use it
determines their satisfaction and the further use of the system. This is also evidenced through
various data sources. For instance, students from the focus group discussions pointed out: When
the system provides the platform to ask any questions or clarifications at any time and place and
get immediate feedback with a simple click and download, the usefulness of e-learning systems
becomes much more important for continual use of the system (AMU-SFG-I-3).

Similar views were also forwarded by instructors and e-learning experts in focus groups, and
by ICT directors of the institutions under study during in-depth personal interviews. Instructors from
focus group discussions mentioned: Full (24/7) access to all learning materials, with up-to-date
course materials, must be available on the system, which should always be functional (PESC-ISFG-I-1).

Similarly, in-depth personal interviews with ICT directors indicated: as far as the University
invested a lot towards the development of e-learning contents, it is the duty of the University‟s IT
directorate to make sure that e-learning contents are always available (BDU-ICTD).

4.2.4.5. Systems Response

This lower-level category received 46 references from 19 data sources from within and across
cases analysis (see Figure 36).

Fig. 36 NVivo‘s Data Analysis Results of E-learning Systems Quality: Systems Response (Source:
NVivo‘s output, 2018)

The e-learning system's response to the specified learning needs of instructors and students over the
LMS is vital for continuous use of and user satisfaction with the LMS. Instructors and students
want a system that is responsive to their specific requests in a manner that saves their time.
This is evidenced through various data sources. Students from the focus group discussions made
the following remarks: If the LMS delays in responding to our service requests while
downloading and accessing learning materials, it would be discouraging to use and its
importance would be severely diminished (ECSU-SFG-III-1). When the system keeps us online
for a long time during downloading, it requires us to sit more hours at the computer, which
consumes our time; you know we are regular students and our time is too precious, and
instructors give us lots of assignments; therefore, in this scenario we rather shift to the
face-to-face modality and prefer to go to the library to read books than access them through the
online system with such a time-consuming process (ASTU-SFG-I-2). When we ask and get answers
immediately over the e-learning systems, there is no question about the further use of the system
(HRU-SFG-II-1).

Instructors from focus group discussions also mentioned: The system needs to respond within a
reasonable response time, with a better internet connection (JU-ISFG-I-3). Loading easily within
less time and navigating across all the useful system features are critical factors for further
continual use of the system (HRU-ISFG-I-4). Immediate response to our requests about problems
encountered over the learning system and less complexity of the Learning Management System
increase our satisfaction and further continuous use of the system (ECSU-ISFG-III-1).

Remarks similar to those forwarded by the student and instructor focus groups were also made in
the e-learning experts' focus group discussions and in the in-depth personal interviews with ICT
directors of the institutions under study with regard to systems responsiveness as a determinant
factor of e-learning systems success.

4.2.4.6. Secure

This lower-level category received 17 references from 10 data sources from within and across
cases analysis (see Figure 37).

Fig. 37 NVivo's Data Analysis Results of E-learning Systems Quality: Secure (Source: NVivo's
output, 2018)

An e-learning system that ensures the protection of e-learning contents and of students' and
instructors' personal data from unintended users enhances the trustworthiness of the e-learning
system. As a result, students and instructors would continually use the system without the loss of
confidence that security risks may cause. This is evidenced by various data sources.

Instructors from focus group discussions pointed out: There must be a system to protect our
e-content materials; it takes much time to prepare e-learning lectures, but what if someone
downloads and uses them? (ASTU-ISFG-III-3). The ownership of instructors' course materials
developed for e-learning must be guaranteed, so that they are not used by anyone other than designated
users (ECSU-ISFG-II-1). The more the system secures instructors' course content and grading,
the more confident they feel about uploading e-learning materials to the LMS and using the
system (MU-ISFG-II-2).

Students from the focus group discussions also made similar remarks: The system needs to protect
our personal data related to our grades and online assessment results (ECSU-SFG-IV-2).

In-depth personal interviews with ICT directors of institutions under the study and e-learning
experts from focus group discussions also indicated that fear of losing instructors‘ course
materials if uploaded on the learning management system is one of the critical factors inhibiting
instructors from continually using the system.

Summary of the data analysis result shows that e-learning systems quality is multi-factorial. User
friendly, Systems useful features, Systems response, Systems availability, Systems ease of use,
Systems interactivity, and Secure are the most cited lower-level categories or variables of e-learning
systems quality. In general, e-learning systems quality is cited as one of the
determinants of e-learning systems success in terms of enhancing users' satisfaction and systems
usage.

4.2.5. E-learning Content Quality

This higher-level category is cited as one of the factors that determine students' and instructors'
continued systems usage and satisfaction. The study analysis resulted in the generation of three
lower-level categories: Course quality, Relevant content, and Up-to-date content. These categories were
further clustered into this higher-level category. Table 19 presents a summary of the definitions of
the lower-level categories and the respective higher-level category, as well as representative
sample logs.
Table 19: E-learning Content Quality

Construct (higher-level category):
E-learning content quality: refers to the overall quality of uploaded learning materials or available teaching aids as it meets learners' needs (Khan, 2005; empirical data).

Lower-level category 1:
Course quality: represents the quality of writing, images, video, or flash in meeting generally accepted standards of semantics, style, grammar, and knowledge (Bhuasiri et al., 2012; Ozkan & Koseler, 2009; empirical data).
Representative log: "Uploaded materials on the LMS provide students with various alternative teaching materials for better conceptualization of theories and principles, supported with video-recorded lectures, diagrams and animations."
Total number of similar logs: 43

Lower-level category 2:
Relevant content: the degree of congruence between what the learner wants or requires and what is provided by the information, course content, and services via the LMS (Bhuasiri et al., 2012; empirical data).
Representative log: "Access to previous relevant students' projects; additional materials such as e-journals, e-articles, e-books, videos, and audio on various topics of the subject matter."
Total number of similar logs: 52

Lower-level category 3:
Up-to-date: refers to the availability of up-to-date e-learning contents and updated links to important educational resources to meet the course objectives (Khan, 2005; empirical data).
Representative log: "In contrast to conventional teaching methodology, where instructors at times have no way to prepare and maintain teaching material, which leads to routines every semester, the e-Teaching portal enables e-Instructors to prepare, maintain, and retrieve standardized teaching material; they have the tendency to update their teaching materials with the passage of time, discarding outdated versions and including contemporary topics."
Total number of similar logs: 23
Accordingly, this higher-level category received a total of 118 references from within and across
cases analysis from 25 data sources (see Figure 38).

Fig. 38 NVivo‘s Qualitative Data Analysis Results of E-learning Content Quality (Source: NVivo‘s
output, 2018)

4.2.5.1. Relevant Content, Up-to- Date Content and Course Quality


As can be seen in Figure 39, analysis of the lower-level categories shows that a total of 43
references were made to Course quality from 22 data sources; 52 references were made to
Relevant content from 20 data sources; and 23 references were made to Up-to-date content
from 14 data sources.

Fig. 39 NVivo‘s Data Analysis Results of E-learning Content Quality Lower-Level Categories (Source:
NVivo‘s output, 2018)

The data analysis result shows that an e-learning system having all the relevant, up-to-date
content and meeting the course objectives of a given course is a critical factor for
systems usage and for enhancing user satisfaction. Students from various focus group
discussions made the following remarks: We would find e-learning systems useful when the e-learning
contents are available, containing all the course contents as per our instructors'
course outlines (HRU-SFG-I-2). Availability of relevant video lectures, text materials and
feedback in the e-learning system (MU-SFG-II-3). When we get additional external related and
updated links and resources through the e-learning systems, the system would be
dependable (PESC-SFG-IV-4). I suppose the most important things, apart from having relevant
materials and e-learning contents, are timeliness factors such as timely, up-to-date exercises and projects
uploaded and communicated to students (ECSU-SFG-III-1).

Instructors from focus group discussions also indicated: Availability of up-to-date material and
the needed information with regard to the course within the e-learning system is very important for
students as well as for instructors to ensure educational quality within the University (HRU-ISFG-II-2).

Similarly, e-learning experts from focus group discussions also pointed out: If the system
contains all the relevant lecture and teaching materials for students, its use will be more
consistent, enhancing students' satisfaction (PESC-EEFG-III-4).

In-depth personal interviews with ICT directors also indicated: the teaching materials in the
learning management system need to be as per the course description of the course curriculum
to ensure that the teaching materials are uploaded with relevant concepts (PESC-ICTD).

Document reviews also show these lower-level categories (i.e., course quality, relevant content
and up-to-date content on the LMS) as determinant factors of systems usage and user
satisfaction, revealing: The uploaded e-Course enables e-Instructors to always evaluate and
update their teaching materials, making them compatible with contemporary theories and practices in
the dynamism of real-life scenarios. In other words, e-Instructors become capable of cutting out
outdated topics, chapters and parts of the course and replacing them with timely versions (ASTU-EDC1).
In contrast to the traditional teaching methodology, where instructors at times have no
way to prepare and maintain teaching material, which leads to routines every semester, the e-Teaching
portal enables e-Instructors to prepare, maintain, and retrieve standardized teaching
material; they have the tendency to update their teaching materials with the passage of time,
discarding outdated versions and including contemporary topics (PESC-EDC1). Overall, the
document reviews indicated that full access of the system to learning materials related to the
course syllabus, supported with text, diagrams, videos and teaching aids such as assessments
and quizzes, promotes self-learning. The quality of e-course content delivered through the learning
management system is critical to getting students to understand the course online, resulting in overall
better course achievement.

Overall analysis results from all the data sources within and across cases analyses with regard to
e-learning contents indicated: availability of the needed information with regard to the course
syllabus within the e-learning system, an e-learning content that is clear, concise, and updated to
reflect the ongoing course syllabus, and an e-learning system that fulfills the quality of e-learning
contents in terms of writing, images, videos, style, grammar and knowledge are determinant
factors for e-learning systems success. In general, e-learning content which is designed in a
way that includes up-to-date learning materials and relevant content while maintaining course
quality plays a great role in the usefulness and satisfaction of e-learning systems for students and
instructors. Accordingly, e-learning content quality is cited as one of the determinant factors of
e-learning systems success.

4.2.6. E-learning Systems Outcome

The study analysis resulted in the generation of three lower-level categories: Time savings, Cost
savings, and Enhanced academic and teaching performance. These sub-categories were further
clustered into a higher-level category termed E-learning Systems Outcome. Table 20 presents a
summary of the definitions of the lower-level categories and the respective higher-level category,
as well as representative logs.

Table 20: E-learning Systems Outcome

Construct (higher-level category):
E-learning systems outcome: the overall benefit of the system to its ultimate users (Wang et al., 2007).

Lower-level category 1:
Time savings: the degree of efficiency of the system with respect to working, learning, and commuting hours (Bhuasiri et al., 2012; Eom, 2011; Ozkan & Koseler, 2009; empirical data).
Representative logs: "When instructors and students save their time while using e-learning systems in the teaching and learning process. When e-learning meets instructors' teaching goals and saves their lecture time as expected, the continual use of the system will be guaranteed." "How much time is spent in printing course-related materials as a student, an instructor, and as a University administrative staff member?"
Total number of similar logs: 96

Lower-level category 2:
Cost savings: the degree of cost savings in terms of frequent travel by students and instructors to classes and other institution-related costs (Bhuasiri et al., 2012; Holsapple and Lee-Post, 2006; DeLone & McLean, 2003; empirical data).
Representative logs: "How much is spent for paper and ink?" "Can you imagine the cost of printing mid examinations and final examinations at department, school and University level? What do you think is the amount spent on printing all course outlines in the curriculum of a department or school? How much money is spent per department on printing course outlines in a semester in the regular program? How much in a school per semester? How many people are engaged in doing so?"
Total number of similar logs: 56

Lower-level category 3:
Enhanced academic and teaching performance: the degree to which use of the system brings an increase in the academic or teaching performance of users of the system (Eom, 2011; Freeze et al., 2010; Holsapple and Lee-Post, 2006; Khan, 2005; empirical data).
Representative logs: "When it contributes to students' improvements in academic performance through better course quality, relevant and up-to-date e-learning content and a prompt response to their requests." "It improves our teaching performance with the opportunity to use various teaching aids while being online, such as sharing online links to additional materials and helping students with their specific needs at any time from anywhere."
Total number of similar logs: 91
Accordingly, as can be seen in Figure 40, this higher-level category received a total of 243
references from within and across cases analysis from 26 data sources of several focus group
discussions, in-depth personal interviews, document reviews and observation notes. Analysis of
its lower-level categories shows that a total of 96 references were made to Time savings
from 25 data sources; 91 references were made to Enhanced academic and teaching performances
from 25 data sources; and 56 references were made to Cost savings from 22 data sources.

Fig. 40 E-learning Systems Outcome as a Determinant Factor for E-learning Systems Success (Source:
NVivo's output, 2018)

4.2.6.1. Enhanced Academic or Teaching Performance, Time Savings and Cost Savings
These lower-level categories are cited as factors determining the e-learning systems outcome (see
Figure 41).

Fig.41 Nvivo‘s Qualitative Data Analysis Results of E-Learning Systems Outcome Lower Level
Categories (Source: NVivo‘s Output, 2018)

Students and instructors ultimately ask themselves why they use the e-learning
systems in the teaching and learning process. The more the system brings about
enhanced academic and teaching performances within less time, while saving paper and other
related costs of accessing learning materials, the more the system achieves its ultimate
design objective. This is evidenced through various data sources.

Students from focus group discussions indicated: When the system ultimately contributes to
improvements in my academic performance, I think all the factors mentioned, in terms of saving
our time to access learning materials online from anywhere at any time, determine the success of
the ultimate use of the system (ECSU-SFG-III-2). Sure, it is true that... if use of the e-learning
system brought about enhanced academic performance much better than that of the conventional
classroom set-up, the ultimate use of the system would be much more valuable in our learning process
(ASTU-SFG-II-1). Above all, getting better knowledge at any time from any place would result
in enhancing our understanding of the courses by use of the system (AMU-SFG-II-1). As far as
our instructors motivate us to attend a course through e-learning systems, there is a better
internet connection, and we can access journals and articles online, it will improve our academic
performance more than searching over Google or going to the library (as in most cases, the books that
are available in the library are limited in number compared to the number of students
demanding them) (MU-SFG-I-3). I think the ultimate outcome of the system is in providing
us better ways of understanding our courses in less time (efficiently and effectively), which
results in the overall success of the system (PESC-SFG-III-2). Most importantly, as far as the
system saves our time and the cost of printing handouts or even books, and whenever we gain
more knowledge through sharing with our instructors at any time from anywhere, such that
our course understanding becomes better than in the traditional classroom setting,
that is what we really want from the system (HRU-SFG-III-2).

E-learning experts from focus group discussions also indicated: When students register online
wherever they are, get exam results online, and access video lectures of their instructors at any time
even if they miss their classes, which will improve their academic performance, these will
ultimately serve the purpose of using the system (PESC-EEFG-I-3). If students can get all the
materials and ask questions, and if it saves time and cost to access learning materials and take exams
online, this has lots of advantages for students and will improve their course grades (ECSU-EEFG-I-2).

Similarly, in-depth personal interviews with ICT directors indicated: Whenever the academic
performance of students and the teaching performance of instructors are enhanced by use of the
e-learning system, the University's objective of ensuring quality education across the University
will be easily achieved (ECSU-ICTD). You can simply evaluate the e-learning systems outcome
by asking questions such as: How much money was spent per department on printing course
outlines in a semester in the regular program? How much in a school per semester? How
about in a semester at the University-wide level? What do you think is the amount spent on
printing all course outlines in the curriculum of a department, a school and all academic
departments overall? Can you imagine the cost of printing mid-examinations and final examinations at
department, school and University level? What would be the cost of duplicating course
outlines? How much is spent for paper and ink? How much time is spent in doing so? How many
people are engaged? Clearly, these factors determine the e-learning systems outcome (ASTU-ICTD).

Instructors from focus group discussions mentioned: If students are able to ask questions at any
time and get a prompt reply from their instructors, that would be very helpful to enhance their
course understanding/knowledge (AMU-ISFG-I-3). When use of the system contributes to
saving my teaching time, letting me be anywhere and access my students online at any time, that
determines the ultimate use of the system (HRU-ISFG-III-2). E-learning systems outcome needs
to be measured in terms of saving our lecture hours, accessing students from anywhere and at
any time and, ultimately, enhancing our teaching performance by use of various teaching aids
over the LMS (ECSU-ISFG-III-1).

Summary of the data analysis shows that this higher-level category, e-learning systems outcome,
is dependent on various factors. Its ultimate success is measured in terms of saving students' and
instructors' time and cost. Furthermore, the various teaching and learning aids it provides to
instructors and students, respectively, would enhance teaching performance for instructors and
understanding of the course for students. Accordingly, the empirical data show that
e-learning systems outcome is the ultimate measure of e-learning systems success.

4.2.7. Learners’ E-learning Factor

This higher-level category is regarded as one of the factors that determine students' and instructors'
continued systems usage and satisfaction. The study analysis resulted in the generation of two
lower-level categories: Computer self-efficacy and Attitude towards e-learning systems. These
categories were further clustered into this higher-level category. Table 21 presents a summary of
the definitions of the lower-level categories and their respective higher-level category.

Table 21: Learners' E-learning Factor

Construct (higher-level category):
Learners' E-learning Factor: represents students' behavioral response or attitude towards use of e-learning systems (Arbaugh, 2002).

Lower-level category 1:
Computer self-efficacy: conceptualized as one's personal judgment about his or her ability to use a computer to complete a specific task (Eachus & Cassidy, 2006; empirical data).
Representative log: "Instructors and students come from diverse disciplines with different prior computer experience. Some have computer skills and others do not. This results in problems of having the desired level of computer self-efficacy to further use the system as desired."
Total number of similar logs: 84

Lower-level category 2:
Attitude towards e-learning: refers to learners' impression towards participating in e-learning activities for their better learning needs (Bhuasiri et al., 2012; empirical data).
Representative log: "Students and instructors do not have a favorable attitude towards using the e-learning system in the teaching and learning process."
Total number of similar logs: 84

Accordingly, as can be seen in Figure 42, this higher level category received a total of 168 references from within- and across-case analysis of 30 data sources comprising several focus group discussions, in-depth personal interviews, document reviews and observation notes. Analysis of its lower-level categories shows that 84 references were made to Computer self-efficacy from 28 data sources and 84 references were made to Attitude towards e-learning from 29 data sources.

Fig. 42 Learners‘ E-learning Factor (Source: NVivo‘s output, 2018)

4.2.7.1. Computer Self-Efficacy


Instructors' and students' e-learning systems usage and satisfaction depend on various factors, such as prior computer experience associated with success, failure, some form of distressing experience, or other emotional states. This is evident from various data sources (see Figure 43).

Fig. 43 NVivo‘s Data Analysis Results of Learners‘ E-learning Factor: Computer Self-Efficacy (Source:
NVivo‘s Output, 2018)

Students from focus group discussions indicated: Most of us do not have strong computer knowledge to use the system effectively and due to this problem, we tend not to appreciate the usefulness of the system (PESC-SFG-IV-8). Some of our friends come from remote areas, do not have computers or laptops, and do not even have sufficient basic computer skills, which negatively affected the sustained usage of e-learning systems (MU-SFG-II-4). Most of us come from remote areas and registration was online…You can't imagine how difficult it was for us to register online, let alone fully utilize the LMS (PESC-SFG-II-5). Having good IT skills would motivate us to continually use e-learning systems from anywhere at any time (MU-SFG-II-2).

Similarly, instructors from focus group discussions mentioned: Both instructors' and students' confidence in their level of computer skills to use the e-learning system is one of the determinant factors for satisfaction and sustained systems usage (HRU-SFG-III-6). Instructors' previous computer experience (possessing the required computer skill) is one of the determinant factors to enhance systems usage in the teaching and learning process (MU-ISFG-II-3). The adoption and diffusion of IT, and more specifically of e-learning systems, among students and instructors is still at its infant stage (JU-ISFG-I-3). If students do not have basic computer skills to use e-learning systems, instructors would have to depend on the face-to-face teaching modality in order to efficiently use their allotted course credit hours for the given semester (ECSU-ISFG-III-3). One of the basic causes of low computer literacy among students and instructors is the fact that the Ethiopian educational system (i.e. at elementary and high school level) in most cases does not incorporate an adequate level of IT use (ECSU-ISFG-III-3).

E-learning experts from focus group discussions also pointed out: Instructors' and students' previous experience with computers is a useful factor for the sustained usage rate of the system in the teaching and learning process (HRU-EEFG-II-4). The computer interface and typing dexterity pose serious challenges that keep students with low computer skills from continually using e-learning systems (ASTU-EEFG-I-3).

In-depth personal interviews with ICT directors also show: Students and instructors have problems adapting to the University's LMS (BDU-ICTD).

4.2.7.2. Attitude Towards E-learning

Learners' perceptions of participating in e-learning affect their further systems usage and satisfaction. This is evident from various data sources (see Figure 44).

Fig. 44 NVivo‘s Data Analysis Results of Learners‘ E-learning Factor: Attitude Towards E-learning
(Source: NVivo‘s Output, 2018)

Students from focus group discussions made the following remarks: e-learning system requires
internet and computer knowledge which affects our perceptions towards e-learning systems
usage (HRU-SFG-III-5). The problem is instructors are not willing to teach through e-learning
and we are also giving less attention because it is not implemented at the University wide level
unlike the traditional teaching system (ECSU-SFG-IV-3). Consistent e-learning awareness
program is required to develop a favorable attitude towards e-learning systems usage (AMU-
SFG-II-4). We are in the information age and learning through e-learning systems will make us
more adapted to information technology and when we graduate, it will make us competitive
compared to other University graduates (ASTU-SFG-III-2). E-learning helps us to have a strong
knowledge about using the latest IT technologies and the internet; therefore, working on it makes
us satisfied (MU-SFG-II-2). The positive attitude of instructors towards the use of e-learning
systems is also one of the determinant factors for our sustained use and satisfaction of e-learning
systems (PESC-SFG-III-6).

Similarly, instructors from focus group discussions indicated: lack of e-literacy among
instructors has an impact on the perceptions of use of e-learning systems (PESC-ISFG-III-7).
Our low ICT skills have a negative impact on our use of e-learning systems (JU-ISFG-I-2).

E-learning experts from focus group discussions also indicated: Most of the instructors and students are technophobic, which makes e-learning impractical (HRU-EEFG-I-3).

In-depth personal interviews with ICT directors also show: E-learning based courses usually involve anxiety for both instructors and students while using e-learning systems (AMU-ICTD). Due to poor computer skills and lack of confidence arising from their low level of basic computer knowledge (i.e. in specific IT applications such as Moodle in our case), students' perceptions towards use of e-learning systems affect the continual use of e-learning systems (MU-ICTD).

Summary of our data analysis shows that learners' e-learning factor, which comprises two major variables (learners' computer self-efficacy and attitude towards e-learning), affects further use of and satisfaction with e-learning systems. Accordingly, learners' e-learning factor is identified as one of the determinant factors of e-learning systems success.

4.2.8. Instructors’ E-learning Factor

This higher level category is also found to be one of the determinants of instructors' continual systems usage and satisfaction. The study analysis resulted in the generation of four lower-level categories: computer self-efficacy, timely response, creating e-learning environment, and attitude towards e-learning systems. These sub-categories were further clustered into this higher level category. Table 22 presents a summary of the definitions of the lower-level categories, the higher level category, and representative logs from multiple data sources.

Table 22: Instructors‘ E-learning Factor

Higher level-category: Instructors' e-learning factor: represents instructors' behavioral response and attitude towards the use of e-learning systems (Lwoga, 2014; Bhuasiri et al., 2012; Selim, 2007; Liaw et al., 2007; Khan, 2005 and empirical data)

Lower-level category: Computer self-efficacy: refers to an instructor's personal judgment about his or her ability to use a computer to complete a specific task such as using e-learning systems (Eachus & Cassidy, 2006; empirical data)
Representative log: "For example I have sent an e-mail to my colleagues to upload their materials on Google drive so that we can share whatsoever materials we have in common but no one was able to respond … the reason was not because they were not willing to share but it was because they were not familiar on how to upload and share through Google drive and they kept silent … this low level of confidence to use their knowledge in a wider application such as e-learning systems for teaching is one of the great challenges for sustained systems usage."
Total number of similar logs: 67

Lower-level category: Attitude towards e-learning: refers to instructors' impression towards participating in e-learning activities (Bhuasiri et al., 2012; empirical data)
Representative log: "The problem is instructors are not willing to teach via e-learning systems and we are also giving less attention because it is not implemented at University wide level unlike the traditional teaching system."
Total number of similar logs: 95

Lower-level category: Creating e-learning environment: the degree to which the instructor creates a conducive e-learning environment (i.e. arranging discussion forums, interaction, online assessment and so on) for learners (Demirkan et al., 2010; Gunn, 2010; Panda and Mishra, 2007; empirical data)
Representative log: "Instructors need to motivate us to use discussion forums over the LMS so that we can share different course related concepts without the limitations of time and place."
Total number of similar logs: 67

Lower-level category: Timely response: the degree of responsiveness of instructors to students' learning progress via the LMS (Bhuasiri et al., 2012; Khan, 2005; empirical data)
Representative log: "When we ask and get answers immediately over the e-learning systems, there is no question about the value of its importance while getting prompt help on the topics which we were not clear in class."
Total number of similar logs: 50

Accordingly, as can be seen in Figure 45, this higher level category received a total of 279 references from within- and across-case analysis of 29 data sources comprising several focus group discussions, in-depth personal interviews, document reviews and observation notes. Analysis of its lower-level categories shows that 95 references were made to Attitude towards e-learning from 28 data sources, 67 references were made to Computer self-efficacy from 25 data sources, 67 references were made to Creating e-learning environment from 24 data sources, and 50 references were made to Timely response from 22 data sources.

Fig. 45 NVivo‘s Data Analysis Results of Instructors‘ E-learning Factor as Determining E-learning
Systems Success (Source: NVivo‘s Output, 2018)

4.2.8.1. Creating E-learning Environment

Instructors' commitment to creating a conducive environment for use of all the e-learning system's useful features, such as discussion forums, online quizzes, additional links on the LMS and related virtual activities, is vital for sustained systems usage, satisfaction, and the overall e-learning systems outcome. This is evident from various data sources (see Figure 46).

Fig. 46 NVivo‘s Data Analysis Results of Instructors‘ E-learning Factor: Creating an E-learning
Environment (Source: NVivo‘s Output, 2018)

Students from focus group discussions mentioned: Instructors should motivate us to use the
online resources and create a means of interaction (MU-SFG-III-4). The University must insist
all staff involved in e-learning to help us while we face technical problems with the use of the
system (HRU-SFG-III-5). When we get our instructors always willing to help while online, as far
as our instructors motivate us to learn the course through e-learning systems, and have access to
journals/ articles online, it will improve our performance saving our time rather than searching
over Google or going to the library (as in most cases, the books that are available in the library
are limited in number) (ECSU-SFG-IV-3). We want to use e-learning in all of our courses but
the problem is instructors are not willing to fully support us to deliver the course online (PESC-
SFG-IV-6).

Instructors from focus group discussions also indicated: Students would see e-learning being
useful if it gives personalized service for the student while interacting (getting instructors prompt
reply) with his/her instructor and classmates over the e-learning portal (ECSU-ISFG-III-2).
Providing continuous instructors‟ feedback to students through the e-learning portal is one of
the determinant factors of students‟ continual systems usage (HRU-ISFG-II-4). Students in an e-
learning modality require strong support from the institution and instructors as well (ASTU-
ISFG-II-1).

Similarly, e-learning experts also mentioned: In addition to getting technical support from the
concerned department of the college, if instructors are providing all the course support to their
students through the LMS, this will also critically determine the success of the system (PESC-
EEFG-III-2). E-learning requires time and commitment from the instructors‟ side while
preparing e-lecture materials and communicating with students (MU-EEFG-II-2).

In-depth personal interviews with ICT directors also indicated: If instructors support students
whenever problem persists, students will be encouraged to use the system and its usefulness will
drive them to continually use it and also they will become satisfied with the system (PESC-
ICTD).

4.2.8.2. Computer Self-Efficacy

Instructors' prior computer (IT) knowledge determines their level of e-learning systems use and satisfaction. This is evident from various data sources (see Figure 47).

Fig. 47 NVivo‘s Data Analysis Results of Instructors‘ E-learning Factor: Computer Self-Efficacy
(Source: NVivo‘s Output, 2018)

Instructors from several focus group discussions mentioned: Instructors‟ and Students‟
confidence in their computer skills is one of the determinant factors of e-learning systems usage
(MU-ISFG-II-4). E-learning requires internet and computer knowledge (PESC-ISFG-II-2).

E-learning experts from focus group discussion also forwarded the following remarks:
Instructors need to have adequate computer skills before using the system; otherwise, it will
negatively affect the success of the system (HRU-EEFG-II-3). Previous knowledge of IT and
Computer is one of the critical factors for the success of e-learning systems (ECSU-EEFG-I-3).

In-depth personal interviews with ICT directors also mentioned: Instructors' lack of appropriate computer/IT skills to use the system hinders the successful implementation of any e-learning initiative (JU-ICTD).

4.2.8.3. Attitude towards E-learning

Instructors' attitude towards use of e-learning systems to support the teaching and learning process determines e-learning systems usage and satisfaction. This is evident from various data sources (see Figure 48).

Fig. 48 NVivo‘s Data Analysis Results of Instructors‘ E-learning Factor: Attitude Towards E-learning
(Source: NVivo‘s Output, 2018)

Instructors from several focus group discussions mentioned: Some of us are technophobic, which makes e-learning impractical in most cases (JU-ISFG-I-3). Incentives for instructors, either in financial or non-financial forms, have a positive impact in developing a positive attitude towards the use of e-learning systems for teaching (MU-ISFG-II-1). There must be regular e-learning awareness and training programs on the use of e-learning, as the adoption and diffusion of IT among students and instructors is still at its infant stage in our University (JU-ISFG-I-3).

E-learning experts from several focus group discussions also indicated: Resistance to learn new
IT applications (such as e-learning system) is one of the critical problems of instructors to adopt
e-learning for teaching (HRU-EEFG-II-2).

Similarly, in-depth personal interviews with ICT directors also showed: Though instructors
showed a positive attitude towards e-learning systems indicating that they were more than
willing to adopt and use e-learning for teaching, their level of applying e-learning systems as
their teaching modality is still very low (MU-ICTD). I can see that while a favorable attitude
towards e-learning enables individuals to take initiatives to adopt it, removal of institutional
barriers to e-learning (such as lack of adequate e-learning infrastructure, lack of incentives and
other institutional barriers) becomes a necessary condition (ECSU-ICTD). Teaching culture,
resistance to change, awareness about e-learning are the most determinant factors of e-learning
systems for sustained usage among students and instructors (PESC-ICTD). Instructors‟
resistance towards adoption of e-learning systems in the teaching and learning process requires
University-wide strong change management (JU-ICTD).

4.2.8.4. Timely Response

The degree of instructors' timely response to students' requests determines students' level of systems usage and satisfaction. This is evident from various data sources (see Figure 49).

Fig. 49 NVivo‘s Data Analysis Results of Instructors‘ E-learning Factor: Timely Response (Source: NVivo‘s
Output, 2018)

Students from focus group discussions made the following remarks: In most cases, our instructors do not give us prompt feedback on our assignment and test results, but if they replied on time via the LMS, it would greatly help us correct our mistakes whenever there are any and, ultimately, improve our academic performance (MU-SFG-I-3). We want to use e-learning in all of our courses but the problem is instructors are not willing to deliver the course on a timely basis or even to properly use e-learning systems to support the teaching and learning process (AMU-SFG-II-2).

Instructors from focus group discussions also mentioned: Most instructors are not responsive to
assist students via the LMS (ECSU-ISFG-III-1). Had there been instructors prompt response towards
students‟ online educational service request, the use of the system and satisfaction of students for
further use of the system could have been tremendously improved (ASTU-ISFG-III-2).

Similarly, e-learning experts from focus group discussion also pointed out: apart from our technical
timely response, prompt feedback about students‟ course assessment results is one of the determinant
factors for systems usage and users satisfaction (BDU-EEFG-I-3).

In-depth personal interviews with ICT directors also showed: Instructors should provide timely responses to students so that students will be motivated to continually use the system, implying that the ultimate objective of e-learning systems success would be realized accordingly (PESC-ICTD).

In general, the summary of the data analysis results shows that instructors' computer self-efficacy, attitude towards e-learning, creating an e-learning environment, and timely response are determinant factors for students' and instructors' e-learning systems use and satisfaction. Accordingly, instructors' e-learning factor, comprising these lower-level categories, is identified as one of the determinants of e-learning systems success.

4.3 Shaping the Study’s Hypotheses

4.3.1 Institutional Support Quality

Institutional support in terms of access to computer labs, access to the internet, providing consistent e-learning training, motivating instructors to use e-learning systems for teaching, and encouraging students to continuously use e-learning systems for learning are all important factors that impact systems usage and satisfaction (Wu et al., 2010; Ozkan & Koseler, 2009; Lee, 2008; Chiu et al., 2007). The study's qualitative data analysis also shows that devising a system for protecting instructors' teaching materials, quality assurance mechanisms to control the implementation of e-learning systems, maintaining instructors' motivation, developing and implementing an e-learning policy, availing e-learning infrastructure, and providing consistent technical support and computer training are elements that users of the system expect from the institution while it implements e-learning systems. The degree of institutional support quality affects instructors' motivation, their attitudes towards e-learning systems, the creation of an e-learning environment, the development of up-to-date e-learning content, and the timeliness of responses to students' online requests. The quality of the e-learning system likewise depends on institutional support in terms of the availability and readiness of e-learning infrastructure and the degree of technical support. Accordingly, institutional support quality determines the continual use of e-learning systems and user satisfaction. Furthermore, institutional support quality affects instructors' e-learning factor, as determined by the level of computer training, the protection of their teaching materials, prompt technical support, and other institutional support factors. Similarly, institutional support quality affects e-learning content quality and e-learning systems quality. Hence, the study hypothesizes that:

H1: Institutional support quality has a direct effect on user satisfaction

H2: Institutional support quality has a direct effect on systems usage

H3: Institutional support quality has a direct effect on instructors‟ e-learning factor

H4: Institutional support quality has a direct effect on e-learning content quality

H5: Institutional support quality has a direct effect on e-learning systems quality

4.3.2. Learners’ E-learning Factor

Learners' perceptions about the usefulness of e-learning systems for learning play a vital role in the sustained use of, and satisfaction with, e-learning systems (Bhuasiri et al., 2012; Selim, 2007). Students' disposition towards the use of e-learning systems depends on various factors. The study identified students' computer self-efficacy and attitude towards e-learning as the major learners' e-learning factors affecting students' further use of e-learning systems and satisfaction. Hence, the study hypothesizes that:

H6: Learners‟ e-learning factor has direct effect on User satisfaction

H7: Learners‟ e-learning factor has direct effect on Systems usage

4.3.3. E-learning Systems Quality

E-learning system quality refers to instructors'/learners' judgment about the e-learning system's performance characteristics (Chiu et al., 2007) and is measured in terms of systems functionality, ease of use, interactivity, flexibility, systems responsiveness and security (Lwoga, 2014; Holsapple & Lee-Post, 2006; DeLone & McLean, 2003). Several studies show that e-learning system quality is directly related to user satisfaction and systems usage (Ozkan & Koseler, 2009; Lin, 2007; DeLone & McLean, 2003). The study's data analysis result also shows that e-learning systems quality is a function of many variables/lower-level categories such as systems user-friendliness, systems useful features, system response, systems availability, systems ease of use, system interactivity, and systems security. E-learning systems quality has been cited as one of the determinants of e-learning systems success in terms of enhancing users' satisfaction and systems usage. The qualitative analysis result also shows that the system's useful features affect e-learning content quality: availing useful plug-in features of the LMS to accommodate course flexibility, easing the updating of course contents, providing important resource links across various educational portals, and offering adequate data storage while uploading relevant files (i.e. without limiting the sizes of files uploaded to the LMS, a limitation that constrains e-learning content quality when instructors want to include teaching aids such as videos, audio, and animations/illustrations to further explain abstract concepts of a given course). The updated ISS model (DeLone and McLean, 2003) found that information quality and system quality directly affect systems use and user satisfaction. Several studies that adopted the updated ISS model, such as Seddon and Kiew (1996), DeLone and McLean (2004), McGill et al. (2003), Iivari (2005), Lin (2007), Eom (2011), Lwoga (2014), Tossy et al. (2017) and Yakubu & Dasuki (2018), treated system quality and information quality as exogenous constructs and posited no causal effect between them. However, some studies found a causal effect of systems quality on information quality. For example, Almutairi and Subramanian (2005) and Gorla et al. (2010) found that system quality significantly impacted information quality. Based on feedback obtained from e-learning experts during the qualitative phase of the study, and in line with studies such as Bhuasiri et al. (2012) and Ozkan & Koseler (2009), the constructs systems quality and information quality were renamed e-learning systems quality and e-learning content quality, respectively, to reflect the research context. Bhattacherjee (2012) also stressed that researchers need to adapt construct naming to accurately represent their research context rather than simply replicating existing construct names from extant literature. Hence, the study hypothesizes that:

H8: E-learning systems quality has direct effect on e-learning content quality

H9: E-learning systems quality has direct effect on user satisfaction

H10: E-learning systems quality has direct effect on systems usage

4.3.4. E-learning Content Quality

Various studies show that the quality of learning materials on the LMS, their timeliness, and their extent of relevance to the course all impact the learning process (Holsapple & Lee-Post, 2006). Ozkan and Koseler (2009), Sun et al. (2008), Arbaugh (2002) and McKinney et al. (2002) found that e-learning content quality has a direct impact on systems usage and user satisfaction. The study's analysis also shows that the availability of the needed information on the course syllabus via the e-learning system, e-learning content that is clear, concise, free from confusion and easy to understand, e-learning content that is updated to reflect the ongoing course syllabus, and an e-learning system that meets content quality standards in terms of writing, images, videos, style, grammar and knowledge all have a direct effect on user satisfaction and systems usage. Accordingly, the study hypothesizes that:

H11: E-learning content quality has direct effect on User satisfaction

H12: E-learning content quality has direct effect on Systems usage

4.3.5. Instructors’ E-learning Factor

The role of the instructor in the successful implementation and sustained usage of e-learning systems is significant (Sun et al., 2008; Selim, 2007). Instructors' attitude towards e-learning systems and how they encourage students to use e-learning systems affect the sustained usage of the system and the associated user satisfaction (Lwoga, 2014; Ozkan & Koseler, 2009). The qualitative analysis result also showed that instructors' e-learning factor, expressed in terms of computer self-efficacy, attitude towards e-learning systems, creating an e-learning environment, and timely response, affects systems usage and user satisfaction. Furthermore, the quality of the courses uploaded on the LMS and the updating of courses with relevant materials as per the course curriculum are subject to instructors' e-learning factor.

Hence, the study hypothesizes that:

H13: Instructors‟ e-learning factor has direct effect on e-learning content quality

H14: Instructors‟ e-learning factor has direct effect on User satisfaction

H15: Instructors‟ e-learning factor has direct effect on Systems usage

4.3.6. User Satisfaction

The user satisfaction construct is measured in terms of the user's satisfaction with the system's performance (Petter & McLean, 2009; DeLone & McLean, 2003). The effect of the IS product on the user determines the ultimate outcome of e-learning systems success. The summary of the empirical findings shows that instructors and students will frequently use e-learning systems as long as they are satisfied with the use of the system in the teaching and learning process, which ultimately determines the e-learning systems outcome. Furthermore, the study finds that the more students and instructors are satisfied with the overall e-learning process, the more they will use the system. Hence, the study hypothesizes that:

H16: User satisfaction has direct effect on systems usage


H17: User satisfaction has direct effect on e-learning systems outcome

4.3.7. Systems Usage

As described by DeLone and McLean (2003) and Petter & McLean (2009), this construct is the consumption of the products or services of an IS, or the frequency of its use, ranging from a visit to a Web site, to navigation within the site, to information retrieval, to the execution of a transaction. In our study, this construct is conceptualized as the dependability and frequency of use of an e-learning system, from visiting the LMS to navigating across all the features of the site to fully executing the teaching and learning process on the LMS (Hassanzadeh et al., 2012; Lee-Post, 2009; Holsapple and Lee-Post, 2006; empirical data). This study shows that e-learning systems usage affects e-learning systems outcome, because continual systems usage implies system benefits (the systems outcome) expressed in terms of time savings, cost savings and enhanced performance, depending on the type of user. Hence, the study hypothesizes that:

H18: Systems usage has direct effect on e-learning systems outcome

4.3.8. E-learning Systems Outcome

This construct or higher level category captures the overall benefit users gain while using the system (DeLone and McLean, 2003; the study's qualitative empirical data). This study finds that the overall success of an e-learning system is ultimately measured in terms of the benefits it provides to students and instructors. Instructors expect the system to enhance their teaching performance through its various useful features. The same is true for students; they expect the system to enhance their course understanding in a way that saves their time and improves their academic performance. The study findings show that this construct is the ultimate system outcome, which depends on students' continual systems usage. Similarly, user satisfaction determines the degree to which the benefit of using the system is realized. If users are not satisfied while using the system, the benefit of the system (i.e. the e-learning systems outcome) will not be meaningfully realized (DeLone and McLean, 2003; the empirical findings).

4.4. The Study’s Proposed Model

This study proposed an e-learning systems success model based on the qualitative empirical findings, taking the case of Ethiopian higher education institutions (see Fig. 50). The model building process followed Eisenhardt's (1989) theory development technique. Although it is not common to start theory building from extant literature, such a technique guides the data collection process, which further enhances the researcher's sensitivity in understanding the meaning of bits of contextual data (Eisenhardt, 1989). Accordingly, the data collection process, which was guided by prior constructs (i.e. from the updated ISS model of DeLone and McLean (2003)), resulted in generating new constructs. To analyze the qualitative data, qualitative analysis software (NVivo Pro version 11) was used. Consequently, the study identified 8 higher-level categories and 31 lower-level categories. Three new categories (Institutional Support Quality, Instructors' E-learning Factor, and Learners' E-learning Factor) and five existing categories from the ISS model (E-learning Systems Quality, E-learning Content Quality, Systems Usage, User Satisfaction and E-learning Systems Outcome) emerged from the data. Furthermore, 18 hypotheses were formulated based on the associations among the categories in the qualitative empirical findings. To this end, the model was further validated by use of Structural Equation Modeling, as presented in chapter five.

[Figure: diagram of the proposed model linking the eight constructs through hypotheses H1–H18 as formulated in Section 4.3.]

Fig. 50 Proposed E-learning Systems Success Model

4.5. Chapter Summary

This chapter presented an e-learning systems success model for the context of Ethiopian Higher Education Institutions. A qualitative case study research method was used to identify the factors determining e-learning systems success and to develop the model. Several focus group discussions with instructors, students and e-learning experts; in-depth personal interviews with ICT directors of nine Ethiopian higher education institutions adopting e-learning systems; document reviews; and the researcher's observations were used as data collection methods in order to triangulate the study findings. Open-ended questions were prepared for the focus group discussion participants and for the in-depth personal interviews conducted with ICT directors of the institutions under study. The data collection, coding and analysis procedures were performed until no new themes emerged and additional coding resulted only in repetition of codes. To this end, thirty-one lower-level and eight higher-level categories were finally identified from within and across cases. The relationships among the categories were determined based on the findings of the empirical data, and eighteen hypotheses were formulated to represent the relationships among the higher-level categories. Finally, the model was proposed for quantitative validation, discussed in the next chapter.

Chapter Five
Model Testing

5.1 Introduction

This chapter reports the results of the quantitative analysis of students' and instructors' sample data used to test the proposed model. The first section of the chapter presents the results of testing the model on the students' sample data and the second section presents the results of testing it on the instructors' sample data. SPSS (version 20) was used for the descriptive statistics and SMART-PLS (version 3.0) was used to test the model at two levels: the measurement model and the structural model. The measurement model tests the measures of the model's constructs, while the structural model tests the relationships between the model's constructs as represented by the eighteen hypotheses formulated in chapter four.

5.2 Testing the Proposed Model over Students’ Sample

5.2.1 Descriptive Statistics of Students’ Sample

Prior to data analysis, this section reports the descriptive statistics of the students' sample data in order to summarize the data set and check whether it meets the statistical requirements before proceeding to the data analysis procedures. Straub et al. (2004) pointed out that before proceeding to data analysis, the data should be checked for missing data, outliers, normality, and doubtful response patterns. Accordingly, this section presents the techniques used to assure the quality of the data set before proceeding to the data analysis section.

5.2.1.1 Data Outliers and Doubtful Response Patterns

In the study's questionnaire, which employed a five-point Likert scale, two additional options, 'Not Applicable' and 'Don't know', were added in order to minimize data outliers and suspicious response patterns. These two options were used to capture respondents' opinions in cases where the five-point Likert scale items could not address their specific responses. Such measures minimize unreliable responses, common method bias and straight-lining (Krosnick et al., 2002). This was also done in line with feedback received from participants involved in the pilot testing of the questionnaire. While analyzing the data, these two responses were treated as missing values (Holman & Glas, 2005). As a way to minimize data outliers and doubtful response patterns, this methodological approach has also been used in other similar IS studies, such as Alsabawy (2013).

5.2.1.2 Missing Values

Missing values occur when a respondent intentionally or unintentionally does not respond to an item of the questionnaire. According to Hair et al. (2013), the proportion of missing values in relation to the observed data on a given questionnaire may affect bootstrapping. Accordingly, they proposed an acceptable threshold of less than 15% missing values per questionnaire; otherwise, the questionnaire must be removed from the data analysis procedure. Only four missing values were observed in this study, belonging to items LEF6, IEF2, IEF4 and IEF5 in four different students' questionnaires. Therefore, no questionnaire was removed from the data analysis procedure.

According to Hair et al. (2013), there are two ways of handling missing values while employing SEM-PLS: mean replacement and case-wise deletion. Ringle et al. (2012) recommend mean replacement when missing values amount to less than 5% per indicator. The researcher therefore used the mean replacement method.

5.2.1.3 Data Distribution

Smart PLS does not require the data to be normally distributed (Hair et al., 2017), a distinguishing feature compared with covariance-based SEM tools. Since this study employed SEM-PLS, testing for normality of the data was not required.

5.2.1.4. Sample Size and Distribution

Adopting the theoretical sampling method (Yin, 2003; Eisenhardt, 1989), the researcher purposefully selected 96 students based on the selection criteria discussed in chapter three. Table 23 presents the students' sample size.

Table 23: Students‘ Sample Size

Sample University adopting e-learning (as a supportive teaching and learning tool)    Students’ sample
Arbaminch University 29

PESC Information Systems College 24

Ethiopian Civil Service University 23

Haramaya University 20

Total no. of samples 96

5.2.1.5 Respondents’ Demography


This sub-section presents the respondents’ demographics in terms of gender and age distribution:

1. Gender
About 81% of the student respondents are male and 19% are female (see Table 24).

Table 24: Gender Distribution

(Source: SPSS version 20 output, 2018)

2. Age
The age distribution of the student respondents shows that about 73%, 24% and 3% are in the age ranges of 18-28, 28-38 and 38-48, respectively. It can be understood from Table 25 that the majority of the students lie in the 18-28 age bracket.

Table 25: Age Distribution of Students‘ Respondents

(Source: SPSS version 20 output, 2018)

5.2.2 Testing the Model and its Hypotheses

This section presents the findings of the model testing, covering both the measurement model and the structural model. The threshold values discussed in chapter three were used to evaluate the test results of the measurement and structural models. The data analysis results are presented in the form of tables and figures to make the findings more understandable.

5.2.2.1 Validating the Measurement Model

The proposed model comprises eight constructs. The reflective measurement model was evaluated based on unidimensionality, convergent validity, discriminant validity and internal consistency reliability. Smart-PLS was employed to eliminate items with outer loadings below the recommended threshold values. Table 26 presents the overall reliability and validity test results for the model's constructs.

Table 26: Reliability and Validity Test Results

Reliability and Validity tests

The Study Model Constructs    Cronbach’s Alpha    Composite Reliability    Average Variance Extracted (AVE)
E-learning Content Quality 0.854 0.896 0.633
E-learning Systems Outcome 0.883 0.928 0.810
E-learning Systems Quality 0.858 0.898 0.637
Institutional Support Quality 0.911 0.931 0.694
Instructors‘ e-learning factor 0.912 0.934 0.739
Learners‘ e-learning factor 0.849 0.889 0.575
Systems Usage 0.842 0.905 0.761
User satisfaction 0.866 0.937 0.882

(Source: Smart PLS output, 2018)
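For reference, the Composite Reliability and AVE columns in Table 26 follow the standard formulas for a reflective construct with k indicators, standardized outer loadings \lambda_i, and indicator error variances \varepsilon_i = 1 - \lambda_i^2 (Fornell & Larcker, 1981):

\mathrm{CR} = \frac{\left(\sum_{i=1}^{k}\lambda_i\right)^{2}}{\left(\sum_{i=1}^{k}\lambda_i\right)^{2} + \sum_{i=1}^{k}\varepsilon_i}, \qquad \mathrm{AVE} = \frac{1}{k}\sum_{i=1}^{k}\lambda_i^{2}

Every construct in the table exceeds the commonly used thresholds of 0.7 for Cronbach's Alpha and Composite Reliability and 0.5 for AVE.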

Finally, discriminant validity tests were performed to assess the measurement model. The Fornell-Larcker criterion was used to test whether the model's constructs meet the discriminant validity criteria. As can be seen in Table 27, the square roots of the AVE values (the diagonal elements) are higher than the correlation values in the off-diagonal elements of the corresponding rows and columns. Accordingly, the model meets this criterion.
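As an illustration of the check just described, a minimal Python (numpy) sketch, with hypothetical variable names, compares each construct's square-rooted AVE against its largest correlation with any other construct:

import numpy as np

def fornell_larcker_ok(ave: np.ndarray, corr: np.ndarray) -> bool:
    # sqrt(AVE) per construct must exceed that construct's correlations
    # with all other constructs (Fornell-Larcker criterion).
    sqrt_ave = np.sqrt(ave)
    off_diag = corr - np.diag(np.diag(corr))  # zero out self-correlations
    return bool(np.all(sqrt_ave > np.abs(off_diag).max(axis=1)))

For example, E-learning Content Quality has AVE = 0.633 in Table 26, so its square root is 0.795, which exceeds its largest correlation in Table 27 (0.721 with Institutional Support Quality).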

Table 27: Discriminant Validity Test

Construct correlation matrix (square roots of AVE on the diagonal; columns follow the same construct order as the rows)

E-learning Content Quality: 0.795
E-learning Systems Outcome: 0.716  0.900
E-learning Systems Quality: 0.717  0.785  0.798
Institutional Support Quality: 0.721  0.707  0.694  0.833
Instructors’ e-learning factor: 0.628  0.641  0.626  0.501  0.860
Learners’ e-learning factor: 0.542  0.534  0.521  0.440  0.596  0.758
Systems Usage: 0.675  0.770  0.715  0.759  0.443  0.445  0.872
User Satisfaction: 0.676  0.850  0.703  0.679  0.555  0.412  0.737  0.939

(Source: Smart PLS Output, 2018)

5.2.2.2 Validating the Structural Model


After validating the measurement model in terms of the reliability and validity of the model's constructs, the next step was to validate the constructs' relationships, that is, the study's hypotheses. The relationships among the constructs and the study's hypotheses were assessed using the bootstrapping PLS algorithm (see Table 28 for details).
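For orientation, the T-values in Table 28 follow the usual bootstrap construction: the sample is resampled with replacement B times, each path coefficient is re-estimated on every resample, and the original estimate is divided by the standard deviation of the bootstrap estimates:

t = \frac{\hat{\beta}}{\mathrm{se}_{\mathrm{boot}}(\hat{\beta})}, \qquad \mathrm{se}_{\mathrm{boot}}(\hat{\beta}) = \sqrt{\frac{1}{B-1}\sum_{b=1}^{B}\left(\hat{\beta}_b - \bar{\beta}\right)^{2}}

where \hat{\beta}_b is the estimate from bootstrap sample b and \bar{\beta} is the mean of the bootstrap estimates. As a quick consistency check, H1's entries give 0.276 / 0.146 ≈ 1.891, matching the reported T-value.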

Table 28: Structural Model Hypothesis Testing for Bootstrapping Direct Effect Results

Hypothesis | Direct Effect (Path) | Path Coefficient (β) | β Strength | Std Error | T-Value | P value (95% CIL) | Decision
H1 | Institutional Support Quality → User Satisfaction | 0.276 | Moderate | 0.146 | 1.891 | 0.030* | Supported
H2 | Institutional Support Quality → Systems Usage | 0.357 | Moderate | 0.097 | 3.685 | 0.000*** | Supported
H3 | Institutional Support Quality → Instructors’ e-learning factor | 0.501 | Strong | 0.095 | 5.294 | 0.000*** | Supported
H4 | Institutional Support Quality → E-learning Content Quality | 0.398 | Moderate | 0.129 | 3.084 | 0.001*** | Supported
H5 | Institutional Support Quality → E-learning Systems Quality | 0.694 | Strong | 0.052 | 13.368 | 0.000*** | Supported
H6 | Learners’ e-learning factor → User Satisfaction | 0.082 | Weak | 0.097 | 0.624 | 0.267 N.S | Not Supported
H7 | Learners’ e-learning factor → Systems Usage | -0.060 | Weak | 0.079 | 1.038 | 0.150 N.S | Not Supported
H8 | E-learning Systems Quality → E-learning Content Quality | 0.283 | Moderate | 0.159 | 1.775 | 0.031* | Supported
H9 | E-learning Systems Quality → User Satisfaction | 0.320 | Moderate | 0.118 | 2.720 | 0.003** | Supported
H10 | E-learning Systems Quality → Systems Usage | 0.238 | Moderate | 0.122 | 1.952 | 0.026* | Supported
H11 | E-learning Content Quality → User Satisfaction | 0.200 | Weak | 0.144 | 1.391 | 0.078 N.S | Not Supported
H12 | E-learning Content Quality → Systems Usage | 0.080 | Weak | 0.130 | 0.678 | 0.249 N.S | Not Supported
H13 | Instructors’ e-learning factor → E-learning Content Quality | 0.252 | Moderate | 0.113 | 2.240 | 0.013* | Supported
H14 | Instructors’ e-learning factor → User Satisfaction | 0.127 | Weak | 0.095 | 1.338 | 0.091 N.S | Not Supported
H15 | Instructors’ e-learning factor → Systems Usage | -0.171 | Weak | 0.093 | 1.845 | 0.033* | Supported
H16 | User Satisfaction → Systems Usage | 0.329 | Moderate | 0.082 | 4.029 | 0.000*** | Supported
H17 | User Satisfaction → E-learning Systems Outcome | 0.619 | Strong | 0.098 | 6.311 | 0.000*** | Supported
H18 | Systems Usage → E-learning Systems Outcome | 0.314 | Moderate | 0.105 | 2.976 | 0.001*** | Supported

* Significant at level 0.05; ** significant at level 0.01; *** significant at level 0.001; N.S: not significant
Path coefficient (β) strength: below 0.2 = Weak; between 0.2 and 0.5 = Moderate; above 0.5 = Strong
Decision: 13 out of 18 hypotheses were supported

The predictive power of the proposed model was assessed in terms of the Coefficient of Determination (R²). Table 29 presents a summary of the predictive power of the overall proposed model in terms of R² values.
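R² here is the usual proportion of an endogenous construct's variance explained by its predictors,

R^{2} = 1 - \frac{\sum_{i}\left(y_i - \hat{y}_i\right)^{2}}{\sum_{i}\left(y_i - \bar{y}\right)^{2}}

The Weak/Moderate/Substantial labels applied below are in line with the commonly cited PLS cut-offs of roughly 0.19, 0.33 and 0.67 (Chin, 1998).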

Table 29: Summary of Coefficient of Determination (R²) Values

Furthermore, Figure 51 presents the overall results of the assessment of the measurement and structural model tests.

Fig. 51 The Overall Proposed Model Tests over Students’ Samples (Source: Smart PLS Output, 2018)

5.2.2.3 Discussions of Structural Model Tests and Hypothesis Testing

This sub-section describes the structural model test results as presented in Table 28, Table 29 and
depicted in Figure 51 above.

1. Institutional Support Quality

After the structural model tests on the students' sample (see Table 28 and Figure 51), the direct effects of institutional support quality on user satisfaction, systems usage, instructors' e-learning factor, e-learning content quality, and e-learning systems quality were assessed; the results were (β 0.276, P 0.030), (β 0.357, P 0.000), (β 0.501, P 0.000), (β 0.398, P 0.001), and (β 0.694, P 0.000), respectively. This supports all five hypotheses: (H1) Institutional support quality has a direct effect on user satisfaction; (H2) Institutional support quality has a direct effect on systems usage; (H3) Institutional support quality has a direct effect on instructors' e-learning factor; (H4) Institutional support quality has a direct effect on e-learning content quality; and (H5) Institutional support quality has a direct effect on e-learning systems quality.

2. Learners’ E-Learning Factor


The direct effects of learners' e-learning factor on user satisfaction and systems usage were assessed and the results were (β 0.082, P 0.267) and (β -0.060, P 0.150), respectively (see Table 28 and Figure 51). Accordingly, the SMART PLS analysis results show that the following two hypotheses were rejected: (H6) Learners' e-learning factor has direct effect on user satisfaction and (H7) Learners' e-learning factor has direct effect on systems usage. However, the effect sizes (f²) for H6 and H7 were 0.0049 and 0.013, respectively, signifying that learners' e-learning factor still has a small direct effect on both user satisfaction and systems usage.

3. E-Learning Systems Quality

This construct was assessed in relation to its direct effects on e-learning content quality, user satisfaction and systems usage, and the Smart PLS analysis results were (β 0.283, P 0.031), (β 0.320, P 0.003) and (β 0.238, P 0.026), respectively (see Table 28 and Figure 51). Accordingly, the following three hypotheses were supported: (H8) E-learning systems quality has direct effect on e-learning content quality, (H9) E-learning systems quality has direct effect on user satisfaction, and (H10) E-learning systems quality has direct effect on systems usage.

Furthermore, the Coefficient of Determination (R²) showed that 48.2 percent of the variance in e-learning systems quality is explained by institutional support quality (see Table 29). Following the threshold values recommended in chapter three, this value of R² is regarded as Moderate.

4. E-Learning Content Quality

The direct effects of e-learning content quality on user satisfaction and systems usage were
assessed and the results were (β 0.200, P 0.078) and (β 0.080, P 0.249), respectively (see Table
28 and Figure 51). Accordingly, the results of the Smart PLS analysis show that the following
two hypotheses were rejected: (H11) „E-learning content quality has direct effect on User
satisfaction‟ and (H12) „E-learning content quality has direct effect on Systems usage‟.

Furthermore, the Smart PLS analysis result for the Coefficient of Determination (R²) showed that 64.8 percent of the variance in e-learning content quality is explained by institutional support quality, e-learning systems quality, and instructors' e-learning factor (see Table 29). Accordingly, this value of R² is regarded as Moderate, approaching Substantial.

5. Instructors’ E-Learning Factor

This construct was assessed in relation to its direct effects on e-learning content quality, user
satisfaction and systems usage and the Smart PLS analysis results were (β 0.252, P 0.013), (β
0.127, P 0.091), (β -0.171, P 0.033), respectively (see Table 28 and Figure 51). Accordingly, the
following two hypotheses were supported: (H13) ‗instructors‘ e-learning factor has direct effect
on e-learning content quality‘ and (H15) ‗instructors‘ e-learning factor has direct effect on
Systems usage‘; and the hypothesis (H14) „instructors‟ e-learning factor has direct effect on
User satisfaction‟ was rejected.

The Smart PLS analysis result for the Coefficient of Determination (R²) showed that 25.1 percent of the variance in instructors' e-learning factor is explained by institutional support quality (see Table 29 and Figure 51). Accordingly, this value of R² can be considered Weak.

6. User Satisfaction

User satisfaction's direct effects on systems usage and e-learning systems outcome were assessed, and the SMART PLS analysis results were (β 0.329, P 0.000) and (β 0.619, P 0.000), respectively (see Table 28 and Figure 51). This led us to support the two hypotheses: (H16) User satisfaction has direct effect on systems usage; and (H17) User satisfaction has direct effect on e-learning systems outcome.

The Smart PLS analysis result for the Coefficient of Determination (R²) showed that 59.3 percent of the variance in user satisfaction is explained by learners' e-learning factor, institutional support quality, e-learning systems quality, e-learning content quality, and instructors' e-learning factor (see Table 29). Accordingly, this value of R² is regarded as Moderate.

7. Systems Usage

The direct effect of systems usage on e-learning systems outcome was assessed and the Smart-PLS analysis result was (β 0.314, P 0.001). This result led us to support the hypothesis (H18) Systems usage has direct effect on e-learning systems outcome (see Table 28 and Figure 51).

Furthermore, the Smart PLS analysis result for the Coefficient of Determination (R²) showed that 70.3 percent of the variance in systems usage is explained by learners' e-learning factor, institutional support quality, e-learning systems quality, e-learning content quality, instructors' e-learning factor and user satisfaction (see Table 29). Accordingly, this value of R² can be considered Substantial.

8. E-Learning Systems Outcome

With respect to this endogenous construct, the Smart-PLS analysis result for the Coefficient of Determination (R²) showed that 76.8 percent of the variance in e-learning systems outcome is explained by user satisfaction and systems usage (see Table 29 and Figure 51). Accordingly, this value of R² can be considered Substantial.

5.2.2.4 Effect Size and Predictive Relevance

Predictive Relevance and Effect Size were further used to evaluate the inner model.

1. Predictive Relevance (Q²)

Based on the blindfolding procedure, Q² evaluates the predictive validity of a large and complex model in Smart PLS. In the Smart PLS blindfolding settings window, an omission distance (OD) between 5 and 10 is recommended by Hair et al. (2012). Using an omission distance of 7, the study obtained Q² values for three criterion (outcome) relationships, User satisfaction on Systems usage, User satisfaction on E-learning systems outcome, and Systems usage on E-learning systems outcome, of 0.05, 0.30 and 0.07, respectively. The analysis results showed sound predictive relevance of the model.
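The statistic computed by blindfolding is the Stone-Geisser Q². With an omission distance of 7, every 7th data point of an endogenous construct's indicators is omitted in turn and predicted from the remaining data:

Q^{2} = 1 - \frac{\sum_{D}\mathrm{SSE}_{D}}{\sum_{D}\mathrm{SSO}_{D}}

where SSE_D is the sum of squared prediction errors and SSO_D the sum of squared observed values in omission round D. Values above zero indicate predictive relevance, a condition all three reported values satisfy.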

2. Effect Size (f²)

The effect size indicator (f²) for the three criterion (outcome) relationships, User satisfaction on Systems usage, User satisfaction on E-learning systems outcome, and Systems usage on E-learning systems outcome, yielded values of 0.15, 0.76 and 0.19, indicating medium, large and medium effect sizes, respectively.
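These f² values follow the standard effect-size formula, which compares the endogenous construct's R² with the predictor included versus excluded:

f^{2} = \frac{R^{2}_{\mathrm{included}} - R^{2}_{\mathrm{excluded}}}{1 - R^{2}_{\mathrm{included}}}

The small/medium/large reading uses Cohen's (1988) conventional cut-offs of 0.02, 0.15 and 0.35.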

5.3 Testing the Proposed Model over Instructors’ Samples

5.3.1 Descriptive Statistics of Instructors’ Sample

The same statistical tools and techniques that were employed for the students‘ samples were used
to handle the descriptive statistics and model testing procedure for the instructors‘ samples.

5.3.1.1 Data Outliers and Doubtful Response Patterns

Similarly, the same procedures as for the students' questionnaires were used to handle data outliers and doubtful response patterns in the instructors' questionnaires while evaluating the quality of the data before proceeding to the data analysis procedure.

5.3.1.2 Missing Values

Only six missing values were reported, observed in items LEF3, LEF4, EQ3, IEF1, IEF5, and INSPQ5 across six different questionnaires. The missing values were handled in the same way as for the students' sample.

5.3.1.3 Data Distribution

Normality tests were likewise not performed for the instructors' sample, as normality is not a requirement for studies employing the Smart-PLS software (Hair et al., 2012).

5.3.1.4 Sample Size and Distribution

Adopting theoretical sampling (Yin, 2003) as discussed in chapter three, the researcher distributed questionnaires to 85 instructors in four purposefully selected universities. The selection of these instructors was based on the criteria set out in chapter three. Table 30 presents the instructors' sample size.

Table 30: Instructors‘ Sample Size
Sample University adopting e-learning (as a supportive teaching and learning tool)    Instructors’ sample
Arbaminch University 24
PESC Information Systems College 20
Ethiopian Civil Service University 21
Haramaya University 20
Total no. of samples 85

5.3.1.5 Respondents’ Demography


This sub-section presents the respondents’ demographics in terms of gender and age distribution.

1. Gender
About 71% of the instructor respondents are male and 29% are female (see Table 31).
Table 31: Gender Distribution

(Source: SPSS version 20 Output, 2018)

2. Age
The age distribution of the instructor respondents shows that about 41%, 39% and 20% are in the age ranges of 18-28, 28-38 and 38-48, respectively. It can be understood from Table 32 that the majority of the instructors lie in the 18-28 age bracket.

Table 32: Age Distribution of Instructors‘ Respondents

(Source: SPSS version 20 Output, 2018)

5.3.2 Testing the Model and its Hypotheses

This section presents the findings of the model testing, covering both the measurement model and the structural model. The measurement model validates the items of the instructors' questionnaire instrument, while the structural model validates the relationships among the model's constructs, i.e. its hypotheses. The threshold values discussed in chapter three were used to evaluate the measurement and structural models over the instructors' sample. The data analysis results are presented in the form of tables and figures.

5.3.2.1 Validating the Measurement Model

The reflective measurement model was evaluated based on unidimensionality, convergent validity, discriminant validity and internal consistency reliability. Smart PLS 3.0 was employed to eliminate, through a sequence of iterative processes, items with outer loadings below the threshold values.

Construct reliability of the measurement model was assessed using Cronbach's Alpha and Composite Reliability. Similarly, the convergent validity of the measurement model, measured in terms of Average Variance Extracted (AVE), falls within the recommended threshold values. Table 33 presents the overall reliability and validity test results for the study's model constructs.

Table 33: Reliability and Validity Test Results

Reliability and Validity tests

The Study Model Constructs    Cronbach’s Alpha    Composite Reliability    Average Variance Extracted (AVE)
E-learning Content Quality 0.887 0.917 0.688
E-learning Systems Outcome 0.927 0.954 0.873
E-learning Systems Quality 0.836 0.884 0.605
Institutional Support Quality 0.931 0.945 0.710
Instructors‘ e-learning factor 0.859 0.899 0.640
Learners‘ e-learning factor 0.791 0.856 0.606
Systems Usage 0.832 0.900 0.750
User satisfaction 0.881 0.927 0.808
(Source: Smart PLS Output, 2018)
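For reference, these statistics follow their standard definitions. For a construct with k standardized indicators with outer loadings \lambda_i, item variances \sigma_i^2 and summed-scale variance \sigma_t^2 (a textbook summary, not quoted from the thesis):

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^2}{\sigma_t^2}\right), \qquad
\rho_c = \frac{\left(\sum_{i=1}^{k}\lambda_i\right)^2}{\left(\sum_{i=1}^{k}\lambda_i\right)^2 + \sum_{i=1}^{k}\left(1 - \lambda_i^2\right)}, \qquad
\mathrm{AVE} = \frac{1}{k}\sum_{i=1}^{k}\lambda_i^2

Against the conventional cutoffs (\alpha and \rho_c of at least 0.70, AVE of at least 0.50), every construct in Table 33 passes; for example, User Satisfaction's AVE of 0.808 means its items share, on average, about 81 percent of their variance with the construct.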

Similarly, discriminant validity was tested using the Fornell-Larcker criterion to check whether
the model constructs meet discriminant validity. The square roots of the AVE values (the
diagonal entries) were higher than the correlations in the off-diagonal elements of the
corresponding rows and columns. Accordingly, the study's proposed model meets this criterion
(see Table 34).

Table 34: Discriminant Validity Test (Construct Correlation Matrix; diagonal values are square roots of AVE)

Key: ECQ = E-learning Content Quality; ESO = E-learning Systems Outcome; ESQ = E-learning Systems
Quality; ISQ = Institutional Support Quality; IEF = Instructors' e-learning factor; LEF = Learners'
e-learning factor; SU = Systems Usage; US = User Satisfaction

        ECQ     ESO     ESQ     ISQ     IEF     LEF     SU      US
ECQ     0.830
ESO     0.620   0.934
ESQ     0.623   0.687   0.778
ISQ     0.513   0.680   0.739   0.843
IEF     0.614   0.728   0.750   0.702   0.800
LEF     0.124   0.135   0.215   0.250   0.266   0.778
SU      0.439   0.709   0.684   0.805   0.683   0.129   0.866
US      0.618   0.842   0.713   0.714   0.718   0.124   0.740   0.899
(Source: Smart PLS Output, 2018)
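The Fornell-Larcker check itself is straightforward to compute. Below is a minimal sketch, assuming a pandas DataFrame of latent variable scores and a dict of AVE values (both hypothetical inputs; Smart PLS produces this matrix directly):

import numpy as np
import pandas as pd

def fornell_larcker_table(scores: pd.DataFrame, ave: dict) -> pd.DataFrame:
    """Square root of each construct's AVE on the diagonal,
    inter-construct correlations off the diagonal."""
    m = scores.corr().to_numpy().copy()
    for i, c in enumerate(scores.columns):
        m[i, i] = np.sqrt(ave[c])
    return pd.DataFrame(m, index=scores.columns, columns=scores.columns)

def meets_criterion(fl: pd.DataFrame) -> bool:
    """Discriminant validity holds when every diagonal sqrt(AVE) exceeds
    all correlations in its own row and column."""
    m = fl.to_numpy().copy()
    d = np.diag(m).copy()
    np.fill_diagonal(m, 0.0)  # keep only the off-diagonal correlations
    return bool((d > np.abs(m).max(axis=0)).all() and
                (d > np.abs(m).max(axis=1)).all())

Applied to Table 34, the criterion holds; Institutional Support Quality, for instance, has a diagonal value of 0.843, above its largest correlation (0.805 with Systems Usage).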

5.3.2.2 Validating the Structural Model
After validating the measurement model in terms of construct reliability and validity, the next
step was to validate the model's construct relationships, i.e., its hypotheses. Accordingly, the
structural model tests assessed the predictive capability, path coefficients, effect sizes and
predictive relevance of the exogenous and endogenous constructs of the overall proposed model.

The relationships among the constructs, and hence the study's hypotheses, were assessed using
the same technique applied to the students' samples (see Table 35 for details).

Table 35: Structural Model Hypothesis Testing for Bootstrapping Direct Effect Results

Hyp.  Direct Effect (Path)                                             β        Strength   Std Error   T-Value   P Value        Decision
H1    Institutional Support Quality → User Satisfaction                0.321    Moderate   0.157       2.043     0.042*         Supported
H2    Institutional Support Quality → Systems Usage                    0.517    Strong     0.117       4.432     0.000***       Supported
H3    Institutional Support Quality → Instructors' e-learning factor   0.702    Strong     0.061       11.542    0.000***       Supported
H4    Institutional Support Quality → E-learning Content Quality       0.008    Weak       0.207       0.041     0.484 (N.S)    Not Supported
H5    Institutional Support Quality → E-learning Systems Quality       0.739    Strong     0.058       12.724    0.000***       Supported
H6    Learners' e-learning factor → User Satisfaction                  -0.078   Weak       0.105       0.856     0.196 (N.S)    Not Supported
H7    Learners' e-learning factor → Systems Usage                      -0.090   Weak       0.071       1.102     0.135 (N.S)    Not Supported
H8    E-learning Systems Quality → E-learning Content Quality          0.368    Moderate   0.177       2.080     0.019*         Supported
H9    E-learning Systems Quality → User Satisfaction                   0.175    Weak       0.160       1.094     0.137 (N.S)    Not Supported
H10   E-learning Systems Quality → Systems Usage                       0.078    Weak       0.122       0.640     0.261 (N.S)    Not Supported
H11   E-learning Content Quality → User Satisfaction                   0.191    Weak       0.081       2.362     0.009**        Supported
H12   E-learning Content Quality → Systems Usage                       -0.150   Weak       0.087       1.732     0.042*         Supported
H13   Instructors' e-learning factor → E-learning Content Quality      0.332    Moderate   0.141       2.345     0.010**        Supported
H14   Instructors' e-learning factor → User Satisfaction               0.268    Moderate   0.136       1.967     0.025*         Supported
H15   Instructors' e-learning factor → Systems Usage                   0.155    Weak       0.133       1.159     0.123 (N.S)    Not Supported
H16   User Satisfaction → Systems Usage                                0.306    Moderate   0.107       2.853     0.002**        Supported
H17   User Satisfaction → E-learning Systems Outcome                   0.700    Strong     0.096       7.276     0.000***       Supported
H18   Systems Usage → E-learning Systems Outcome                       0.191    Weak       0.121       1.586     0.057 (N.S)    Not Supported

o *Significant at the 0.05 level; **significant at the 0.01 level; ***significant at the 0.001 level; (N.S) not significant (P values at 95% CI)
o Path coefficient (β) strength: below 0.2 is weak; between 0.2 and 0.5 is moderate; above 0.5 is strong
o Decision: 11 out of 18 hypotheses were supported
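The bootstrapping that produced the standard errors and t-values in Table 35 can be sketched as follows; estimate_path stands in for a full PLS re-estimation of one path on a resampled data set (a hypothetical helper, since Smart PLS performs this internally):

import numpy as np

def bootstrap_path(data: np.ndarray, estimate_path, n_boot: int = 5000,
                   seed: int = 42):
    """Resample cases with replacement, re-estimate the path coefficient
    each time, and derive a t-value from the bootstrap standard error."""
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    beta_hat = estimate_path(data)  # original-sample estimate
    boots = np.array([estimate_path(data[rng.integers(0, n, size=n)])
                      for _ in range(n_boot)])
    se = boots.std(ddof=1)          # bootstrap standard error
    return beta_hat, se, beta_hat / se  # beta, std error, t-value

The resulting t-value is compared against the critical value for the chosen significance level (e.g., about 1.96 for a two-tailed test at 0.05); H3's t-value of 11.542, for instance, is significant at any conventional level.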

Finally, the predictive power of the proposed model was assessed in terms of the Coefficient of
Determination (R2). Table 36 summarizes the overall proposed model's R2 values, which are
stated individually in the discussion that follows.

Table 36: Summary of Coefficient of Determination (R2) Values

Endogenous Construct              R2       Predictive Power
E-learning Systems Quality        0.547    Moderate
E-learning Content Quality        0.437    Moderate
Instructors' e-learning factor    0.492    Moderate
User Satisfaction                 0.654    Moderate
Systems Usage                     0.726    Substantial
E-learning Systems Outcome        0.725    Substantial
(Source: Smart PLS Output, 2018)
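For reference, R2 is the usual coefficient of determination,

R^2 = 1 - \frac{\sum_j (y_j - \hat{y}_j)^2}{\sum_j (y_j - \bar{y})^2},

where \hat{y}_j are the construct scores predicted by the exogenous constructs. The Moderate/Substantial labels above are consistent with the thresholds commonly cited for PLS-SEM (roughly 0.19 weak, 0.33 moderate and 0.67 substantial, per Chin, 1998): Systems Usage, at R2 = 0.726, exceeds 0.67 and is therefore classed as Substantial.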

Furthermore, Figure 52 presents the overall results of the assessment of measurement and
structural model tests of the proposed model.

Fig. 52: The Overall Proposed Model Test over the Instructors' Sample (Source: Smart PLS Output, 2018)

5.3.2.3 Discussions of Structural Model Tests and Hypothesis Testing

This sub-section describes the structural model test results presented in Tables 35 and 36 and
depicted in Figure 52 above.

1. Institutional Support Quality

After undergoing structural model tests for this construct under the instructors‘ samples (see
Table 35 and Figure 52), the direct effects of Institutional support quality on User satisfaction,
Systems usage, Instructors‘ e-learning factor, and E-learning systems quality were assessed and
the results were (β 0.321, P 0.042), (β 0.517, P 0.000), (β 0.702, P 0.000), and (β 0.739, P 0.000),
respectively. This leads to confirming the hypotheses: (H1) ‗Institutional support quality has a
direct effect on user satisfaction‟, (H2) ‗Institutional support quality has a direct effect on
systems usage‟, H3 „Institutional support quality has a direct effect on instructors‟ e-learning
factor‟ and (H5) „Institutional support quality has a direct effect on e-learning systems quality‟.
However, the Smart PLS analysis result for the direct effect of institutional support quality on
e-learning content quality was (β 0.008, P 0.484). This leads to rejecting the hypothesis: (H4)
'Institutional support quality has a direct effect on e-learning content quality'.

2. Learners’ E-learning Factor


The direct effects of Learners' e-learning factor on user satisfaction and systems usage were
assessed and the results were (β -0.078, P 0.196) and (β -0.090, P 0.135), respectively (see Table
35 and Figure 52). Accordingly, the results of the Smart PLS analysis show that the following
two hypotheses were rejected: (H6) 'Learners' e-learning factor has direct effect on User
satisfaction' and (H7) 'Learners' e-learning factor has direct effect on Systems usage'. However,
the effect sizes (f2) for H6 and H7 were 0.008 and 0.234, respectively, signifying that learners'
e-learning factor still has a small effect size on user satisfaction and a medium effect size on
systems usage.

3. E-learning Systems Quality


This construct was assessed in relation to its direct effects on e-learning content quality, user
satisfaction and systems usage. The Smart PLS analysis results were (β 0.368, P 0.019), (β
0.175, P 0.137) and (β 0.078, P 0.261), respectively (see Table 35 and Figure 52). The
hypothesis (H8) 'E-learning systems quality has direct effect on e-learning content quality' was
supported. However, the following two hypotheses were rejected: (H9) 'E-learning systems
quality has direct effect on user satisfaction' and (H10) 'E-learning systems quality has direct
effect on systems usage'.

Furthermore, the Coefficient of Determination (R2) showed that 54.7 percent of the variance in e-
learning systems quality is explained by institutional support quality (see Table 36). As discussed
in chapter three with regard to the threshold values for R2, this value of R2 is regarded as
Moderate.

4. E-learning Content Quality


The direct effects of e-learning content quality on user satisfaction and systems usage were
assessed and the results were (β 0.191, P 0.009) and (β -0.150, P 0.042), respectively (see Table
35 and Figure 52). Accordingly, the results of the Smart PLS analysis show that the following
two hypotheses were supported: (H11) „E-learning content quality has direct effect on User
satisfaction‟ and (H12) „E-learning content quality has direct effect on Systems usage‟.

Similarly, Smart PLS analysis result for the Coefficient of Determination (R2) showed that 43.7
percent of the variance in e-learning content quality is explained by institutional support quality,
e-learning systems quality, and instructors‘ e-learning factor (see Table 36). Accordingly, this
value of R2 is regarded as Moderate.

5. Instructors’ E-learning Factor


This construct was assessed in relation to its direct effects on e-learning content quality, user
satisfaction and systems usage and the Smart PLS analysis results showed (β 0.332, P 0.010), (β
0.268, P 0.025) and (β 0.155, P 0.123), respectively (see Table 35 and Figure 52). Accordingly,
the following two hypotheses were supported: (H13) „instructors‟ e-learning factor has direct
effect on e-learning content quality‟ and (H14) „instructors‟ e-learning factor has direct effect on
User satisfaction‟. However, the direct effect of instructors‘ e-learning factor on systems usage
(H15) was non-significant. Therefore, this result leads us to reject the hypothesis: (H15)
„instructors‟ e-learning factor has direct effect on Systems usage‟.

Smart PLS analysis result with regard to the Coefficient of Determination (R2) showed that 49.2
percent of the variance in instructors' e-learning factor is explained by institutional support
quality (see Table 36 and Figure 52). Accordingly, this value of R2 is regarded as Moderate.

6. User Satisfaction
The direct effects of User satisfaction on Systems usage and on E-learning systems outcome
were assessed and the Smart PLS analysis results show (β 0.306, P 0.002) and (β 0.700, P 0.000),
respectively (see Table 35 and Figure 52). Accordingly, the following two hypotheses were
supported: (H16) ‗User satisfaction has direct effect on systems usage‟ and (H17) „User
satisfaction has direct effect on e-learning systems outcome‟.
Smart PLS analysis result for the value of the Coefficient of Determination (R2) showed that 65.4
percent of the variance in user satisfaction can be explained by learners' e-learning factor,
institutional support quality, e-learning systems quality, e-learning content quality, and
instructors' e-learning factor (see Table 36). Accordingly, this value of R2 is regarded as
Moderate (close to Substantial, at R2 = 0.654).

7. Systems Usage

The direct effect of systems usage on e-learning systems outcome was assessed, and the Smart
PLS analysis result showed (β 0.191, P 0.057). Being non-significant, this result leads to
rejecting the hypothesis: (H18) 'Systems usage has direct effect on e-learning systems outcome'
(see Table 35 and Figure 52).

Furthermore, the Smart PLS analysis result for the Coefficient of Determination (R2) showed that
72.6 percent of the variance in systems usage can be explained by learners‘ e-learning factor,
institutional support quality, e-learning systems quality, e-learning content quality, instructors‘ e-
learning factor and user satisfaction (see Table 36). Accordingly, this value of R2 is regarded as
Substantial.

8. E-learning Systems Outcome

The value of the Coefficient of Determination (R2) for this endogenous (outcome) construct of
the study model showed that 72.5 percent of the variance in e-learning systems outcome is
explained by user satisfaction and systems usage (see Table 36). Accordingly, this value of R2 is
considered Substantial.

5.3.2.4 Effect Size and Predictive Relevance

Predictive relevance and effect size are also used to evaluate the structural (inner) model.

1. Predictive Relevance (Q2)

Based on the blindfolding procedure, the study obtained Q2 values for three criterion (outcome)
relationships, namely user satisfaction on systems usage, user satisfaction on e-learning systems
outcome, and systems usage on e-learning systems outcome, of 0.04, 0.45 and 0.032,
respectively. Based on the recommended threshold values discussed in chapter three, the
analysis results showed sound predictive relevance of the proposed model.
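For reference, the Stone-Geisser Q2 statistic obtained from blindfolding is conventionally defined as (a standard definition, not quoted from the thesis)

Q^2 = 1 - \frac{\sum_D \mathrm{SSE}_D}{\sum_D \mathrm{SSO}_D},

where, for each omission block D of the blindfolding procedure, SSE_D is the sum of squared errors of the reconstructed (omitted) data points and SSO_D is the sum of squares of the observed values. Q2 values above zero indicate predictive relevance, which all three values reported here satisfy.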

2. Effect Size (f2)

The effect size values for the same three criterion (outcome) relationships, user satisfaction on
systems usage, user satisfaction on e-learning systems outcome, and systems usage on
e-learning systems outcome, are 0.03, 0.22 and 0.02, which are considered small, medium and
small effect sizes, respectively.
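These classifications follow Cohen's conventional f2 bands (0.02 small, 0.15 medium, 0.35 large). The effect size compares the model's explanatory power with and without the predictor in question:

f^2 = \frac{R^2_{\text{included}} - R^2_{\text{excluded}}}{1 - R^2_{\text{included}}}

For example, the f2 of 0.22 for user satisfaction on e-learning systems outcome falls between 0.15 and 0.35 and is therefore a medium effect.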

5.4 Chapter Summary

This chapter presented the test results of the model proposed in chapter four. Structural
Equation Modeling (SEM-PLS) was used to test the model. Accordingly, the proposed model
was tested through measurement and structural model tests on two levels: students and
instructors. The measurement model test results in both data sets (students and instructors) show
that all the model constructs are valid and reliable for measuring e-learning systems success in
the context of Ethiopian higher education institutions. The tests were not limited to the
measurement model: the structural model test results showed that, out of eighteen hypotheses
(relationships), thirteen and eleven hypotheses were supported under the students' and
instructors' samples, respectively. The relationships among the model constructs were also
tested in terms of path coefficients, predictive relevance, effect size and overall predictive
power. Accordingly, the analysis results showed sound predictive power of the proposed model.
The next chapter discusses the combined analysis results of chapters four and five.

Chapter Six
Discussion of Results

6.1 Introduction

This chapter presents discussions about the combined findings of the qualitative and quantitative
results of the study. The proposed model constructs and its relationships identified through the
qualitative phase were tested quantitatively employing Structural Equation Modeling (SEM-
PLS). Accordingly, this chapter discusses the findings in a way that addresses the research
questions and objectives of the study. Key findings are also discussed in relation to the extant
literature.

6.2 E-learning Systems Success Model

E-learning is gaining acceptance due to the wide range of benefits it provides to the teaching and
learning process of higher education institutions. Furthermore, due to continuous advancements
in digital technology, different learners have different learning-style preferences, such as
reading text, listening to audio, and speaking and communicating from anywhere and at any
time (Abdellatief et al., 2011). To this effect, there is great interest among educators, policy
makers, donors, academicians and other e-learning stakeholders in making e-learning initiatives
successful by committing the required resources; but still, failures exist (Ssekakubo et al., 2011).
E-learning systems success depends on a complex socio-technical network of actors, user types,
culture, technology and other contextual factors (Becker et al., 2013; Ozkan & Koseler, 2009;
Sedera et al., 2007). To address this issue, this study formulated the following main research
questions:

1. What are the factors that determine e-learning systems success in Ethiopian higher education
institutions context?

2. What kind of e-learning systems success model should be used by Ethiopian higher
education institutions in order to evaluate and sustain their e-learning systems success?

In line with these main research questions, this study identified determinant factors of e-learning
systems success and placed them in a model taking Ethiopian higher education institutions as
case studies. The model development process followed Eisenhardt's (1989) theory building
technique. In line with this theory development process, an exploratory multiple-case study was
employed to identify the factors that determine e-learning systems success, which were later
placed in the model. Open-ended in-depth personal interviews, focus group discussions, document
reviews, and observations were used as data collection methods to triangulate the study findings.
Theoretical sampling method was used to select nine Ethiopian higher education institutions
adopting e-learning systems. Similarly, respondents selected for the in-depth personal interviews
and focus group discussions were purposefully selected via the same sampling method.
Observation and relevant documents such as annual reports, quarterly reports, e-learning
policies, e-learning proposals and other related e-learning documents from the institutions under
the study were collected to triangulate the findings of the study. Data collection and analysis
procedures were performed iteratively until no new factors or concepts emerged and further data
collection and analysis procedures resulted in the repetition of themes. Accordingly, thirty-one
lower level and eight higher level categories were identified as determinant factors of e-learning
systems success. The identified factors were placed in a model based on the qualitative empirical
findings. Eighteen hypotheses were formulated to validate the relationships between the
constructs of the proposed model. Since Eisenhardt‘s theory development procedure lacks
profound validation techniques (Eisenhardt, 1989), the researcher developed survey items based
on the lower-level categories and their respective higher level categories of the proposed model
as identified from the qualitative empirical findings. Consequently, two different types of
questionnaires (one for students and one for instructors) were prepared and pilot tested before
final distribution. A total of one hundred eighty-one questionnaires (ninety-six for students and
eighty-five for instructors) were distributed to test the model via SEM-PLS.

The following sections discuss the model and its constructs, mapping them to the research
questions and objectives of the study.

6.2.1 Institutional Support Quality

Institutional support quality was identified as one of the determinant factors of e-learning
systems success. The qualitative data analysis result shows that this model‘s construct is
expressed in terms of Technical support, Quality assurance Mechanism, Motivation, IT training,
Intellectual property protection, E-learning policy and Availability of e-learning infrastructure.

Smart PLS test results for the measurement model of institutional support quality also show that
the construct items are valid and reliable measures of this construct. Other studies have likewise
confirmed this construct as a valid measure of e-learning systems success (Oye et al., 2011;
Ozkan & Koseler, 2009; Selim, 2007; Khan, 2005). DeLone and McLean (2003) termed this
construct 'Service quality'; it was also identified as one of this study's seed constructs and was
expressed in terms of the quality of the technical support services provided to IS users, more
specifically by the IT department or an outsourced IT company. According to the study findings,
the service quality construct of the ISS model was found to be too narrow to address the overall
institutional support quality issues. Without a structured e-learning policy, protection of
instructors' intellectual property, motivation of the instructors and staff involved in e-learning
implementation, encouragement of students to use e-learning, a quality assurance mechanism,
consistent computer training to help users effectively use the e-learning system, and available
e-learning infrastructure, the quality of the IT department's services by itself might not be
adequate to satisfy the support service needs of students and instructors. In fact, the lack of
inclusion of overall institutional support quality issues and the associated contextual factors has
been cited as one of the limitations of the ISS model (Dorbat, 2014). To this end, this study
introduced the new construct 'institutional support quality' as a determinant of e-learning
systems success.

The qualitative data analysis result further shows that institutional support quality has effects on
user satisfaction, systems usage, instructors‘ e-learning factor, e-learning content quality and e-
learning systems quality constructs.

Accordingly, the following five hypotheses were formulated to examine these five relationships:

H1: Institutional support quality has a direct effect on user satisfaction

H2: Institutional support quality has a direct effect on systems usage

H3: Institutional support quality has a direct effect on instructors‟ e-learning factor

H4: Institutional support quality has a direct effect on e-learning content quality

H5: Institutional support quality has a direct effect on e-learning systems quality

The structural model test results of the direct effects of institutional support quality on these five
constructs are discussed as follows:

The tests of these five hypotheses were supported in both the students' and the instructors'
samples, except for H4 under the instructors' samples. The qualitative data analysis result also
showed that instructors' motivation in terms of financial and non-financial rewards, computer
training received, prompt technical assistance from the e-learning technical staff, protection of
their intellectual property, and ready availability of e-learning infrastructure were among the
most cited institutional support quality factors that instructors associate with continual use of
the system and their satisfaction in using it. Other studies have confirmed these direct effect
relationships among the constructs (Oluniyi and Taiwo, 2016; Ozkan & Koseler, 2009; Wang &
Wang, 2009; Masrom et al., 2008; Samarawickrema & Stacy, 2007). Similarly, students
associate their level of satisfaction with the degree of institutional support quality, expressed in
terms of computer training received, prompt technical assistance when they encounter system
usability problems, ready availability of e-learning infrastructure, and having an institution that
works towards improving the e-learning system to satisfy their needs. Furthermore, the
institutions' commitment towards investing in and availing up-to-date LMS versions with
useful, interactive and responsive system features determines e-learning systems quality.
Similarly, the degree of institutional support quality determines the quality of the e-learning
content uploaded to the LMS for students (i.e., in terms of course quality and up-to-date,
relevant learning materials). Thus, any shortfall in these institutional support quality factors can
negatively affect students' and instructors' satisfaction and the value that could be gained from
the system.

However, the non-significant direct effect of institutional support quality on e-learning content
quality in the case of the instructors' samples might be explained (based on the qualitative
empirical findings) by the fact that some types of courses might be difficult to deliver via
e-learning systems, which might weaken this relationship regardless of the institutional support
quality provided to instructors. Furthermore, the low level of e-learning systems usage among
the instructors might also have contributed to respondents not clearly perceiving the relationship
between these constructs. This justification is also supported by Khan (2005).

6.2.2 Learners’ E-Learning Factor

The study findings show that learners‘ e-learning factor is one of the determinants of e-learning
systems success. The study‘s qualitative data analysis result showed that this construct is
expressed in terms of computer self-efficacy and attitude towards e-learning. Smart PLS test
results for the measurement model of the learners' e-learning factor construct also showed that
its items are valid and reliable measures. Other studies have likewise confirmed these items of
learners' e-learning factor as determinants of e-learning systems success (Bindhu & Manohar,
2015; Saade & Kira, 2007; Wolfinbarger et al., 2005; Holcomb et al., 2003). In contrast, the ISS
model did not consider this construct as one of the IS success constructs, and other studies
criticize the ISS model for not accounting for the user's factor as one of the critical IS success
measures (Eom, 2011). Accordingly, in contrast to the DeLone & McLean (2003) model and in
line with Bhuasiri et al. (2012), Chu & Chu (2010), Sun et al. (2008), Arbaugh (2002), Arbaugh
& Duray (2002), Hong (2002), and Piccoli et al. (2001), this study introduced the new construct
'learners' e-learning factor' as a determinant of e-learning systems success.

Furthermore, the qualitative data analysis result shows that Learners‘ e-learning factor has
effects on user satisfaction and systems usage constructs. Accordingly, the following two
hypotheses were formulated to examine these relationships:

H6: Learners‟ e-learning factor has direct effect on User satisfaction

H7: Learners‟ e-learning factor has direct effect on Systems usage

Employing Smart PLS, the structural model tests show that these two hypotheses were not
supported under either the students' or the instructors' samples. Based on the findings of the
qualitative phase of the study, the major reason for these results is that the e-learning system is
used only as a supportive tool in the teaching and learning process. Such voluntary use, in which
students and instructors are not required to use the system, may weaken this type of direct effect
relationship. To this end, respondents might not fully understand the pre-requisites or factors
(such as this construct) that account for continual e-learning systems usage and user satisfaction,
which might negatively affect their responses. Lwoga (2014) reported similar justifications at a
Tanzanian university where the e-learning system had just been introduced and was adopted
voluntarily.

6.2.3 E-Learning Systems Quality

E-learning systems quality was identified as one of the determinant factors of e-learning systems
success. The qualitative data analysis result shows that this construct is measured in terms of
systems useful features, systems availability, systems ease of use, systems interactivity, systems
response, user friendliness and security. Smart PLS test results for the measurement model of
the e-learning systems quality construct also show that the construct items are valid and reliable
for measuring this construct. DeLone and McLean (2003) and
other e-learning studies such as Wu & Zhang (2014), Islam (2011), Ozkan & Koseler (2009), and
Lin (2007) also reported similar findings.

Although the e-learning systems quality and e-learning content quality constructs emerged from
the qualitative phase of the study, they are similar to the updated ISS model constructs (i.e., the
systems quality and information quality constructs). However, DeLone and McLean (2003) did
not test the relationship between these two constructs, forwarding it as a potential area of future
research. To this end, this study adds to the current knowledge of IS in the EHEIs context by
testing the relationship between these two constructs on two levels.

The study‘s qualitative data analysis result shows that e-learning systems quality has effects on
e-learning content quality, user satisfaction, and systems usage constructs. Accordingly, the
following three hypotheses were formulated to examine these relationships:

H8: E-learning systems quality has direct effect on e-learning content quality

H9: E-learning systems quality has direct effect on user satisfaction

H10: E-learning systems quality has direct effect on systems usage

The structural model test results of the direct effects of e-learning systems quality on these three
constructs are discussed as follows:

While all three hypotheses were supported under the students' sample, only H8 was supported
under the instructors' sample. The qualitative data and Smart PLS analysis results showed that
the system's useful features determine course quality when the system has several plug-in
functions, such as video lectures, interactive features and other useful features, to accommodate
various teaching aids and enhance the quality of a course's e-contents. Similarly, system
responsiveness, security, interactivity, availability, ease of use and user friendliness determine
students' satisfaction and further use of the system. This is also supported by other similar
studies (Mohammadi, 2015; Park et al., 2011; Freeze et al., 2010; Gorla et al., 2010;
Floropoulos et al., 2010).

On the other hand, H9 and H10 were not supported under the instructors' samples. From the
qualitative findings, this is attributed to the fact that instructors of the institutions under study
are neither motivated nor required to use e-learning systems for teaching, and they use them
only to supplement the teaching and learning process, i.e., when they are too busy to handle the
regular face-to-face class modality. To this end, what accounts for e-learning systems quality (a
multi-factorial construct) might not be fully understood by the instructors due to their low level
of system use for teaching; as a result, their responses regarding its effect on user satisfaction
and systems usage might be negatively affected. These explanations for the non-significant
direct effects between these constructs are similar to those in other studies, such as Jagannathan
et al. (2018), Selim (2007) and Iivari (2005), conducted in e-learning environments with low
levels of systems usage.

Furthermore, the Coefficient of Determination (R2) showed that institutional support quality
explains the variance in e-learning systems quality with moderate predictive power at both
levels (students and instructors). This is also evidenced by the qualitative data analysis results:
the institutions under study do not give much attention to updating the LMS to its current
version, and there are problems related to slow system response and a low degree of system
functionality (i.e., the LMS's useful features are not available whenever students and instructors
want to use them, at any time and from any place). All these factors point to the role of
institutional support quality in causing significant variance in the e-learning systems quality
construct.

6.2.4 E-learning Content Quality

The study's qualitative empirical result shows that this construct is cited as one of the
determinants of e-learning systems success. It was measured in terms of course quality,
up-to-date content and relevant content. Smart PLS test results for the measurement model of
the e-learning content quality construct also show that the construct items are valid and reliable
measures of this construct. Yakubu et al. (2018), Bindhu & Manohar (2015), Bhuasiri et al.
(2012) and Ozkan & Koseler (2009) also confirmed these measures of e-learning content
quality.

DeLone and McLean (2003; 1992) also found up-to-date and relevant content to be important
measures of information quality. However, the researcher re-named the information quality
construct of the ISS model (also identified as a seed construct of this study) as e-learning
content quality in the e-learning systems context, based on the qualitative empirical findings.
This re-naming was also based on the repeated use of the term among the study's respondents,
as well as recommendations from the e-learning experts' focus group discussions and final
reviews of the transcribed notes (i.e., feedback received on the case study draft report). Apart
from these modifications, the study's findings regarding measures of this construct are similar to
the measures of the Information Quality construct of the updated ISS model.

The qualitative data analysis result further shows that e-learning content quality has impact on
user satisfaction and on systems usage constructs. Accordingly, the following two hypotheses
were formulated to examine these relationships:

H11: E-learning content quality has direct effect on User satisfaction

H12: E-learning content quality has direct effect on Systems usage

Smart PLS software package‘s structural model test results of the direct effects of e-learning
content quality on these two constructs are discussed as follows:

The test results of these two hypotheses (H11 and H12) deviated in two directions: confirmed by
the instructors' samples and rejected by the students' samples. According to the instructors'
samples, e-learning content quality directly impacts both the user satisfaction and systems usage
constructs. Other similar studies also confirmed these significant direct effect relationships
(Hassanzadeh et al., 2012; DeLone and McLean, 2003).

However, these hypotheses were not supported by the students' samples. The non-significant
direct effects of e-learning content quality on the user satisfaction and systems usage constructs
may be explained by the fact that, since the instructors of the institutions under study do not
usually update the quality of their e-learning contents and e-learning is used as an optional
educational delivery modality (i.e., not on a mandatory basis), students might not be in a
position to evaluate what constitutes e-learning content quality (a multi-factorial construct) or to
appreciate the quality of the e-learning content available on the LMS, which may affect their
responses. This finding is similar to that of Lwoga (2014), who attributed it to participants'
moderate experience of using the system and the low level of e-learning systems diffusion
across higher education institutions. Ozkan & Koseler (2009), Sun et al. (2008), Selim (2007),
Holsapple & Lee-Post (2006) and Khan (2005) also mentioned that one of the challenges of
e-learning is that students do not usually perceive its benefits when higher education institutions
are less committed to implementing the system.

Furthermore, the Coefficient of Determination (R2) showed that the combined effects of
institutional support quality, e-learning systems quality, and instructors' e-learning factor
explain the variance in e-learning content quality with moderate predictive power. This is also
evidenced by the qualitative findings, implying that the institutions under study should arrange
e-learning studios and other teaching facilities that would help instructors enhance e-learning
content quality, for example through videos, animations and other related teaching aids,
depending on the course nature. A shortfall in any of the institutional support quality factors
would negatively affect e-learning content quality. Similarly, the available system features of
the LMS determine the degree of e-learning content quality. For example, an LMS with plug-in
functions for videos, audio, animations and online exercises has significant predictive power for
the degree of e-learning content quality. Instructors' e-learning factor, such as attitude towards
e-learning and computer self-efficacy, has significant predictive power for enhancing course
quality and for regularly updating the e-learning content so that students can easily understand
the course.

6.2.5 Instructors’ E-learning Factor

Both the qualitative and the quantitative data analysis results show that this construct, measured
in terms of the instructor's computer self-efficacy, creation of an e-learning environment, timely
response and attitude towards e-learning, is a valid and reliable measure of e-learning systems
success. Several other studies also support this construct as one of the determinants of
e-learning systems success (Bhuasiri et al., 2012; Chen, 2010; Demirkan et al., 2010; Gunn,
2010; Ozkan & Koseler, 2009; Wang & Wang, 2009; Saleh, 2008; Lee & Lee, 2008; Liaw et al.,
2007; Wang et al., 2007). One of the critiques of the updated ISS model was that it did not
include the user's factor; the study findings extend the updated ISS model by introducing this
new construct, instructors' e-learning factor.

The qualitative data analysis result further shows that instructors‘ e-learning factor has impact on
e-learning content quality, user satisfaction and systems usage constructs. Accordingly, the
following three hypotheses were formulated to examine these three relationships:

H13: Instructors‟ e-learning factor has direct effect on e-learning content quality

H14: Instructors‟ e-learning factor has direct effect on User satisfaction

H15: Instructors‟ e-learning factor has direct effect on Systems usage

Smart PLS software package‘s structural model test results of the direct effects of instructors‘ e-
learning factor on these three constructs are discussed as follows:

The direct effect of instructors' e-learning factor on e-learning content quality (H13) was
supported by both the instructors' and the students' samples. The study's qualitative data
analysis results also showed that instructors' computer self-efficacy, creation of an e-learning
environment and attitude towards e-learning determine the ability and commitment of the
instructor to develop quality course materials, continually update the teaching materials and
upload relevant content. These justifications and test results are also consistent with other
similar studies (Lwoga, 2014; Eom, 2011; Ozkan & Koseler, 2009; Sun et al., 2008; Holsapple
& Lee-Post, 2006).

The test results of H14 deviated in two directions: supported by the instructors‘ samples and
rejected by the students‘ samples.

Regarding the instructors' samples, instructors feel that their level of computer self-efficacy,
timely responses to students, creation of an e-learning environment and attitude towards the
e-learning system determine their level of satisfaction while using the system for teaching.
Since instructors of the institutions under study use e-learning on a voluntary basis, the
instructors' e-learning factor significantly determines their level of satisfaction with the system,
as use of the system is self-initiated. This finding is also supported by Junkins (2011), Gunn
(2010) and Alexander (2001).

On the other hand, the non-significant direct effect of instructors' e-learning factor on user
satisfaction under the students' samples can be explained by the qualitative empirical findings:
students considered using e-learning for teaching to be one of the instructor's duties, when in
fact it is not. Instructors, in the context of this study, are neither motivated nor required to use
e-learning systems for teaching. This might have negatively influenced students' responses, in
contrast to the qualitative study findings. This finding is contrary to other studies, which
concluded that there is a significant direct effect between these constructs in a mandatory setting
(Junkins, 2011; Gunn, 2010; Sun et al., 2008).

The test results of H15 also deviated in two directions: supported by the students' samples and
rejected by the instructors' samples. Under the students' samples, this significant direct effect of
instructors' e-learning factor on students' systems usage is also supported by other studies
(Saade & Kira, 2009; Saleh, 2008; Sun et al., 2008; Liaw et al., 2007; Compeau & Higgins,
1995).

Under the instructors' samples, this hypothesis was rejected. The non-significant direct effect of
instructors' e-learning factor on systems usage might be explained by the fact that the instructors
are neither motivated nor required to use the system. Furthermore, the qualitative empirical
findings show that some courses are not appropriate to deliver through e-learning systems,
which might have influenced the instructors' negative responses (also due to the multi-factorial
features of the instructors' e-learning factor and systems usage constructs). To this end, the
value of the Coefficient of Determination (R2) also indicated that the systems usage construct is
significantly affected by the cumulative effects of the six exogenous constructs of the model
(i.e., not only by instructors' e-learning factor). These findings and justifications are also in line
with Selim (2007) and Khan (2005).

Overall, the Coefficient of Determination (R2) showed that institutional support quality explains
the variance in instructors' e-learning factor with weak predictive power under the students'
samples and moderate predictive power under the instructors' samples. From the qualitative
data analysis result, it can be understood that the predictive power of institutional support
quality on instructors' e-learning factor is determined by: instructors' motivation, in terms of
financial or non-financial incentives; an e-learning policy that directs instructors to use
e-learning depending on the course nature; computer and e-learning training to enhance
instructors' skills and computer self-efficacy; availability of e-learning infrastructure so that
e-learning can be used at any time and place; prompt technical assistance whenever instructors
face difficulties; and protection of instructors' intellectual property to guarantee the protection
of their teaching materials and encourage continual use of e-learning for teaching.

6.2.6 User Satisfaction

The study's qualitative and quantitative data analyses found that the user satisfaction construct
is a valid and reliable measure of e-learning systems success. The ISS model also regarded this
construct as one of the determinants of information systems success (DeLone and McLean,
2003). Its measures, enjoyable experience and overall satisfaction with the system, were also
found to be valid measures of the construct in the context of this study, in line with the updated
ISS model and other similar studies such as Lwoga (2014), Freeze et al. (2010) and Lin (2007).

The qualitative data analysis result further shows that user satisfaction has impact on systems
usage and e-learning systems outcome constructs. Accordingly, the following two hypotheses
were formulated to examine these relationships:

H16: User satisfaction has direct effect on systems usage

H17: User satisfaction has direct effect on e-learning systems outcome

Smart PLS software package‘s structural model test results of the direct effects of user
satisfaction on these two constructs are discussed as follows:

The results of testing these two hypotheses were supported on two levels: students‘ samples and
instructors‘ samples.

The test results of H16 indicated that user satisfaction, under both the instructors' and the
students' samples, enhances their continual use of the system for teaching and learning,
respectively. This is also supported by other similar studies (Yakubu et al., 2018; Mohammadi,
2015; Lwoga, 2014; Lin and Wang, 2012; Wang and Chiu, 2011; Lee, 2010; DeLone and
McLean, 2003). The study's qualitative data analysis findings also show that the more
instructors and students are satisfied with the overall function of the system, the more they
continue to use it.

Similarly, the test results of H17 show that students' and instructors' satisfaction with the
system determines their level of performance in the teaching and learning process. Other similar
studies also confirmed this significant direct effect relationship (Ramayah & Lee, 2012; Freeze
et al., 2010; Larsen et al., 2009; Shee & Wang, 2008; Lee & Lee, 2008). However, this result
was not supported in a study by Eom (2011), who explained the non-significant direct effect by
the mandatory nature of the e-learning system in his study context. To this end, our finding, in
comparison to these studies, further shows that the updated ISS model in general, and e-learning
systems success factors in particular, are context dependent.

The Coefficient of Determination (R2) showed that the combined effects of five constructs
(learners' e-learning factor, institutional support quality, e-learning systems quality, e-learning
content quality, and instructors' e-learning factor) explain the variance in user satisfaction with
substantial predictive power under the students' samples and moderate predictive power under
the instructors' samples. From the qualitative data analysis result, it can be understood that user
satisfaction is determined by: students' e-learning factor, expressed in terms of their attitude
towards computers and e-learning; consistent training in computer and e-learning technologies;
prompt technical assistance from technical staff when students face difficulties using the
e-learning system; the response time of the LMS; the availability of useful system features; ease
of use; the user-friendly nature of the LMS; the degree of security of the e-learning content;
course quality, as presented through various teaching aids such as videos, animations, pictures
and diagrams, with up-to-date and relevant content; and instructors' timely responses over the
e-learning portal and creation of an e-learning environment in which students can interact at any
time and place. Similarly, from the instructors' point of view, institutional support quality
factors, such as the types of motivation provided; the LMS's ease of use, user friendliness and
overall systems quality; and students' and instructors' computer self-efficacy and attitudes
towards e-learning all determine the degree of user satisfaction. These findings correspond to
the works of Yakubu et al. (2018), Lwoga (2014), Cheng (2012), and Chen (2010).

6.2.7 Systems Usage

According to the study's qualitative and quantitative data analyses, the systems usage construct
is a valid and reliable measure of e-learning systems success. The ISS model also regarded this
construct as one of the elements of information systems success (DeLone and McLean, 2003).
The Smart PLS measurement model results show that frequency of use and dependability are
valid and reliable measures of this construct. These measures are also confirmed in other similar
studies (Aparicio et al., 2016b; Mohammadi, 2015; Hassanzadeh et al., 2012; Lin and Wang,
2012; Freeze et al., 2010; Ozkan & Koseler, 2009; Wang & Liao, 2008; Wang et al., 2007).

The qualitative data analysis result further shows that systems usage has impact on e-learning
systems outcome. Accordingly, the following hypothesis was formulated to examine this direct
effect:

H18: Systems usage has direct effect on e-learning systems outcome

According to the Smart PLS structural model test, the direct effect of systems usage on the
e-learning systems outcome construct is discussed as follows:

The result of testing this hypothesis deviated in two directions: supported by the students'
samples and rejected by the instructors' samples. Regarding the students' samples, students use
e-learning systems insofar as use of the system brings about enhanced academic performance
and saves time and the various costs associated with accessing their courses. This is also
supported by Freeze et al. (2010), Holsapple & Lee-Post (2006) and DeLone & McLean (2003).

On the other hand, this hypothesis was not supported by the instructors' samples. The qualitative
empirical findings show that instructors feel there are other issues to be considered in enhancing
their teaching performance, such as the motivation provided to instructors and other related
institutional supports. Several other studies also indicate that instructors' teaching performance
is subject to multidimensional factors (Bhuasiri et al., 2012; Ozkan and Koseler, 2009; Wang et
al., 2007; Khan, 2005; Zhang & Nunamaker, 2003). Moreover, the systems use construct is the
pivot of the updated ISS model, signifying that it exhibits multi-factorial features (Petter et al.,
2008); such negative findings might therefore be influenced by the multidimensional nature of
the systems usage construct. Furthermore, the instructors have a low level of e-learning systems
use, and they may not have been able to judge the relevance of continued systems usage to the
e-learning systems outcome, which might affect their responses.

The Coefficient of Determination (R2) showed that the combined effects of six constructs
(learners' e-learning factor, institutional support quality, e-learning systems quality, e-learning
content quality, instructors' e-learning factor and user satisfaction) explain the variance in
systems usage with moderate predictive power under the students' samples and substantial
predictive power under the instructors' samples. From the qualitative data analysis result, it can
be understood that the systems usage construct is determined by: students' e-learning factor,
expressed in terms of computer self-efficacy and attitude towards e-learning; training provided
in computers and e-learning; continuous improvement in e-learning systems technologies;
prompt technical assistance from technical staff; the timely response of the LMS and the
availability of its useful system features; ease of use; the user-friendly nature of the LMS;
security of the e-learning content; course quality, as presented through various learning aids
such as videos, animations, pictures and diagrams with up-to-date and relevant content; and
overall student satisfaction with use of the system. Similarly, institutional support quality
factors, the ability of the e-learning system to enhance course quality, and the instructor's
overall satisfaction with using the system for teaching all determine the systems usage construct.
Moreover, the Smart PLS results for the predictive relevance (Q2) and effect size (f2) of user
satisfaction on systems usage further show sound predictive relevance and effect sizes under
both the instructors' and the students' samples.

6.2.8 E-learning Systems Outcome

Results of the study's qualitative and quantitative data analyses show that this construct, initially
identified as one of the seed constructs of the study as Net benefits, is a valid and reliable
measure of e-learning systems success. Its measures were identified as time saving and cost
saving for both students and instructors, enhanced academic performance for students, and
enhanced teaching performance for instructors. These measures are in line with the updated ISS
model and other similar e-learning studies (Larsen et al., 2009; McPherson and Nunes, 2008;
Wang et al., 2007; Roca et al., 2006). DeLone and McLean (2003) termed this construct 'Net
benefits'.

In the context of this study, the 'Net benefits' construct was re-named 'e-learning systems
outcome' because the study's respondents reported the term Net Benefits to be confusing. This
was also done in accordance with the e-learning experts' feedback received while validating the
draft case study report. Eom (2011) likewise conceptualized the Net benefits construct of the
updated DeLone and McLean model in the e-learning context as 'E-learning outcome', and
Freeze et al. (2010) as 'Systems Success'. In fact, DeLone and McLean also considered the term
'systems outcome' rather than 'net systems benefit', stating that net systems benefit captures the
positive and negative net effects of the systems outcome. Similarly, this study conceptualizes
and retains the positive and negative e-learning systems outcome (i.e., net effect) concept as
recommended by the updated ISS model (DeLone and McLean, 2003).

Furthermore, the Coefficient of Determination (R2) showed that the combined effects of user
satisfaction and systems usage explain the variance in e-learning systems outcome with
substantial predictive power under both the students' and the instructors' samples. From the
qualitative data analysis result, it can be understood that students and instructors would
continually use and depend on the system insofar as its use saves time and associated costs,
enhances teaching performance for instructors, and improves academic performance for
students. Furthermore, the overall enjoyable experience of and satisfaction with the system
determine the e-learning systems outcome; any shortfall in these independent factors would
negatively affect it. Moreover, the Smart PLS results for the predictive relevance (Q2) and effect
sizes (f2) of the systems usage and user satisfaction constructs on e-learning systems outcome
further show sound predictive relevance and effect sizes under both the instructors' and the
students' samples.

Table 37 summarizes the identified and validated measures of the study's model constructs
based on the students' and instructors' samples.

Table 37: Summary of the Model's Measures of the Constructs

Construct                        Measures (Instructors)                       Measures (Students)
Institutional Support Quality    Technical Support                            Technical Support
                                 Quality Assurance Mechanism                  Quality Assurance Mechanism
                                 Intellectual Property Protection             *NA
                                 Motivation                                   *NA
                                 Computer Training                            Computer Training
                                 E-learning Policy                            E-learning Policy
                                 Availability of E-learning Infrastructure    Availability of E-learning Infrastructure
E-Learning Systems Quality       Ease of Use                                  Ease of Use
                                 User Friendly                                User Friendly
                                 Systems Interactivity                        Systems Interactivity
                                 Systems Response                             Systems Response
                                 Systems Useful Features                      Systems Useful Features
                                 Systems Availability                         Systems Availability
                                 Secure                                       Secure
E-Learning Content Quality       Course Quality                               Course Quality
                                 Relevant Content                             Relevant Content
                                 Up-To-Date                                   Up-To-Date
Instructors' E-Learning Factor   Computer Self-Efficacy                       Computer Self-Efficacy
                                 Attitude Towards E-Learning                  Attitude Towards E-Learning
                                 Timely Response                              Timely Response
                                 Creating E-learning Environment              Creating E-learning Environment
Learners' E-Learning Factor      Computer Self-Efficacy                       Computer Self-Efficacy
                                 Attitude Towards E-Learning                  Attitude Towards E-Learning
Systems Usage                    Frequency of Use                             Frequency of Use
                                 Dependability                                Dependability
User Satisfaction                Enjoyable Experience                         Enjoyable Experience
                                 Overall User Satisfaction                    Overall User Satisfaction
E-Learning Systems Outcome       Time Savings                                 Time Savings
                                 Cost Saving                                  Cost Saving
                                 Teaching Performance                         *NA
                                 *NA                                          Academic Performance

*NA signifies that the measure is not applicable.

Table 38 provides an overall summary of the structural model tests based on the students' and
instructors' samples.

Table 38: Summary of the Model's Hypotheses/Outputs of the Path Model under Each Tested
Sample (Students' and Instructors' Samples)

Construct                        Code  Direct Effect                                                    Students        Instructors
Institutional Support Quality    H1    Institutional Support Quality → User Satisfaction                Supported       Supported
                                 H2    Institutional Support Quality → Systems Usage                    Supported       Supported
                                 H3    Institutional Support Quality → Instructors' e-learning factor   Supported       Supported
                                 H4    Institutional Support Quality → E-learning Content Quality       Supported       Not Supported
                                 H5    Institutional Support Quality → E-learning Systems Quality       Supported       Supported
Learners' e-learning factor      H6    Learners' e-learning factor → User Satisfaction                  Not Supported   Not Supported
                                 H7    Learners' e-learning factor → Systems Usage                      Not Supported   Not Supported
E-learning Systems Quality       H8    E-learning Systems Quality → E-learning Content Quality          Supported       Supported
                                 H9    E-learning Systems Quality → User Satisfaction                   Supported       Not Supported
                                 H10   E-learning Systems Quality → Systems Usage                       Supported       Not Supported
E-learning Content Quality       H11   E-learning Content Quality → User Satisfaction                   Not Supported   Supported
                                 H12   E-learning Content Quality → Systems Usage                       Not Supported   Supported
Instructors' e-learning factor   H13   Instructors' e-learning factor → E-learning Content Quality      Supported       Supported
                                 H14   Instructors' e-learning factor → User Satisfaction               Not Supported   Supported
                                 H15   Instructors' e-learning factor → Systems Usage                   Supported       Not Supported
User Satisfaction                H16   User Satisfaction → Systems Usage                                Supported       Supported
                                 H17   User Satisfaction → E-learning Systems Outcome                   Supported       Supported
Systems Usage                    H18   Systems Usage → E-learning Systems Outcome                       Supported       Not Supported

6.3 Chapter Summary
Chapter six presented a discussion of the results of chapters four and five. The lower-level and
higher-level categories (constructs) identified in chapter four were further tested by use of the
Structural Equation Modeling (SEM) technique, employing Smart PLS version 3. Measures of the
constructs were tested using the measurement models, and the measures of each construct were
found to be valid and reliable measures of e-learning systems success. The tests were not limited
to the measurement model: the relationships between the constructs of the proposed model were
further tested using structural model tests, guided by the hypotheses proposed in chapter four on
the basis of the qualitative empirical findings. The results of the hypotheses under both the
students' samples and the instructors' samples were discussed in relation to relevant literature on
information systems studies in general and e-learning systems studies in particular.

Chapter Seven
Conclusion & Implications

7.1 Introduction

This chapter presents a summary of key findings, the conclusions of the study, the study's
contributions to theory and practice, and implications for future research. The conclusions based
on the study's key findings are presented in relation to meeting its research objectives. Based on
the study's findings, its theoretical and practical contributions are presented. At the end of the
chapter, limitations and implications for future research are forwarded.

7.2 Summary of Key Findings

This study deals with developing an e-learning systems success model. Ethiopian higher education
institutions adopting e-learning systems were purposefully selected in a way that would achieve
the study‘s objectives. Accordingly, this study was conducted to achieve two major objectives:

The study's objective #1: To identify factors that determine e-learning systems success in the
context of Ethiopian higher education institutions.

This objective was achieved through two stages of research work. The first stage was a review of
relevant literature. Accordingly, six seed constructs (systems quality, information quality, service
quality, systems usage, user satisfaction, and net benefits) were taken from the updated ISS model
(DeLone & McLean, 2003). In the second stage, adopting Eisenhardt's (1989) technique of
building theory from case study research, these seed constructs served as 'scaffolds' for
categorizing relevant constructs. Accordingly, nine Ethiopian higher education institutions, drawn
from both public and private higher education institutions, were selected based on theoretical
sampling (Yin, 2003; Eisenhardt, 1989). To this end, a multiple case study was employed to
enhance the external validity of the study findings, and multiple data collection methods (i.e.
focus group discussions, in-depth personal interviews, observation and document reviews) were
used to triangulate the findings. Assisted by the NVivo Pro 11 qualitative data analysis tool, eight
higher-level categories (constructs) and 31 lower-level categories (sub-categories), grounded in
the extant literature, were identified from the qualitative phase of the study as factors that
determine e-learning systems success. Five of the eight higher-level categories of the model that
measure e-learning systems success relate to the updated ISS model: e-learning systems quality,
e-learning content quality, systems usage, user satisfaction and e-learning systems outcome.
However, most of the sub-categories of these five higher-level categories are quite different from
the corresponding construct measures of the updated ISS model (Table 39 presents the proposed
model constructs in comparison to the study's seed/prior constructs). On the other hand, three new
constructs (i.e. learners' e-learning factor, instructors' e-learning factor and institutional support
quality) emerged from this study. Although the construct "institutional support quality" exhibits
features similar to the updated ISS model's construct "service quality", it is quite different from
the service quality construct in terms of the breadth and measures of the construct (see Table 39
for details).

Only a few studies have addressed all the constructs of the updated ISS model in a single study
based on different IS stakeholders' perspectives (Yakubu & Dasuki, 2018; Urbach & Muller,
2012). To this end, this study can be regarded as one of the few studies that considered all the
constructs of the updated ISS model prior to developing its proposed model. Furthermore, the ISS
model has been criticized for overlooking users' perspectives (Dorobăţ, 2014; Eom, 2011; Wang
et al., 2007). Accordingly, the study's contribution was not limited to validating all the constructs
of the ISS model; it also extended the model by adding three new constructs incorporating
different IS stakeholders' perspectives. To validate the study's contributions to theory and
practice, different parts of the study were presented and published in IEEE conference
proceedings (see Appendix 9).

Table 39: Summary of The Study‘s Key Findings in Relation to the Seed Constructs of the Updated ISS Model

Proposed Construct | Proposed Measures | Seed Construct (Updated ISS Model) | Seed Measures (Updated ISS Model) | The Study's Key Findings*
Institutional Support Quality | Technical support; quality assurance mechanism; motivation; computer training; intellectual property protection; e-learning policy; availability of e-learning infrastructure | - | - | Institutional support quality emerged as a new construct not available in the updated ISS model, with measures: technical support; quality assurance mechanism; motivation; computer training; intellectual property protection; e-learning policy; availability of e-learning infrastructure.
E-Learning Systems Quality | Ease of use; user friendly; systems interactivity; systems response; systems useful features; systems availability; secure | Systems Quality | Ease of use; functionality; reliability; usability; flexibility; data quality; portability; adaptability | Confirmed this construct of the updated ISS model in the context of this study as well, with measures similar to those of the systems quality construct of the updated ISS model.
E-Learning Content Quality | Course quality; relevant content; up-to-date | Information Quality | Accuracy; timeliness; completeness; relevance; ease of understanding; personalization; consistency | Confirmed this construct of the updated ISS model with a modification to the naming of the construct (as mentioned in chapters 4 & 5), with measures similar to those of the information quality construct of the updated ISS model.
Instructors' E-Learning Factor | Computer self-efficacy; attitude towards e-learning; creating e-learning environment; timely response | - | - | Instructors' e-learning factor emerged as a new construct not available in the updated ISS model, with measures: computer self-efficacy; attitude towards e-learning; creating e-learning environment; timely response.
Learners' E-Learning Factor | Computer self-efficacy; attitude towards e-learning | - | - | Learners' e-learning factor emerged as a new construct not available in the updated ISS model, with measures: computer self-efficacy; attitude towards e-learning.
Systems Usage | Frequency of use; dependability | Systems Usage | Frequency of use and dependability | Confirmed the systems usage construct of the updated ISS model and the measures of that construct.
User Satisfaction | Enjoyable experience; overall satisfaction | User Satisfaction | Enjoyable experience and overall user satisfaction | Also confirmed this construct of the ISS model and the measures of the user satisfaction construct of the updated ISS model.
E-Learning Systems Outcome | Time savings; cost savings; enhanced academic and teaching performance | Net Benefits | Cost savings, time savings, improved work quality, improved performance | Confirmed this construct but renamed it e-learning systems outcome to fit the study's context (as discussed in chapters 4 & 5); confirmed the measures of this construct of the updated ISS model.

* The study's key findings: compares the proposed model constructs and their measures with the seed constructs of the updated ISS model.

The study’s objective #2: To place the identified factors that determine e-learning systems
success in a holistic model that may help Ethiopian higher
education institutions evaluate their e-learning systems success.

The factors identified under the first research objective as determining e-learning systems success
were placed in a model, and their relationships were established by formulating hypotheses based
on the qualitative data findings and grounded in the literature. Eighteen hypotheses were proposed
to signify the relationships between the constructs of the model. The model's constructs and the
relationships between them were tested employing Structural Equation Modeling (SEM) on two
levels: students and instructors.
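For illustration only (the analysis itself was carried out in the Smart PLS 3 graphical tool), the minimal sketch below shows how a comparable measurement-plus-structural specification could be expressed in code with the open-source semopy library, which implements covariance-based SEM rather than the PLS-SEM used in this study; the indicator and construct names are hypothetical short forms of the study's measures.

```python
# A minimal sketch, not the procedure used in this study: semopy estimates
# covariance-based SEM, whereas this study used PLS-SEM in Smart PLS 3.
# Indicator/construct names below are hypothetical short forms.
import pandas as pd
import semopy

MODEL_DESC = """
# Measurement model: latent construct =~ its observed indicators
USAGE   =~ freq_use + dependability
SAT     =~ enjoyable_exp + overall_sat
OUTCOME =~ time_saving + cost_saving + performance

# Structural model: hypothesized paths (cf. H16-H18)
USAGE ~ SAT
OUTCOME ~ USAGE + SAT
"""

def fit_structural_model(responses: pd.DataFrame) -> pd.DataFrame:
    """Fit the model on a DataFrame whose columns are the indicators."""
    model = semopy.Model(MODEL_DESC)
    model.fit(responses)     # maximum-likelihood estimation by default
    return model.inspect()   # loadings, path coefficients, p-values

# Example (hypothetical file):
# estimates = fit_structural_model(pd.read_csv("survey_responses.csv"))
```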

The items of the model's constructs were adapted to represent the students' and the instructors'
samples with the help of e-learning experts. The measurement models were tested using PLS-SEM.
Accordingly, all eight constructs (institutional support quality, learners' e-learning factor,
instructors' e-learning factor, e-learning systems quality, e-learning content quality, systems
usage, user satisfaction and e-learning systems outcome) were found to be valid and reliable
measures of e-learning systems success in the context of EHEIs.
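Validity and reliability judgments of this kind conventionally rest on the average variance extracted (AVE) and composite reliability (CR) of each construct (Fornell & Larcker, 1981). As a reference sketch, for a construct with \(n\) indicators whose standardized loadings are \(\lambda_i\):

\[
AVE = \frac{\sum_{i=1}^{n} \lambda_i^2}{n},
\qquad
CR = \frac{\left(\sum_{i=1}^{n} \lambda_i\right)^2}{\left(\sum_{i=1}^{n} \lambda_i\right)^2 + \sum_{i=1}^{n}\left(1 - \lambda_i^2\right)}
\]

with AVE ≥ 0.50 and CR ≥ 0.70 as the commonly applied cut-offs.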

Furthermore, the structural model tests examined the relationships among the constructs of the
proposed model as represented by the eighteen hypotheses. The following are key findings from
testing the structural model based on the students' samples:

1. Institutional support quality significantly and directly affected five constructs: e-learning
content quality, e-learning systems quality, systems usage, user satisfaction and
instructors' e-learning factor.
2. E-learning systems quality significantly and directly affected three constructs: e-learning
content quality, systems usage and user satisfaction.
3. Instructors' e-learning factor significantly and directly affected two constructs: e-learning
content quality and systems usage.
4. User satisfaction is a determinant of two constructs: systems usage and e-learning
systems outcome.
5. Systems usage determines one construct: e-learning systems outcome.

6. A substantial predictive power was observed on the endogenous construct systems usage
by the combined effect of six exogenous constructs: institutional support quality,
learners' e-learning factor, instructors' e-learning factor, e-learning systems quality,
user satisfaction and e-learning content quality.
7. A substantial predictive power was observed on the endogenous construct e-learning
systems outcome by the combined effect of two exogenous constructs: user satisfaction
and systems usage.
8. A moderate predictive power was observed on the endogenous construct user
satisfaction by the combined effect of five exogenous constructs: institutional support
quality, learners' e-learning factor, instructors' e-learning factor, e-learning systems
quality, and e-learning content quality.
9. A moderate predictive power was observed on the endogenous construct e-learning
systems quality by the exogenous construct institutional support quality.
10. A moderate predictive power was observed on the endogenous construct e-learning
content quality by the combined effect of three exogenous constructs: institutional
support quality, e-learning systems quality and instructors' e-learning factor.
11. Sound predictive relevance and effect sizes were observed for three relationships: user
satisfaction on systems usage; systems usage on e-learning systems outcome; and user
satisfaction on e-learning systems outcome.

On the other hand, the structural model tests based on the instructors' samples show:

1. Institutional support quality significantly and directly impacted four constructs: systems
usage, user satisfaction, e-learning systems quality and instructors' e-learning factor.
2. E-learning systems quality significantly and directly impacted one construct: e-learning
content quality.
3. E-learning content quality is a determinant of two constructs: user satisfaction and
systems usage.
4. Instructors' e-learning factor significantly and directly impacted two constructs: e-learning
content quality and user satisfaction.
5. User satisfaction significantly determined two constructs: systems usage and e-learning
systems outcome.

6. A substantial predictive power was observed on the endogenous construct systems usage
by the combined effect of six exogenous constructs: institutional support quality,
learners' e-learning factor, instructors' e-learning factor, e-learning systems quality,
user satisfaction and e-learning content quality.
7. A substantial predictive power was observed on the endogenous construct e-learning
systems outcome by the combined effect of two exogenous constructs: user satisfaction
and systems usage.
8. A moderate predictive power was observed on the endogenous construct user
satisfaction by the combined effect of five exogenous constructs: institutional support
quality, learners' e-learning factor, instructors' e-learning factor, e-learning systems
quality, and e-learning content quality.
9. A moderate predictive power was observed on the endogenous construct e-learning
systems quality by the exogenous construct institutional support quality.
10. A moderate predictive power was observed on the endogenous construct e-learning
content quality by the combined effect of three exogenous constructs: institutional
support quality, e-learning systems quality and instructors' e-learning factor.
11. A moderate predictive power was observed on the endogenous construct instructors' e-
learning factor by one exogenous construct: institutional support quality.
12. Sound predictive relevance and effect sizes were observed for the relationships: user
satisfaction on systems usage, systems usage on e-learning systems outcome, and user
satisfaction on e-learning systems outcome.

Accordingly, it can be noted that the study's objective #2 was achieved based on tests of the
structural model on two levels: the students' samples and the instructors' samples.
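For calibration, the labels "substantial" and "moderate" attached to predictive power in the findings above correspond to the qualitative benchmarks commonly applied to R² in PLS path modeling, roughly

\[
R^2 \approx 0.67 \ (\text{substantial}), \qquad R^2 \approx 0.33 \ (\text{moderate}), \qquad R^2 \approx 0.19 \ (\text{weak}),
\]

though the exact thresholds vary by author.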

7.3 Conclusion

This study identified determinants of e-learning systems success, taking the case of EHEIs.
Furthermore, the identified factors were placed in a model which underwent further validation
employing Structural Equation Modeling to determine the predictive power of the model to
measure e-learning systems success in the case of EHEIs. Accordingly, the following main
conclusions are drawn from the study's key findings:

1. The issue of information systems success in general and e-learning systems success in
particular has received the attention of several IS scholars and practitioners. However,
what determines the success of IS or e-learning remains a debatable domain of research,
mainly due to contextual factors. Accordingly, after reviewing relevant empirical studies,
the study findings confirmed that determinants of e-learning systems success are context
dependent, varying with user types, technology and institutional factors in the EHEIs
context. Furthermore, from the study's findings it can be inferred that the issue of
e-learning systems success is complicated by the involvement of different groups of e-
learning stakeholders.

2. The prevalence of Information Technology in higher education institutions is increasing.
E-learning systems are one component of IT that integrates traditional pedagogy (the
conventional teaching and learning process) with the advantages of IT so that the teaching
and learning process can be executed without the limitations of time and place. The study
findings also revealed that EHEIs adopting e-learning systems have strong ambitions to
integrate e-learning systems into the teaching and learning process. However, sustainable
implementation of the system is crippled by multi-dimensional factors. Consequently,
EHEIs are not benefiting from the opportunities created by the use of e-learning systems.
To this end, understanding the determinant factors of e-learning systems success would
provide EHEIs a structured way of sustainably implementing e-learning systems and
realizing the benefits of the investments outlaid on the system. Accordingly, the study
findings brought these determinants into the picture by placing them in a holistic model
that would help EHEIs evaluate their e-learning systems success.

3. The study found eight higher-level categories (i.e. three new higher-level categories and
five existing higher-level categories from the updated ISS model) and thirty-one lower-
level categories which determine e-learning systems success based on the perspectives of
students, instructors, e-learning/ICT experts and ICT directors, taking the case of EHEIs
(see Table 39). Unlike the updated ISS model, these findings imply that e-learning
systems success is multi-factorial and not a function of technology alone.

4. The study proposed a holistic model by placing the identified determinants of e-learning
systems success in the model. To this end, the study findings imply that the model is
explained in a causal and process approach embracing interrelated components: the
creation and sustaining component, the use of the system component, and the impact of
use (systems outcome) component. Therefore, the overall e-learning systems success is
determined based on these three phases or components of the model. Any shortfall in any
of the components will create a negative impact on the overall success of the system.

5. Unlike the updated ISS model, the creation and sustaining component/phase of the e-
learning systems success model is subject to a wide array of factors. Institutional factors
are expressed in terms of: technical support responsible for serving the technical service
requests/needs of instructors and students, typically related to systems usability problems;
motivation provided to instructors and other technical staff involved in creating and
maintaining the system; consistent computer training provided to students and
instructors; protection of the intellectual property of e-learning content developers/
subject matter experts (SMEs); devising an operational e-learning policy, which implies
that leadership in the EHEIs under study is critical to see the opportunities available and
bring them to reality; and availing e-learning infrastructure, which is directly related to
the creation and implementation/sustainability of the system. E-learning systems quality
is expressed in terms of ease of use of the system, user friendliness, systems interactivity,
systems response, systems useful features, systems availability and security. E-learning
content quality is determined in terms of course quality, relevant content and up-to-
dateness. Computer self-efficacy, attitude towards e-learning, creating an e-learning
environment and timely response constitute the instructors' e-learning factor determining
the creation of the system and its sustainability; and the learners' e-learning factor is
determined in terms of learners' attitude towards e-learning and their level of computer
self-efficacy.

6. The study findings suggest that the creation and sustaining component is an antecedent of
the use-of-the-system phase of the model. The study discovered two constructs under the
system use phase of the model (i.e. the user satisfaction and systems usage constructs).
The user satisfaction component of this phase also predicts (as a sufficient but not
necessary condition) the degree of use of the system.

7. From the study findings, it is revealed that the use-of-the-system phase ultimately
determines the impact-of-use phase of the model. It was found that students and
instructors would evaluate use of the system in terms of the benefits it brings in saving
their time and costs and enhancing their performance. This last phase of the model has
sound predictive power, path coefficients, predictive relevance and effect sizes as
impacted by the use-of-the-system phase/component of the model.

8. The model is distinct in terms of the detail it provides in describing each of the higher-
level categories as operational constructs. Accordingly, the study discovered several e-
learning systems success metrics in the context of EHEIs.

7.4 Contributions of the Study

The following sub-sections present the contributions of the study to theory and practice.

7.4.1 The Study’s Contribution to Theory

This sub-section presents the study‘s contributions to theory:

1. A primary contribution of this research is the identification of determinants of e-learning
systems success and their placement in a model for the case of Ethiopian higher education
institutions (EHEIs). To this end, the study contributes a model of e-learning systems
success. Only a few studies have taken into account the diverse views of e-learning
stakeholders to develop and test an e-learning systems success model (Yakubu et al.,
2018; Aparicio et al., 2016). Accordingly, this model is holistic because it represents
diverse perspectives from several different types of e-learning stakeholders, such as
students, instructors, e-learning and ICT experts, and ICT directors (i.e. members of the
top management of the EHEIs under study). The results of the study model confirm the
validity and reliability of the model to measure e-learning systems success across EHEIs.
2. The study also confirmed the validity and reliability of the updated ISS model, with some
modifications, to evaluate e-learning systems success in EHEIs. However, one of the
constructs of the updated ISS model, service quality, was too narrow to represent the
overall institutional support to be provided to users of e-learning systems (mainly,
instructors and students). Accordingly, institutional support quality replaced the service
quality construct of the updated ISS model as a new construct.
3. The study contributes several new measures (i.e. lower level categories) which are not
available under the constructs of the updated ISS model (see Table 39).
4. The study extended the updated ISS model by adding three new constructs: Learners‘ E-
Learning Factor; Instructors‘ E-Learning Factor; and Institutional Support Quality.
5. Alvesson & Kärreman (2007), Johns (2006) and Eisenhardt (1989) pointed out that new
contexts can result in originally theorized relationships becoming non-significant, in
altered directions of relationships among constructs, and in the creation of new
relationships. Each change can result in the creation of new knowledge. To this end, apart
from extending the updated ISS model by adding new constructs, the study found new
relationships. Eleven new relationships were identified in contrast to the updated ISS
model:
 Institutional Support Quality & Systems Usage
 Institutional Support Quality & User Satisfaction
 Institutional Support Quality & Instructors' E-Learning Factor
 Institutional Support Quality & E-Learning Systems Quality
 Institutional Support Quality & E-Learning Content Quality
 E-Learning Systems Quality & E-Learning Content Quality
 Learners' E-Learning Factor & Systems Usage
 Learners' E-Learning Factor & User Satisfaction
 Instructors' E-Learning Factor & E-Learning Content Quality
 Instructors' E-Learning Factor & User Satisfaction
 Instructors' E-Learning Factor & Systems Usage

Furthermore, the relationships between constructs that are present in the updated ISS model but
were not supported in this study are:
o On the students' sample:
 E-Learning Content Quality on User Satisfaction
 E-Learning Content Quality on Systems Usage
o On the instructors' sample:
 E-Learning Systems Quality on User Satisfaction
 E-Learning Systems Quality on Systems Usage
 Systems Usage on E-Learning Systems Outcome
6. DeLone & McLean (2003) did not investigate relationships between exogenous
constructs, and the authors called for further research to investigate such relationships. In
response to this call, our study discovered a new relationship between two exogenous
constructs (i.e. between e-learning systems quality and e-learning content quality), which
can be considered another contribution to the existing IS body of knowledge.
7. Eisenhardt (1989) described a weakness of her proposed theory development framework
by stating that the technique fails to provide guidance on which relationships between
constructs are significant and which are not. Moreover, since the technique does not
provide factor reduction methods, the richness of the data collection/evidence might make
the resultant theory overly complex, lacking one of the quality criteria of a good theory:
parsimony. To this end, this study, while adopting Eisenhardt's theory development
process, employed Structural Equation Modeling to address these weaknesses of the
theory development technique, which might be considered one of the new insights into
this type of theory development.

7.4.2 Contribution to Practice

The study brought the following contributions to practice:

1. The study provided critical factors that determine the success of e-learning systems in
Ethiopian higher education institutions adopting e-learning systems, and also for other
EHEIs in the planning phase of introducing e-learning systems.

2. The study also provided a model based on several e-learning stakeholders that would help
EHEIs practically evaluate the current status of their e-learning systems and guide
structured implementation.
3. The study brings attention to the top management of EHEIs with regard to critical
institutional support factors that determine e-learning systems success.
4. Courses of action on how to improve institutional support quality, e-learning systems
quality and e-learning content quality, enhance instructors' ability to run e-learning based
courses, and improve overall instructors' and students' satisfaction and e-learning systems
usage were conceptualized from the study for practice, based on key e-learning stakeholders.
5. Critical factors that determine e-learning systems success are identified in a causal and
process framework (approach) in order to help a university's top management easily
understand the cause-and-effect relationships among the e-learning systems success
factors. Furthermore, the direct effects between the constructs of the model will help
guide EHEIs in placing different levels of emphasis or priority among the determinants
while implementing e-learning in their respective institutions.
6. Ethiopian higher education institutions and institutions in other similar settings are
provided with ready-made e-learning systems success measurement items that would
serve as quality assurance mechanisms while evaluating the current status of their
e-learning systems applications.

7. Education plays a pivotal role in any society. A change in the way education is delivered
through the use of IT is a change in the way future organizations, groups and societies
will use IT. To this end, the study findings suggest that the inception point of this change
is to sustainably implement a new educational delivery system, such as e-learning, which
integrates online networks across different groups of educational stakeholders so that
they can communicate instantly in an efficient and effective manner. Sustainably
implementing such a new educational system would contribute a massive positive impact
in almost every corner of society. Successfully implementing e-learning systems in
EHEIs might also provide a good training ground for sustainably implementing other
similar ICT-based e-service delivery systems, such as e-government, e-tax, e-banking and
other public/private e-services, in a way that saves cost and time and reduces bureaucracy.

7.5 The Study's Limitations and Implications for Further Research

The study proposed an e-learning systems success model adopting Eisenhardt's theory-building
framework. However, the resultant model is not free from limitations. The following are some of
the limitations of the study that may be addressed by future research:

1. In the first phase of the study, it was found that the EHEIs under study use Moodle.
Accordingly, the study findings are based on the paradigm of e-learning and a specific LMS
usage; the results therefore map onto this specific LMS. The specificity of this study is that
the study respondents used a distinct LMS that met their desired goals. Further studies
assessing other e-learning platforms might be another area of research to validate the
reliability and representativeness of the model across different LMSs/e-learning platforms.
2. With regard to the overall model, future research needs to test or validate the associations
among the different constructs of the model apart from the hypothesized relationships. In this
study, the model was tested on students and instructors who had at least one e-learning
course experience, adopting a cross-sectional study. This minimum criterion was adopted in
order to grasp wider shared perspectives (i.e. adopting theoretical sampling and the theory of
saturation) so as to meet the study's objectives, and also taking into account that e-learning
systems are used in EHEIs voluntarily (not mandatorily), which made it difficult to assemble
more specific focus groups across students and instructors. Furthermore, the educational
promises of the Internet have not been fully embraced or exploited in e-learning across the
EHEIs under study, which also limited the development and usage of e-learning systems.
However, to minimize this limitation, the study employed multiple data collection methods to
triangulate the study findings, performing within-case and across-case analysis. Future
research might test the model across students of different years of study, students and
instructors with different levels of e-learning experience, and different departments or study
disciplines in order to further validate the model. More importantly, the creation and
sustaining components/factors are antecedents of the remaining component factors of the
model, which deserves special attention while validating the model across different system
users.
3. This study was limited to higher education institutions offering undergraduate and post-
graduate programs. This limitation might be an obstacle to generalizing the findings of the
study to other organizations and training institutions adopting e-learning for training and
other purposes. However, the researcher strongly believes that EHEIs and other non-EHEIs
(i.e. business or non-business organizations and training institutions) share similar contexts,
in which case the proposed model might also be applicable across these
organizations/institutions. Accordingly, future research might test the model across these
different organizations and training institutions to further validate or extend the model.
4. In line with Wagner et al. (2008), the study considered e-learning stakeholders who have a
direct impact on the implementation of e-learning systems during the teaching and learning
process. A future expansion is to consider additional stakeholder groups, such as employers,
accreditation bodies and technology providers, while validating the model.
5. In order to exhaustively explore e-learning systems success factors without respondents'
potential language barriers during the first phase of the study, in-depth personal interviews
and focus group discussions were conducted in Amharic, the working language of the
country. The researcher translated the Amharic interviews and focus group discussions into
English. The researcher acknowledges that there might be some distortion of meaning in the
translation. However, such potential distortion was minimized by sending the translated
version to a purposefully selected group of study participants to check whether the English
version correctly represented their views, and by taking assistance from an English translator
while preparing the final version of the transcribed notes.
6. Some important factors, such as cost and time savings, that could measure the
construct/higher-level category "e-learning systems outcome" were found as sub-categories
of this construct. To this end, the researcher came across another research area to explore:
the contributions of these success factors to reducing institutional costs, enhancing
educational quality and promoting organizational competitiveness. Future research might
investigate these areas and extend the model.

References
Abbad, M.M., Morris, D. and Nahlik, C. (2009). Looking under the bonnet: factors affecting
student adoption of e-learning systems in Jordan. The International Review of Research
in Open and Distance Learning, 10(2), 1-22.
Abdellatief, M., Sultan, A.B.M., Jabar, M.A, and Abdullah, A. (2011). A Technique for Quality
Evaluation of E-learning from Developers Perspective. American Journal of Economics
and Business administration, 3 (1), 157-164.
Abdul,G.A.H. (1997). Determinants of computer-mediated communication success among
knowledge workers in Saudi Arabia. Journal of Computer Information Systems, 38(1),
55–66.
Abdul, K.M. R., & Hashim, Y. (2004). The experience of the e-learning implementation at the
Universiti Pendikan Sultan Idris, Malaysia. Malaysian Online Journal of Instructional
Technology, 1(1), 50-59.
Adama Science and Technology University. (2017). Annual E-learning Report.
Adams, D. A., Nelson, R. R., & Todd, P. A. (1992). Perceived usefulness, ease of use, and usage
of information technology: a replication. MIS Quarterly, 16(2), 227–248.
Addison,W., Floropoulos, J., Spathis, C., Halvatzis, D. and Tsipouridou, M. (2010). Measuring
the success of the Greek Taxation Information System. International Journal of
Information Management, Vol. 30 No. 1, 47–56.
Adeyinka, T. and Mutula, S. (2010). A proposed model for evaluating the success of WebCT
course content management system. Computers in Human Behavior, Vol. 26 No. 6,
1795–1805.
Al-Huwail, N., Al-Sharhan, S., & Al-Hunaiyyan, A. (2007). Learning design for a successful
blended e-learning environment: Cultural dimensions. Hawally, Kuwait.
Aladwani, A.M. (2002). Organizational Actions, Computer Attitudes, and End-User Satisfaction
In Public Organizations: An Empirical Study. Journal of End User Computing, 14(1),
42–49.
Albusaidi, K. A., & Al-shihi, H. (2012). Key factors to instructors‘ satisfaction of learning
management systems in blended learning. Journal of Computing in Higher Education,
24(1), 18–39.

Alenezi, A., & Shahi, K. (2015). Interactive E-Learning through Second Life with Blackboard
Technology. Procedia Social and Behavioural Sciences, 176, 891-897.
Alenezi, H., Tarhini, A., & Masa‘deh, R. (2015). Investigating the Strategic Relationship
between Information Quality and EGovernment Benefits: A Literature Review.
International Review of Social Sciences and Humanities, 9 (1), 33-50.
Alexander, S. (2001). E-learning developments and experiences. Education and Training,
43(4/5), 240-248.
Almajali, D., & Al-Lozi, M. (2016). Determinants of the Actual Use of E-Learning Systems: An
Empirical Study on Zarqa University in Jordan. Journal of Social Sciences, 5 (2), 1-29.
Almutairi, H., and Subramanian, G.H. (2005). An empirical application of the DeLone and
McLean model in the Kuwaiti private sector. Journal of Computer Information Systems
45(3), 113–122.
Alsabawy, A.Y. (2013). Measuring E-Learning Systems Success. PhD thesis, University of
Queensland.
Alsabawy, A.Y., Cater,S.A., and Soar,J. (2011). Measuring e-learning systems success (Research
in progress). PACIS Proceedings.
Alvesson, M. and Kärreman, D. (2007). Constructing Mystery: Empirical Matters in Theory
Development. Academy of Management Review, 32(4). 1265-1281.
Anderson, T. (2008). Towards a theory of online learning: in theory and practice of online
learning. Canada: Athabasca University Press.
Anderson, T & Elloumi, F. (2004). Theory and Practice of Online Learning. Athabasca
University, Athabasca: Canada.
Andrade, A. D. (2009). Interpretive research aiming at theory building: Adopting and adapting
the case study design. The Qualitative Report, 14(1), 42-60.
Ang, S., and Soh, C. (1997). User information satisfaction, job satisfaction, and computer
background: an exploratory study. Information & Management, 32(5), 255–266.
Angeli, C., & Valanides, N. (2009). Epistemological and methodological issues for the
conceptualization, development, and assessment of ICT–TPCK: Advances in
Technological Pedagogical Content Knowledge (TPCK). Computers & Education, 52(1),
154-168.

Aparicio, M., Bacao, F., & Oliveira, T. (2016). An e-Learning Theoretical Framework.
Educational Technology & Society, 19 (1), 292–307.
Aparicio, M., Bacao, F., & Oliveira, T. (2016). Cultural impacts on e-learning systems‘ success.
The Internet and Higher Education, 1-44.
Arbaugh, J. B. (2000), Virtual classroom characteristics and student satisfaction with
internetbased MBA courses. Journal of Management Education, 24(1), 32–54.
Arbaugh, J. B. (2002). Managing the on-line classroom: a study of technological and behavioral
characteristics of web-based MBA courses. Journal of High Technology Management
Research, 13, 203–223.
Arbaugh, J. B., & Duray, R. (2002).Technological and structural characteristics, student learning
and satisfaction with web-based courses– An exploratory study of two on-line MBA
programs. Management Learning, 33(3), 331–347.
Agarwal, R., and Prasad, J. (1999). Are individual differences germane to the acceptance of new
information technologies?. Decision Sciences, 30(2), 361–391.
Arkorful, V., and Abaidoo, N. (2014). The role of e-learning, the advantages and disadvantages
of its adoption in Higher Education. International Journal of Education and Research,
2(12), 297-410.
Assefa, T. (2014). Enabling Knowledge Sharing in the Workplace: The Case of Commercial
Bank of Ethiopia (CBE). PhD thesis, Addis Ababa University.
Asyraf, W.M, & Afthanorhan, B.W. (2013). A Comparison of Partial Least Square Structural
Equation Modelling (PLS-SEM) and Covariance Based Structural Equation Modeling
For Confirmatory Factor Analysis. International Journal of Engineering Science and
Innovative Technology (IJESIT), 2(5), 198-205.
Attride-Stirling, J. (2001). Thematic networks: an analytic tool for qualitative research.
Qualitative Research, 1(3), 385-405.
Awang, z., Afthanorhan, A., & Asri, M. (2015). Parametric and Non Parametric Approach in
Structural Equation Modeling (SEM): The Application of Bootstrapping. Modern Applied
Science, 9 (9), 58-67.
Awang, Z, Ahmed, J.H., & Zin, N.M. (2010). Modeling The Multimediator on Motivation
Among Youth In Higher Education Institution Towards Volunteerism Program.
Mathematical Theory and Modelling, 3 (7), 64-70.

Balaban, I., Mu, E. and Divjak, B. (2013). Development of an electronic portfolio system success
model: An information systems approach. Computers & Education, 60 (1), 396–411.
Bandura A. (1982) Self-efficacy mechanism in human agency. Journal of American Psychology,
37, 122-147.
Bandura A. (1986a) The explanatory and predictive scope of self-efficacy theory, Journal of
Society and Clinical Psychology, 4, 359-373.
Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change.
Psychological Review, 84(2), 191-215.
Bandura, A. (1978). Reflections on self-efficacy. Advances in Behavioral Research and Therapy,
1(4), 237-269.
Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory.
Englewood Cliffs, NJ: Prentice Hall.
Bandura, A. (1997). Self-Efficacy: The Exercise of Control, W. H. Freeman and Company.
Barki, H., and Hartwick, J. (1994). User participation, conflict, and conflict resolution: the
mediating roles of influence. Information Systems Research, 5(4), 422–438.
Bass, J. M. (2011). An Early-Stage ICT Maturity Model derived from Ethiopian education
institutions. International Journal of Education and Development using Information and
Communication Technology (IJEDICT), 7(1), 5-25.
Baxter, P., & Jack, S. (2008). Qualitative case study methodology: Study design and
implementation for novice researchers. The Qualitative report, 13(4), 544-559.
Becker, R., & Jokivirta, L. (2007). Online learning in universities: http://www.obhe.ac.uk.
Becker, K., Newton, C., and Sawang, S. (2013). A learner perspective on barriers to e-learning.
Australian Journal of Adult Learning, 53(2), 211-233.
Belcadhi, L. C., & Ghannouchi, S. A. (2015). How to Design an Active e-Course?: Meta Models
to Support the Process of Instructional Design of an Active e-Course. Journal of
Information Technology Research, 8(1), 82–106.
Bell, M., & Bell, W. (2005). It's installed ... now get on with it! Looking beyond the software to
the cultural change. British Journal of Educational Technology, 36(4), 643-656.
Belwal,R., Ayalew, D., & Gaim, M. (2010). The Challenges of the Curtin-AVU- AAU Distance
Learning in Ethiopia: A Case Study. MERLOT Journal of Online Learning and
Technology, 6(4), 799-811.

Bermejo, S. (2005). Cooperative electronic learning in virtual laboratories through forums. IEEE
Transactions on Education, 48(1), 140-149.
Beyene, B. (2006), ―Problems and Prospects of e-learning in Ethiopia‖ A Paper Presented on the
Conference eLearning Africa, Addis Ababa.
Beyene, B. (2010). A Model for an e-Portfolio-based Reflective Feedback: Case Study of e-
learning in Developing Countries: a PHD Thesis, Universtat Hamburg.
Beyth-Marom, R., Chajut, E., Roccas, S., & Sagiv, L. (2003). Internet-assisted versus traditional
distance learning environments: factors affecting students‘ preferences. Computers &
Education, 41(1), 65–76.
Bharati, P. (2002). People and information matter: task support satisfaction from the other side.
Journal of Computer Information Systems, 43(2), 93–102.
Bharati, P., and Chaudhury, A. (2006). Product customization on the web: an empirical study of
factors impacting choiceboard user satisfaction. Information Resources Management
Journal, 19(2), 69–81.
Bhattacherjee, A. (2012). Social Science Research: Principles, Methods and Practices (2nd ed.).
Global Text Project.
Bhattacherjee, A., & Premkumar, G. (2004). Understanding changes in belief and attitude toward
information technology usage: a theoretical model and longitudinal test. MIS Quarterly,
28(2), 229–254.
Bhrommalee, P. (2013). Students‘ attitudes toward e-learning: A case study in a Thai University.
Clute Institute International Conference, pp. 567–577.
Bhuasiri, W., Xaymoungkhoun, O., Zo, H., Rho, J.J. and Ciganek, A.P. (2012). Critical success
factors for e-learning in developing countries: A comparative analysis between ICT
experts and faculty. Computers & Education, 58 (2), 843–855.
Bindhu, A., & Manohar, H.L. (2015). Dimensions of E-learning Effectiveness – A theoretical
Perspective. International Journal of Economic Research, 12(2), 411-416.
Birch, D., & Burnett, B. (2009). Bringing academics on board: Encouraging institution wide
diffusion of e-learning environments. Australasian Journal of Educational Technology,
25(1), 117-134.

Blocher, J. M.; Sujo de Momtes,L., Willis, E. M., & Tucker, G. (2002). Online learning:
Examining the successful student profile. The Journal of Interactive Online Learning,
1(2), 1-12.
Boulesteix, A.L., & Strimmer, K. (2007). Partial least squares: a versatile tool for the analysis of
high-dimensional genomic data. Briefings in Bioinformatics, 8(1), 32-44.
Boyi, A. A. (2014). Education and sustainable national development in Nigeria: challenges and
way forward. International Letters of Social and Humanistic Sciences, 14, 65-72.
Breslin, C., Nicol, D., Grierson, H., Wodehouse, A., Juster, N., & Ion, W. (2007). Embedding an
integrated learning environment and digital repository in design engineering education:
lessons learned for sustainability. British Journal of Educational Technology, 38(5), 805-
816.
Brosnan, M. J. (1999). Modeling technophobia: A case for word processing. Computers in
Human Behavior, 15 (2) 105-121.
Bryman, A. (2012). Social research methods (4th ed.). USA: Oxford University Press.
Bryman, A.,& Bell, E. (2011). Business research methods (3rd ed.). New York: Oxford
University Press, USA.
Bryman, A., Bell, E. (2003). Business Research Methods. Oxford University Press, New York.
Burkhardt, M. E., & Brass, D.J. (1990). Changing Patterns or Patterns of Change: The Effect of a
Change in Technology on Social Network Structure and Power. Administrative Science
Quarterly, 35, 104-127.
Burton,J.A., and Gallivan, M.J. (2007). Toward a deeper understanding of system usage in
organizations: a multilevel perspective. MIS Quarterly, 31(4), 657–680.
Calder, B.J., (1977). Focus groups and the nature of qualitative marketing research. Journal of
Marketing Research, XIV, 353–364.
Caldeira, M.M, and Ward, J.M. (2002). Understanding the successful adoption and use of IS/IT
in SMEs: an explanation from Portuguese manufacturing industries. Information Systems
Journal, 12(2), 121–152.
Carvalho, A., Areal, N., & Silva, J. (2011). Students' perceptions of Blackboard and Moodle in a
Portuguese university. British Journal of Educational Technology, 42(5), 824–841.

Cassidy, S & Eachus, P. (2002). Developing the computer user self-efficacy (CUSE) scale:
Investigating the relationship between computer self-efficacy, gender and experience
with computers. Journal of Educational Computing Research, 26(2), 169-189.
Cenfetelli, R. T., & Bassellier, G. (2009). Interpretation of formative measurement in
information systems research. MIS Quarterly, 689-707.
Charmaz, K. (1996). The search for meanings - grounded theory. In J. A. Smith, R. Harré, & L.
Van Langenhove (Eds.), Rethinking Methods in Psychology (pp. 27-49). London: Sage
Publications.
Chau, P. Y. K. (1997). Re-examining a model for evaluating information center success using a
structural equation modeling approach. Decision Sciences, 28(2), 309–334.
Chau, P.Y.K., and Hu, P.J. (2002). Examining a model of information technology acceptance by
individual professionals: an exploratory study. Journal of Management Information
Systems, 18(4), 191–230.
Chen, H. (2010). Linking employees‘ e-learning system use to their overall job outcomes: An
empirical study based on the IS success model. Computers & Education, Vol. 55 No. 4,
1628–1639.
Chen, W., & Hirschheim,R. (2004). A paradigmatic and Methodological examination of
Information systems research. Information Systems journal, 14(3), 197-235.
Cheng, Y.Y. (2012). Effects of quality antecedents on e-learning acceptance. Internet Research,
22 (3), 361–390.
Chiu, C.M., Chiu, C.S., & Chang, .C. (2007). Examining the integrated influence of fairness and
quality on learners‘ satisfaction and web-based learning continuance intention.
Information Systems Journal, 17(3), 271-287.
Chiu, C.M., Hsu, M.H., & Sun, S.Y. (2005). Usability, quality, value, and e-learning continuance
decisions. Computers & Education, 45, 399-416.
Chiu, C. M., & Wang, E. T. G. (2008). Understanding web-based learning continuance intention:
the role of subjective task value. Information & Management, 45(3), 194–201.

Cho, V., Cheng, T.C.E. and Hung, H. (2009). Continued usage of technology versus situational
factors: an empirical analysis. Journal of Engineering and Technology Management,
26(4), 264–284.
Choe, J.M. (1996). The relationships among performance of accounting information systems,
influence factors, and evolution level of information systems. Journal of Management
Information Systems, 12(4), 215–239.
Chu, R. J., & Chu, A. Z. (2010). Multi-level analysis of peer support, internet self-efficacy and
e-learning outcomes – the contextual effects of collectivism and group potency.
Computers & Education, 55(1), 145–154.
Chuttur, M. (2009). Overview of the technology Acceptance model: Origins, developments and
Future Directions. Indiana University, USA, Working papers on Information Systems,
9(37).
Chwelos, P., Benbasat, I., & Dexter, A. S. (2001). Empirical test of an EDI adoption model.
Information Systems Research, 12(3), 304–321.
Clark, R., & Mayer, R. (2011). E-Learning and the Science of Instruction: Proven Guidelines for
Consumers and Designers of Multimedia Learning (3rd ed.). Pfeiffer.
Cohen, E.B, & Nycz, M. (2006). Learning objects and e-learning: An informing science
perspective. Interdisciplinary Journal of Knowledge and Learning Objects, 2, 23-34.
Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences. Hillsdale NJ: Lawrence
Erlbaum.
Collis, J. and Hussey, R. (2003). Business Research: A Practical Guide for Undergraduate and
Postgraduate Students (2nd ed.). Basingstoke: Palgrave Macmillan.
Collopy, F. (1996). Biases in retrospective self-reports on time use: an empirical study of
computer users. Management Science, 42(5), 758–767.
Compeau, D. R., & Higgins, C. A. (1995). Application of social cognitive theory to training for
computer skills. Information Systems Research, 6(2), 118-143.
Compeau, D. R., Higgins, C. A., & Huff, S. (1999), Social cognitive theory and individual
reactions to computing technology: A longitudinal study. MIS Quarterly, 23(2), 145-158.
Coombs, C.R., Doherty, N.F., and Loan, J. (2001), The importance of user ownership and
positive user attitudes in the successful adoption of community information systems.
Journal of End User Computing, 13(4), 5–16.

Cooper, P. A. (1993). Paradigm Shifts in Designed Instruction: From Behaviorism to
Cognitivism to Constructivism. Educational technology, 33(5), 12–19.
Cooper, D.R., & Schindler, P.S. (2006). Business Research Methods. (11th ed.), McGraw-Hill.
Creswell, J. (2007) Qualitative inquiry and Research Design: Choosing among Five Approaches
(2nd ed.). Thousand Oaks, CA: Sage.
Craik, F. I. M., & Lockhart, R. S. (1972). Levels of processing: A framework for memory
research. Journal of Verbal Learning and Verbal Behavior, 11, 671-684.
Craik, F. I., & Tulving, E. (1975). Depth of processing and the retention of words in episodic
memory. Journal of Experimental Psychology: General, 104(3), 268.

Creswell, J. (2003). Research design: Qualitative, quantitative, and mixed methods approaches
(2nd ed.).Thousand Oaks, CA: Sage.
Crossler, R., Johnston, B., Lowry, C., Hu, Q., & Baskerville, R. (2013). Future directions for
behavioral information security research. Computers & Security, 32, 90-101.
Czerniewicz, L. and Brown, C. (2009). A study of the relationship between institutional policy,
organizational culture and e-learning use in four South African universities. Computers &
Education, 53(1). 121–131.
D‘ambra, J., and Rice, R.E. (2001). Emerging factors in user evaluation of the World Wide Web.
Information & Management, 38(6), 373–384.
Dania, P. O. & Enakrire, R. (2012).The utilization of information and communication technology
(ICTs) for effective teaching of social studies in secondary schools in Delta state. Prime
Research on Education (PRE), 2(10), 378-389.
Darawsheh, S., ALshaar, A., & AL-Lozi, M. (2016).The Degree of Heads of Departments at the
University of Dammam to Practice Transformational Leadership Style from the Point of
View of the Faculty Members. Journal of Social Sciences (COES&RJ-JSS), 5 (1), 56-79.

Darke, P., Shanks, G., & Broadbent, M. (1998). Successfully completing case study research:
Combining rigor, relevance and pragmatism. Information Systems Journal, 8, 273-289.
Davis, F. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information
technology. MIS quarterly, 319–340.
Debus, M.(1990). Handbook for excellence in focus group research. Washington, D.C.:
Academy for Educational Development.

Delbridge, R., & Kirkpatrick, I. (1994). Theory and practice of participant observation. In V.
Wass & P. Wells (Eds.), Principles and Practice in Business and Management Research
(pp. 35-62). Aldershot: Dartmouth.
DeLone, W., & McLean, E. (2003). The DeLone and McLean model of information systems
success: a ten-year update. Journal of Management Information Systems, 19(4), 9–30.
DeLone, W., & McLean, E. (2002) Information systems success revisited. In Proceedings of the
35th Hawaii International Conference on System Sciences (SPRAGUE JR RH, Ed), IEEE
Computer Society, Hawaii, US, p 238.
DeLone, W., & McLean, E. (2004). Measuring e-commerce success: applying the DeLone &
McLean information systems success model. International Journal of Electronic
Commerce, 9(1), 31–47.
Demirkan, H., Goul, M., & Gros, M. (2010). A Reference Model for Sustainable E-Learning
Service Systems: Experiences with the Joint University/Teradata Consortium. Decision
Sciences Journal of Innovative Education, 8(1), 151-189.
Derouin, R. E., Fritzsche, B. A., & Salas, E. (2005).E-Learning in organizations. Journal of
Management, 31(6), 920-940.
Devaraj, S, Fan, M., and Kohli, R. (2002). Antecedents of B2C channel satisfaction and
preference: validating e-commerce metrics. Information Systems Research, 13(3), 316–
333.
Donoghue, S. L. (2006). Institutional Potential for Online Learning: A Hong Kong Case Study.
Educational Technology &Society, 9 (4), 78-94.
Dorobăţ,I. (2014). Models for Measuring E-Learning Success in Universities: A Literature
Review. Informatica Economică, 18 (3).77-90.
Downes, S. (2006). An introduction to connective knowledge. Presented at the International
Conference on Media, Knowledge & Education: Exploring New Spaces, Relations and
Dynamics in Digital Media Ecologies.

Dwivedi, R., Khurana, V., and Saxena, A. (2012), Empirical study on predictors of student
learning satisfaction from web based learning systems. International journal of advanced
research in computer engineering & technology, 1 (10).

Dwivedi, Y. K., Wastell, D., Laumer, S., Henriksen, H. Z., Myers, M. D., Bunker, D.,
Srivastava, S. C. (2015). Research on information systems failures and successes: Status
update and future directions. Information Systems Frontiers, 1–15.
Eachus, P., & Cassidy, S. (2006). Development of the Web Users Self-Efficacy Scale (WUSE).
Issues in Informing Science and Information Technology, 3.
Edgerton, R. B. (1970). Methods in psychological anthropology. In R. Naroll & R. Cohen (Eds.),
A handbook of method in cultural anthropology (pp. 338-352).New York, NY: Columbia
University.
Eisenhardt, K.M. (1989). Building theories from case study research. The Academy of
Management Review, 14 (4), 532-550.
Eisenhardt, K.M., & Graebner, M. E. (2007). Theory building from cases: Opportunities and
challenges. Academy of Management Journal, 50(1), 25–32.
Eke, H.N. (2011). Modeling LIS students‘ intention to adopt e-learning: a case from University
of Nigeria, Nsukka. Library Philosophy and Practice.
Ellis, R. (2009). Students' experiences of e-learning in higher education: The ecology of
sustainable innovation. New York: Routledge.
Ellis, R. A., Ginns, P., & Piggott, L. (2009). E-learning in higher education: Some key aspects
and their relationship to approaches to study. Higher Education Research &
Development, 28(3), 303.
Eom, S.B. (2011). Relationships among e-learning systems and e-learning outcomes: A path
analysis model. Human Systems Management, 30(12), 229-241.
Eom, S.B., & Ashill, N. (2018). A system's view of e-learning success model. Decision
Sciences Journal of Innovative Education, 16(1), 42-76.
Eom, S.B., Wen, H.J., & Ashill, N. (2006). The determinants of students' perceived learning
outcomes and satisfaction in university online education: An empirical investigation.
Decision Sciences Journal of Innovative Education, 4(2), 215-235.
Ertmer, P. A., & Newby, T. J. (1993). Behaviorism, cognitivism, constructivism: Comparing
critical features from an instructional design perspective. Performance Improvement
Quarterly, 6(4), 50–72.
Ethiopian Central Statistical Authority (ECSA). (2018).

Ethiopian Ministry of Education. (2018). Annual educational statistical report.
http://www.moe.gov.et.
Ethiopian Ministry of Information and Technology. (2018). Annual Bulletin. Addis Ababa,
Ethiopia.
Farahat, T. (2012). Applying the technology acceptance model to online learning in the Egyptian
Universities. Procedia - Social and Behavioral Sciences, 64, 95–104.
Federal Ministry of Education of Ethiopia, Education Management Information System (EMIS)
and ICT Directorate. (2016/17). Education Statistics Annual Abstract. Addis Ababa:
MoE.
Fishbein, M. and Ajzen, I. (1975), Belief, Attitude, Intention and Behavior: An Introduction to
Theory and Research.
Fitzgerald, G., and Russo, N.L. (2005). The turnaround of the London ambulance service
computer-aided dispatch system (LASCAD). European Journal of Information Systems,
14(3), 244–257.
Floropoulos, J., Spathis, C., Halvatzis, D., & Tsipouridou, M. (2010). Measuring the success of
the Greek taxation information system. International Journal of Information
Management, 30(1), 47-56.
Folden, R. W. (2012). General perspective in learning management systems. In R. Babo &
A. Azevedo (Eds.), Higher education institutions and learning management systems:
Adoption and standardization (pp. 1-27). Hershey, PA: IGI Global.
Forest, J.J., & Kinser, K. (2002). Higher education in the United States: An encyclopedia.
Santa Barbara: ABC-CLIO.
Fornell, C., & Cha, J. (1994). Partial least squares. In R.P. Bagozzi (Ed.), Advanced Methods
of Marketing Research (pp. 152-178). Blackwell.
Fornell, C., & Larcker, D.F. (1981). Evaluating structural equation models with unobservable
variables and measurement error. Journal of Marketing Research, 18, 39-50.
Freeze, R. D., K. A. Alshare, P. L. Lane, and H. J. Wen. (2010). IS Success Model in E-learning
Context based on Students‘ Perceptions. Journal of Information Systems Education, (21)
2, 173-184.
Friesen, N. (2009). Re-thinking e-learning research: Foundations, methods and practices. New
York: Peter Lang.

Gable, G., Sedera, D., & Chan, T. (2003). Enterprise systems success: A measurement model.
In Proceedings of the Twenty-Fourth International Conference on Information Systems.
Gable, G. G. (1994). Integrating Case Study and Survey Research Methods: An Example in
Information Systems, European Journal of Information Systems, 3 (2), 112-126.
Gable, G.G., Sedera, D., & Chan, T. (2008). Re-conceptualizing information system success:
The IS-impact measurement model. Journal of the Association for Information Systems,
9(7), 377-408.
Garg, P., & Garg, A. (2013). An empirical study on Critical Failure factors for Enterprise
Resource Planning implementation in Indian Retail Sector. Business Process
Management Journal, 19(3), 1-20.
Gatian, A.W. (1994). Is user satisfaction a valid measure of system effectiveness? Information
& Management, 26(3), 119–131.
Gay, G., & Dringus, L. (2012). Measuring technological e-learning readiness and effectiveness
in the online learning environment. 18th Annual Sloan-C International Conference on
Online Learning.
Gefen, D., & Keil, M. (1998). The impact of developer responsiveness on perceptions of
usefulness and ease of use: An extension of the technology acceptance model. The Data
Base for Advances in Information Systems, 29(2), 35–49.
Gefen, D., Rigdon, E.E., & Straub, D. (2011). Editor's comments: An update and extension to
SEM guidelines for administrative and social science research. MIS Quarterly, 35(2).
Gefen, D., Straub, D., & Boudreau, M.C. (2000). Structural equation modeling and regression:
Guidelines for research practice. Communications of the AIS, 4(7), 1-78.
Gelderman, M. (2002). Task difficulty, task variability and satisfaction with management
support systems. Information & Management, 39(7), 593–604.
Gist, M. E., Schwoerer, C., & Rosen, B. (1989). Effects of alternative training methods on
self-efficacy and performance in computer software training. Journal of Applied
Psychology, 74(6), 884-891.

Glass, R.L. (2005). IT failure rates: 70% or 10-15%? IEEE Software, 22(3), 111-112.

Glaser, B., & Strauss, A. (1967). The discovery of grounded theory: Strategies for qualitative
research. London: Weidenfeld and Nicolson.

Gombachika, H.S.H. (2013). ICT readiness and acceptance among TEVT students in University
of Malawi. Campus-Wide Information Systems, 30(1), 35–43.
Goodhue, D. L., Klein, B. D., & March, S. T. (2000). User evaluations of IS as surrogates for
objective performance. Information & Management, 38, 87–101.
Goodhue, D.L., and Thompson, R. (1995). Task-technology fit and individual performance. MIS
Quarterly, 19(2), 213–236.
Gorla, N., Somers, T.M., & Wong, B. (2010). Organisational impact of system quality,
information quality, and service quality. The Journal of Strategic Information Systems,
19(3), 207-228.
Govindasamy, T. (2001). Successful implementation of e-learning: Pedagogical considerations.
The Internet and Higher Education, 4(3), 287–299.
Graham, C. R. (2011). Theoretical Considerations for Understanding Technological Pedagogical
Content Knowledge (TPACK). Computers & Education, 57(2011), 1953–1969.
Guimaraes, T., Yoon, Y., & Clevenson, A. (1996). Factors important to expert system success:
A field test. Information & Management, 30(3), 119–130.
Guimaraes, T., & Igbaria, M. (1997). Client/server system success: Exploring the human side.
Decision Sciences, 28(4), 851–876.
Gunga, S., & Ricketts, I. (2007). Facing the challenges of e-learning initiatives in African
universities. British Journal of Educational Technology, 38(5), 896–906.
Gunn, C. (2010). Sustainability factors for e-learning initiatives. ALT-J, Research in Learning
Technology, 18(2), 89-103.
Guri-Rosenblit, S. (2005). 'Distance education' and 'e-learning': Not the same thing. Higher
Education, 49(4), 467-493.
Hagos, Y., & Negash, S. (2014). The adoption of e-learning systems in low income countries:
The case of Ethiopia. International Journal for Innovation, Research and Education,
2(10), 79-82.
Hair, J.F., Hult, G.T.M., Ringle, C.M., Sarstedt, M., & Thiele, K.O. (2017). Mirror, mirror on the
wall: A comparative evaluation of composite-based structural equation modeling methods.
Journal of the Academy of Marketing Science, 45, 616-632.
Hair Jr, J. F., Hult, G. T. M., Ringle, C., & Sarstedt, M. (2013). A primer on partial least squares
structural equation modeling (PLS-SEM). Sage Publications.

Hair, J.F., Ringle, C.M., & Sarstedt, M. (2012). Partial Least Squares: The Better Approach to
Structural Equation Modeling? Long Range Planning, 45(5-6), 312-319.
Hair, J.F., Ringle, C.M., & Sarstedt, M. (2011). PLS-SEM: Indeed a silver bullet. The Journal of
Marketing Theory and Practice, 19(2), 139-152.
Hair, J., Black, W., Babin, B., Tatham, R., & Anderson, R. (2010). Multivariate data analysis.
Upper Saddle River, NJ: Prentice Hall.
Hajir, J. A., Obeidat, B. Y., & Al-dalahmeh, M. A. (2015). The Role of Knowledge Management
Infrastructure in Enhancing Innovation at Mobile Telecommunication Companies in
Jordan. European Journal of Social Sciences, 50 (3), 313-330.
Halawi, L.A., McCarthy, R.V., & Aronson, J.E. (2007). An empirical investigation of
knowledge-management systems' success. The Journal of Computer Information
Systems, 48(2), 121–135.
Hakim, C. (2000). Research Design: Successful Designs for Social and Economic Research (2nd
ed.). London: Routledge.
Hare, H. (2007). Survey of ICT and Education in Africa: Ethiopia country report. World Fact
Book.
Hariri, M. M. (2013). Effective use of LMS (Learning Management System) in teaching
graduate geology course at KFUPM, Saudi Arabia. Paper presented at Fourth
International Conference on the e-learning.
Hartwick, J., & Barki, H. (1994). Explaining the role of user participation in information system
use. Management Science, 40(4), 440–465.
Hassanzadeh, A., Kanaani, F., & Elahi, S. (2012). A model for measuring e-learning systems
success in universities. Expert Systems with Applications, 39, 10959–10966.
Henseler, J., Hubona, G.S., & Ray, P.A. (2016). Using PLS path modeling in new technology
research: Updated guidelines. Industrial Management & Data Systems, 116(1), 1-19.

Henseler, J., Dijkstra, T.K., Sarstedt, M., Ringle, C.M., Diamantopoulos, A., Straub, D. W.,
Ketchen, D. J., Hair, J.F., Hult, G.T.M., & Calantone, R.J. (2014). Common beliefs and
reality about partial least squares: Comments on Rönkkö & Evermann (2013).
Organizational Research Methods, 17(2), 182-209.
Henseler, J., Ringle, C. M., & Sinkovics, R. (2009). The use of partial least squares path modeling
in international marketing. Advances in International Marketing (AIM), 20, 277-320.

Hevner, A.R., March, S.T., Park, J., & Ram, S. (2004). Design science in information systems
research. MIS Quarterly, 28(1), 75-105.
Hill, T., Smith, N.D., and Mann, M.F. (1986). Communicating Innovations: Convincing
Computer Phobics to Adopt Innovative Technologies, Advances in Consumer Research
13, 419-422.
Hirschheim, R. (2005). The Internet-based education bandwagon: Look before you leap.
Communications of the ACM, 48(7), 97-101.
Ho, C., and Dzeng, R. (2010). Construction safety training via e-Learning: Learning
effectiveness and user satisfaction. Computers & Education, 55, 858–867.
Holcomb, L. B., Brown, S. W., Kulikowich, J. M. & Zheng, D. (2003). Raising educational
technology self-efficacy through assessment. American Psychological Association
Conference.
Holman, R., & Glas, C.A.W. (2005). Modeling non‐ignorable missing‐data mechanisms with
item response theory models. British Journal of Mathematical and Statistical Psychology,
58(1), 1-17.
Holsapple, C.W., & Lee-Post, A. (2006). Defining, assessing, and promoting e-learning
success: An information systems perspective. Decision Sciences Journal of Innovative
Education, 4(1), 67-85.
Hong, K. S. (2002). Relationships between students' and instructional variables with satisfaction
and learning from a Web-based course. Internet and Higher Education, 5, 267–281.
Hong, W., Thong, J.Y.L., Wong, W.M., & Tam, K.Y. (2001). Determinants of user acceptance of
digital libraries: An empirical examination of individual differences and system
characteristics. Journal of Management Information Systems, 18(3), 97–124.
Howard, G. S., & Mendelow, A. L. (1991). Discretionary use of computers: An empirically
derived explanatory model. Decision Sciences, 22, 241-265.
Hsieh, P.A.J., & Cho, V. (2011). Comparing e-learning tools' success: The case of instructor–
student interactive vs. self-paced tools. Computers & Education, 57(3), 2025-2038.
Hsieh, J.J.P.A., and Wang, W. (2007). Explaining employees‘ extended use of complex
information systems. European Journal of Information Systems, 16(3), 216–227.

Hsu, S.H., Chen, W.H., & Hsieh, M.J. (2006). Robustness testing of PLS, LISREL, EQS, and
ANN-based SEM for measuring customer satisfaction. Total Quality Management &
Business Excellence, 17(3), 355-372.
Hsu, W.K., & Huang, S.S. (2006). Determinants of computer self-efficacy: An examination of
learning motivations and learning environments. Journal of Educational Computing
Research, 35(3), 245-265.
Hung, D. (2001). Theories of learning and computer-mediated instructional technologies.
International Council for Education Media.
Hutchins, H.M., & Hutchison, D. (2008). Cross-disciplinary contributions to e-learning design: A
tripartite design model. Journal of Workplace Learning, 20(5), 364-380.
Igbaria, M., and Tan, M. (1997). The consequences of information technology acceptance on
subsequent individual performance. Information & Management, 32(3), 113–121.
Igbaria, M., & Parasuraman, S. (1989). A path analytic study of individual characteristics,
computer anxiety and attitudes toward microcomputers. Journal of Management, 15,
373-388.
Iivari, J. (2005). An empirical test of DeLone-McLean model of information systems success.
The DATA BASE for Advances in Information Systems, 36(2), 8–27.
Imenda, S. (2014). Is there a conceptual difference between theoretical and conceptual
frameworks? Journal of Social Sciences, 38(2), 185-195.
Islam, A.K.M. (2011). Extending information system continuance theory with system quality in
e-learning context. Paper presented at the AMCIS. Detroit, U.S.A.
Jagannathan, V., Balasubramanian, S., & Natarajan, T. (2018). An extension to the DeLone and
McLean information systems success model and validation in the internet banking
context. In M. Khosrow-Pour (Ed.), Encyclopedia of Information Science and
Technology (4th ed., pp. 49-60).
Jenkins, M., Browne, T., Walker, R., & Hewitt, R. (2011). The development of technology
enhanced learning: findings from a 2008 survey of UK higher education institutions.
Interactive Learning Environments, 19(5), 447-465.
Jennex, M.E., and Olfman, L. (2002). Organizational memory/knowledge effects on
productivity: a longitudinal study. In Proceedings of the Thirty-Fifth Hawaii

International Conference on System Sciences (SPRAGUE JR RH, Ed), p 109, IEEE
Computer Society Press, Big Island, Hawaii, USA.
Jiang, J., Klein, G., & Carr, C. (2002). Measuring information system service quality:
SERVQUAL from the other side. MIS Quarterly, 26(2), 145–166.
Johns, G. (2006). The Essential Impact of Context on Organizational Behavior. Academy of
Management Review, 31(2), 386-408.
Kahn, R., & Cannell, C. (1957). The Dynamics of Interviewing. New York and Chichester:
Wiley.
Kaplan, R., & Norton, D. (1996).Translating Strategy into Action: The Balanced Scorecard.
Harvard Business School Press, Boston.
Karaaslan, A.I. (2013). The effect of banking personnel‘s access to e-learning opportunities on
their professional achievement. Turkish online journal of educational technology, 12(2),
269-280.
Karsten, R. A., and Roth, R. M. (1998), The Relationship of Computer Experience and
Computer Self-efficacy to Performance in Introductory Computer Literacy Courses.
Journal of Research on Computing in Education, 31(1), 14-24.
Kashorda, M., & Waema, T. (2014). E-Readiness survey of Kenyan Universities (2013) report.
Nairobi: Kenya Education Network.
Kattoua, T., Al-Lozi, M., & Alrowwad, A. (2016). A review of literature on e-learning systems in
higher education. International Journal of Business Management and Economic
Research, 7(5), 754-762.
Keegan, D. (2005). The incorporation of mobile learning into mainstream education and training.
Paper presented at the World Conference.
Ketema, T., & Nirmala, M. (2015). The Impact of E-Learning to Enhance Quality of Education:
The Case of Adama Science and Technology University. African Journal of Computing
& ICT, 8(1), 155-162.
Kettinger, W.J., and Lee, C.C. (1994). Perceived service quality and user satisfaction with the
information services function. Decision Sciences, 25(5), 737–766.
Khalil, O., and Elkordy, M.M. (1999). The relationship between user satisfaction and systems
usage: empirical evidence from Egypt. Journal of End User Computing, 11(2), 21–28.

Khan, B. (2005). Managing e-learning strategies: Design, delivery, implementation and
evaluation. Hershey, London, Melbourne, Singapore: Information Science Publishing.
Khan, M. S., Hasan, M., & Clement, C. K. (2012). Barriers to the introduction of ICT into
education in developing countries: The example of Bangladesh. International Journal of
Instruction, 5(2).
Kidd, P.S., & Parshall, M.B. (2000). Getting the focus and the group: Enhancing analytical
rigor in focus group research. Qualitative Health Research, 10(3), 293-308.
Kim, J., Lee, J., Han, K., and Lee, M. (2002). Business as buildings: metrics for the architectural
quality of internet businesses. Information Systems Research, 13(3), 239–254.
Kirk, J., & Miller, M. (1986). Reliability and validity in qualitative research. Beverly Hills, CA:
Sage.
Kirkwood, A. (2009). E‐learning: you don't always get what you hope for. Technology,
Pedagogy and Education, 18(2), 107–121.
Klein, R. (2007). An empirical examination of patient-physician portal acceptance. European
Journal of Information Systems, 16(6), 751–760.
Koehler, M. J., & Mishra, P. (2009).What is technological pedagogical content knowledge?
Contemporary Issues in Technology and Teacher Education, 9(1).
Koehler, M. J., & Mishra, P. (2008). Introducing TPCK. AACTE Committee on Innovation and
Technology. The handbook of technological pedagogical content knowledge (TPCK) for
educators (pp. 3–29). Mahwah, NJ: Lawrence Erlbaum Associates.
Koehler, M., Mishra, P., & Cain, W. (2013). What is technological pedagogical content
knowledge (TPACK)? Journal of Education, 193(3), 13-19.
Kositanurit, B, Ngwenyama, O., and Osei, K. (2006). An exploration of factors that impact
individual performance in an ERP environment: an analysis using multiple analytical
techniques. European Journal of Information Systems, 15(6), 556–568.
Koufaris, M. (2002). Applying the technology acceptance model and flow theory to online
consumer behavior. Information Systems Research, 13(2), 205–223.
Krosnick, J.A., Holbrook, A.L., Berent, M.K., Carson, R.T., Hanemann, W.M., Kopp, R.J., &
Smith, V.K. (2002). The impact of "no opinion" response options on data quality: Non-
attitude reduction or an invitation to satisfice? Public Opinion Quarterly, 66(3), 371-403.
Krueger, R. A. (1998). Developing questions for focus groups. Thousand Oaks, CA: Sage.

Kulkarni, U., Ravindran, S., & Freeze, R. (2006). A knowledge management success model:
theoretical development and empirical validation. Journal of Management Information
Systems, 23(3), 309–347.
Landrum, H., & Prybutok, V. (2004). A service quality and success model for the information
service industry. European Journal of Operational Research, 156(3), 628–642.
Larsen, T. J., Sorebo, A. M., & Sorebo, Ø. (2009). The role of task-technology fit as users'
motivation to continue information system use. Computers in Human Behavior,
25(3),778-784.
Leclercq, A. (2007). The perceptual evaluation of information systems using the construct of
user satisfaction: case study of a large French group. The Database for Advances in
Information Systems, 38(2), 27–60.
Lee, B., Yoon, J. and Lee, I. (2009). Learners‘ acceptance of e-learning in South Korea:
Theories and results. Computers & Education, 53 (4), 1320–1329.
Lee, T., & Lee, J. (2006). Quality assurance of web based e-learning for statistical
education. COMPSTAT: Proceedings in Computational Statistics: 17th Symposium,
Rome.
Lee, J.K., & Lee, W.K. (2008). The relationship of e-learner's self-regulatory efficacy and
perception of e-learning environmental quality. Computers in Human Behavior, 24(1),
32-47.
Lee, M. (2010). Explaining and predicting users‘ continuance intention toward e-learning: An
extension of the expectation – confirmation model. Computers & Education, 54(2), 506–
516.
Lee-Post, A. (2009). E-Learning Success Model: an Information Systems Perspective. Electronic
Journal of e-Learning, 7(1), 61 – 70.
Legris, P., Ingham, J., Collerette, P. (2003). Why do people use information technology? A
critical review of the technology acceptance model. Information & Management, 40(3),
191–204.
Leonard-Barton, D., & Sinha, D.K. (1993). Developer–user interaction and user satisfaction in
internal technology transfer. Academy of Management Journal, 36(5), 1125–1139.
Lewis, D. (1970). How to Define Theoretical Terms. The Journal of Philosophy, 67 (13), 427-
446.

Li, F. W., Lau, R. W., & Dharmendran, P. (2009). A three-tier profiling framework for adaptive
e-learning. Proceedings of the 8th International Conference on Advances in Web Based
Learning, Aachen.
Liao, H., & Lu, H. (2008). Richness versus parsimony antecedents of technology adoption model
for E-learning websites.
Liaw, S. S., Chen, G. D., & Huang, H. M. (2008). Users‘ attitudes toward Web-based
collaborative learning systems for knowledge management. Computers & Education,
50(3), 950–961.
Liaw, S. S., & Huang, H. M. (2011). A study of investigating learners‘ attitudes toward e-
learning. 2011 5th International Conference on Distance Learning and Education,
12(2011), IACSIT Press, Singapore, 28-32.
Liaw, S.S., Huang, H.M., & Chen, G.D. (2007). Surveying instructor and learner attitudes
towards e-learning. Computers & Education, 49(4), 1066-1080.
Limayem, M., & Cheung, C. M. K. (2008). Understanding information systems continuance:
The case of Internet-based learning technologies. Information & Management, 45(4),
227-232.
Lin, H.F. (2008). Determinants of successful virtual communities: Contributions from system
characteristics and social factors. Information & Management, 45 (8), 522–527.
Lin, W. and Wang, C.(2012). Antecedences to continued intentions of adopting e-learning
system in blended learning instruction: A contingency framework based on models of
information system success and task-technology fit. Computers & Education, 58 (1), 88–
99.
Lin, H.F. (2007). Measuring online learning systems success: Applying the updated DeLone
and McLean model. Cyberpsychology & Behavior, 10(6), 817-820.
Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic Inquiry. Beverly Hills, CA: Sage.
Lucas, H. C., & Spitler, V. K. (1999). Technology use and performance: A field study of broker
workstations. Decision Sciences, 30, 291-311.
Lwoga, E. T. (2011). Making Web 2.0 Technologies work for Higher Learning Institutions in
Africa. Campus-Wide Information Systems, 29(2), 90-107.
Lwoga, E. (2012). Making learning and web 2.0 technologies work for higher learning
institutions in Africa. Campus-Wide Information Systems, 29 (2), 90–107.

Lwoga, E. (2014). Critical success factors for adoption of web-based learning management
systems in Tanzania. International Journal of Education and Development using
Information and Communication Technology (IJEDICT), 10(1), 4-21.
Lykins, L. (2011). Mobile learning: learning in the palm of your hand.
MacKenzie, S.B., Podsakoff, P.M., & Podsakoff, N.P. (2011). Construct measurement and
validation procedures in MIS and behavioral research: Integrating new and existing
techniques. MIS Quarterly, 35(2), 293-334.
Malikowski, S. R., Thompson, M. E., & Theis, J. G. (2007). A model for research into course
management systems: Bridging technology and learning theory. Journal of Educational
Computing Research, 36(2), 149–173.
Mallat, N. (2007). Mobile Banking Services. Communications of the ACM, 47 (5), 42–46.
Manduku, J., Kosgey, A., & Sang, H. (2012). Adoption and use of ICT in enhancing
management of public secondary schools: A survey of Kesses zone secondary schools in
Wareng District of Wasin Gishu County, Kenya. Available at www.iiis.org.
Maqableh, M. M., Mohammed, A. B., & Masa‘deh, R. (2016). Modeling Teachers Influence on
Learners Self-Directed Use of Electronic Commerce Technologies outside the
Classroom. Scientific Research and Essays, 11 (3), 29-41.
Marble, R. (2003). A system implementation study: Management Commitment to Project
Management. Information & Management, 41(1), 111–123.
Markus, M.L, and Keil, M. (1994). If we build it, they will come: designing information systems
that people want to use. Sloan Management Review, 35(4), 11–25.
Marshall, S. (2012). Improving the quality of e-learning: lessons from the eMM. Journal of
Computer Assisted Learning, 28(1), 65-78.
Masa'deh, R., Tarhini, A., Bany, M. A., & Maqableh, M. (2016). Modeling factors affecting
students' usage behaviour of e-learning systems in Lebanon. International Journal of
Business and Management, 11(2), 299-312.
Mason, R. (1998). Globalizing education: Trends and application. London: Routledge.
Mason, R. & Rennie F. (2006). E-learning: The key concepts. Abingdon Great Britain:
Routledge.
Masrom, M. (2007). Technology Acceptance Model and E-learning. 12th International
Conference on Education, Sultan Hassanal Bolkiah Institute of Education, 1-10.

Masrom, N., Zainon, O., & Rahiman, R. (2008). Critical success in e-learning: An examination
of technological and institutional support factors. International journal of Cyber Society
and Education, 1(2), 131-142.
Tedre, M., Ngumbuke, F., & Kemppainen, J. (2010). Infrastructure, human capacity, and high
hopes: A decade of development of e-learning in a Tanzanian HEI. Revista de
Universidad y Sociedad del Conocimiento, 7(1), 7–20.
McGill, T., Hobbs, V., & Klobas, J. (2003). User-developed applications and information
systems success: A test of DeLone and McLean's model. Information Resources
Management Journal, 16(1), 24–45.
McGill, T.J., & Klobas, J.E. (2005). The role of spreadsheet knowledge in user-developed
application success. Decision Support Systems, 39(3), 355–369.
McGill, T. J., Klobas, J. E., & Renzi, S. (2014). Critical success factors for the continuation of
e-learning initiatives. The Internet and Higher Education, 22, 24–36.
McGill, T., Klobas, J., & Renzi, S. (2011). LMS use and instructor performance: The role of
task-technology fit. International Journal on E-Learning, 10(1), 43-62.
McPherson, M. A., & Nunes, J. M. (2008). Critical issues for e-learning delivery: what may
seem obvious is not always put into practice. Journal of Computer Assisted Learning,
24(5), 433-445.
Miles, M. (1979). Qualitative data as an attractive nuisance: The problem of analysis.
Administrative Science Quarterly, 24, 590-601.
Miles, M. B., & Huberman, A. M. (1984). Qualitative data analysis: A sourcebook of new
methods. Beverly Hills, CA: Sage.
Miles, M.B., & Huberman, A.M. (1994). Qualitative data analysis: An expanded sourcebook
(2nd ed.). Thousand Oaks, CA: Sage.
Mintzberg, H. (1979). An emerging strategy of "direct" research. Administrative Science
Quarterly, 24, 580-589.
Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A
framework for integrating technology in teacher knowledge. Teachers College Record,
108(6), 1017–1054.
Mohammadi, H. (2015). Investigating users‘ perspectives on e-learning: An integration of TAM
and IS success model. Computers in Human Behavior, 45, 359-374.

Molla, A., & Licker, P.S. (2001). E-commerce systems success: An attempt to extend and re-
specify the DeLone and McLean model of IS success. Journal of Electronic Commerce
Research, 2(4), 131–141.
Moon, J., & Kim, Y. (2001). Extending the TAM for a World-Wide-Web context. Information &
Management, 38, 217–230.
Moravec, T., Stepanek, P., & Valenta, P. (2015). The influence of using e-learning tools on the
results of students at the tests. Procedia - Social and Behavioral Sciences, 176, 81-86.
Morgan, D.M., & Bottorff, J.L. (2010). Advancing our craft: Focus group methods and practice.
Qualitative Health Research, 20(5), 579-581.
Morgan, D. (1997). Focus groups. Annual Review of Sociology, 22, 129-152.
Mödritscher, F. (2006). E-learning theories in practice: A comparison of three methods. Graz,
Austria.
Morris, C. (2003) Quantitative Approaches in Business Studies (6th ed.). Harlow: Financial
Times Prentice Hall.
Morris, T. and Wood, S. (1991). Testing the survey method: Continuity and change in British
industrial relations. Work Employment and Society, 5(2), 259–82.
Mothibi, G. (2015). A Meta–Analysis of the Relationship between E-Learning and Students‘
Academic Achievement in Higher Education. Journal of Education and Practice, 6 (9),
6-10.
Mott, J. (2010). Envisioning the post-LMS era: the Open Learning Network. Educause quarterly,
33(1), 1–9.
Mtebe, J. S., & Raisamo, R. (2014). A model for assessing Learning Management System
Success in higher education in sub-Saharan countries. The Electronic Journal of
Information Systems in Developing Countries, 61.
Mueller, B., & Urbach, N. (2013). The why, what, and how of theories in IS research. Thirty
Fourth International Conference on Information Systems, Milan.
Muijs, D. (2004). Doing quantitative research in education. London: Sage Publications Ltd.
Myers, M. D. (2013). Qualitative research in business and management. Sage.
Nagunwa, T. and Lwoga, E. (2013). Developing an e-learning strategy to implement medical
competency based curricula: Experiences from Muhimbili University of Health and

Allied Sciences. International Journal of Education and Development using Information
and Communication Technology, 8 (3), 7–21.
National Research Council Committee on Information Technology Literacy. (1999). Being fluent
with information technology. Washington, DC: National Academy Press.
Duncker, K. (1945). On problem solving. Psychological Monographs, 58(5), 1–110.
Nawaz, A. and Kundi, G. M. (2010). Demographic implications for the user-perceptions of e-
learning in higher education institutions of N-W.F.P, Pakistan. The Electronic Journal on
Information Systems in Developing Countries, 41(5), 1-17.
Ndume, V., Tilya, F. and Twaakyondo, H. (2008). Challenges of adaptive e-learning at higher
learning institutions: A case study in Tanzania. International Journal of Computing and
ICT Research, 2 (1), 47–59.
Nega, M. (2017). The Public-Private Divide in Ethiopian Higher Education: Issues and Policy
Implications. Universal Journal of Educational Research, 5(4), 591-599.
Neuman, W. L. (2006). Social Research Methods: Qualitative and Quantitative Approaches, 6th
Edition, Pearson International Edition, USA.
Neuman, W.L. (2005) Social Research Methods (6th ed.).London: Pearson.
Nihuka, K. and Voogt, J. (2012). Collaborative e-learning course design: Impacts on instructors
in the Open University of Tanzania. Australasian Journal of Educational Technology, 28
(2), 232–248.
Noesgaard, S.S., & Orngreen, R. (2015). The effectiveness of e-learning: An exploratory and
integrative review of the definitions, methodology and factors that promote e-learning
effectiveness. The Electronic Journal of E-learning, 13(4), 278-290.
Nunnally, J.C., & Bernstein, I.H. (1994). Psychometric theory (3rd ed.). New York: McGraw-Hill.
Obeidat, B., Al-Suradi, M., & Tarhini, A. (2016). The Impact of Knowledge Management on
Innovation: An Empirical Study on Jordanian Consultancy Firms. Management Research
Review, 39 (10), 22-42.
Olatunbosun, O., Olusoga, F. A., & Samuel, O. A. (2015). Adoption of eLearning Technology in
Nigeria Tertiary Institution of Learning. British Journal of Applied Science &
Technology, 10(2), 1-15.
Oluniyi, O., and Taiwo, A. T. (2016). Adoption of E-Learning among Instructors in Higher
Institutions in Nigeria: A Case Study of Obafemi Awolowo University, Ile-Ife, Nigeria.

The International Journal of Management Science and Information Technology
(IJMSIT), 20(Apr-Jun 2016), 52-73.
Ong, C.S., Lai, J.Y., & Wang, Y.S. (2003). Factors affecting engineers' acceptance of
asynchronous e-learning systems in high-tech companies. Information & Management,
41, 795-804.
Oye, N. D., Salleh, M., & Iahad, N. A. (2011). Challenges of e-learning in Nigerian university
education based on the experience of developing countries. International Journal of
Managing Information Technology, 3(2), 39-48.
Ozkan, S., & Koseler, R. (2009). Multi-dimensional students' evaluation of e-learning systems in
the higher education context: An empirical investigation. Computers & Education, 53(4),
1285-1296.
Ozkan, S., Koseler, R., & Baykal, N. (2009). Evaluating learning management systems:
Adoption of hexagonal e-learning assessment model in higher education. Transforming
Government: People, Process and Policy, 3(2), 111-130.
Oztekin, A., Kong, Z. J., & Uysal, O. (2010). UseLearn: A novel checklist and usability
evaluation method for eLearning systems by criticality metric analysis. International
Journal of Industrial Ergonomics, 40, 455-469.
Palmer, J. (2002). Web site usability, design and performance metrics. Information Systems
Research, 13(1), 151–167.
Pan, S. L., & Tan, B. (2011). Demystifying case research: A structured–pragmatic–situational
(SPS) approach to conducting case studies. Journal of Information and Organization, 21,
161–176.
Panda, S., & Mishra, S. (2007). E-learning in a mega open university: Faculty attitude, barriers
and motivators. Educational Media International, 44(4), 323-338.
Papp, R. (2000), Critical success factors for distance learning. Paper presented at the Americas
Conference on Information Systems, Long Beach, CA, USA.
Park, S., Zo, H., Ciganek, A.P., & Lim, G.G. (2011).Examining success factors in the adoption
of digital object identifier systems. Electronic Commerce Research and Applications,
10(6), 626-636.

Parker, M. A., & Martin, F. (2010). Using virtual classrooms: Student perceptions of features
and characteristics in an online and a blended course. MERLOT Journal of Online
Learning and Teaching, 6(1), 135-147.
Parkes, M., Stein, S., & Reading, C. (2015). Student preparedness for university e-learning
environments. The Internet and Higher Education, 25, 1 – 10.
Patton, M. Q. (1990). Qualitative Evaluation and Research Methods. Newbury Park, CA: Sage.
Petter, S., DeLone, W., & McLean, E. R. (2013). Information systems success: The quest for
the independent variables. Journal of Management Information Systems, 29(4), 7–62.
Petter, S., & McLean, E.R. (2009). A meta-analytic assessment of the DeLone and McLean IS
success model: An examination of IS success at the individual level. Information &
Management, 46, 159-166.
Petter, S., DeLone, W., McLean, E. (2008). Measuring information systems success: models,
dimension, measures and interrelationships. European Journal of Information Systems,
17, 236–263.
Piccoli, G., Ahmad, R., & Ives, B. (2001), Web-based virtual learning environments: a research
framework and a preliminary assessment of effectiveness in basic IT skill training. MIS
Quarterly, 25(4), 401–426.
Pishva, D., Nishantha, G., & Dang, H. (2010). A survey on how Blackboard is assisting
educational institutions around the world and the future trends. Paper presented at the
12th International Conference on Advanced Communication Technology (ICACT).
Retrieved from http://www.icact.org/.
Pitt, L.F., Watson, R.T., & Kavan, C.B. (1995). Service quality: A measure of information
systems effectiveness. MIS Quarterly, 19(2), 173–187.
Pušnik, M., Šumak, B., & Heričko, M. (2011). A meta-analysis of e-learning technology
acceptance: The role of user types and e-learning technology types. Computers in Human
Behavior, 27, 2067–2077.
Rai, A., Lang, S., & Welker, R. (2002). Assessing the validity of IS success models: an empirical
test and theoretical analysis. Information Systems Research, 13(1), 50–69.
Ramayah, T., Ahmad, N. H., & Lo, M. C. (2010). The role of quality factors in intention to
continue using an e-learning system in Malaysia. Procedia - Social and Behavioral
Sciences, 2(2), 5422-5426.

Ramayah, T., & Lee, J. (2012). System characteristics, satisfaction and e-learning usage: A
structural equation model (SEM). TOJET: The Turkish Online Journal of Educational
Technology, 11(2), 26–28.
Rhema, A. and Miliszewska, I. (2010). Towards e-learning in higher education in Libya. Issues
in Informing Science and Information Technology, 7, 423–437.
Ringle, C., Sarstedt, M., & Straub, D. (2012). A critical look at the use of PLS-SEM in MIS
Quarterly. MIS Quarterly, 36(1), iii-xiv.
Rivard, S., Poirier, G., Raymond, L., & Bergeron, F. (1997). Development of a measure to
assess the quality of user-developed applications. The Data Base for Advances in
Information Systems, 28(3), 44–58.
Robson, C. (2002). Real World Research (2nd ed.). Oxford: Blackwell.
Roca, J.C., Chiu, C.M., & Martínez, F.J. (2006). Understanding e-learning continuance
intention: An extension of the Technology Acceptance Model. International Journal of
Human-Computer Studies, 64(8), 683–696.
Romiszowski, A., & Mason, R. (2004). Computer-mediated communication. In D. H. Jonassen
(Ed.), Handbook of research on educational communications and technology (pp. 397-
432). Mahwah, NJ: Lawrence Erlbaum.
Rosenberg, M. (2001). E-Learning: Strategies for delivering knowledge in the digital age.
Columbus, OH: McGraw-Hill.
Rubin, H. (2004). In to the light. CIO Magazine. http://www.cio.com.au/.
Russel, C. (2008). E-learning adoption in a campus university as a complex adaptive system.
University of Leicester.
Saade, R., & Kira, D. (2009). Computer anxiety in e-learning: The effects of computer self-
efficacy. Journal of Information Technology Education, 8.
Sabau, I. (2005). Effective asynchronous communication online. Available at http://breeze.ucalgary.ca
Saleh, H.K. (2008). Computer self-efficacy of university faculty in Lebanon. Educational
Technology Research and Development, 56(2), 229-240.
Salter, S. M., Karia, A., Sanfilippo, F. M., & Clifford, R. M. (2014). Effectiveness of E-learning
in pharmacy education. American journal of pharmaceutical education, 74(8).
Samarawickrema, G., & Stacey, E. (2007). Adopting web-based learning and teaching: A case
study in higher education. Distance Education, 28 (3), 313-333.

Sangra, A., Vlachopoulos, D., & Cabrera, N. (2012). Building an inclusive definition of e-
learning: An approach to the conceptual framework. The International Review of
Research in Open and Distance Learning, 13(2), 145-159.
Saunders, M., Lewis, P., & Thornhill, A. (2009). Research Methods for Business Students
(5th ed.). Prentice Hall.
Schwarzer, R. (1992). Self-efficacy: Thought control of action. London: Hemisphere.
Sclater, N. (2008). Web 2.0, personal learning environments, and the future of learning
management systems.
Seddon, P. (1997). A re-specification and extension of the DeLone and McLean model of IS
success. Information Systems Research, 8(3), 240–253.
Seddon, P., & Kiew, M. (1996). A partial test and development of DeLone and McLean‘s model
of IS success. Australian Journal of Information Systems, 4(1), 90–109.
Seddon, P., & Yip, S. (1992). An empirical evaluation of user information satisfaction (UIS)
measures for use with general ledger accounting software. Journal of Information
Systems, 6(1), 75–98.
Seddon, P., Graeser, V., & Willcocks, L. (2002). Measuring organizational IS effectiveness: An
overview and update of senior management perspectives. The Data Base for Advances in
Information Systems, 33(2), 11–28.
Seddon, P., Staples, S., Patnayakuni, R., & Bowtell, M. (1999). Dimensions of information
systems success. Communications of the Association for Information Systems, 2, 2–39.
Sedera, D., Gable, G., & Chan, T. (2004). A factor and structural equation analysis of the
enterprise systems success measurement model. In Proceedings of the Twenty-Fifth
International Conference on Information Systems (Appelgate, L., Galliers, R., & DeGross,
J.I., Eds.), p. 449, Association for Information Systems, Washington, DC, USA.
Sedera, D., Tan, F., & Dey, S. (2007). Identifying and evaluating the importance of multiple
stakeholders perspective in measuring ES-success. Paper presented at the European
Conference on Information Systems, 12-14 June, Gothenburg Sweden.
Sekaran, U. (2006). Research Methods for Business: A skill building approach (4th ed.). New
Delhi: John Wiley & Sons.
Selim, H.M. (2007). Critical success factors for e-learning acceptance: Confirmatory factor
models. Computers & Education, 49(2), 396–413.

Shanks, G. (2002). Guidelines for conducting positivist case study research in information
systems. AJIS, special issue, 76-85.
Shaw, N.C., DeLone, W.H., & Niederman, F. (2002). Sources of dissatisfaction in end-user
support: An empirical study. The Data Base for Advances in Information Systems, 33(2),
41–56.
Shee, D. Y., and Wang Y. S. (2008). Multi-criteria evaluation of the web-based e-learning
system: A methodology based on learner satisfaction and its applications. Computers and
Education, 50, 894-905.
Shih, H.P. (2004). Extended technology acceptance model of internet utilization behavior.
Information & Management, 41(6), 719–729.
Shmueli, G., Ray, D., Manuel, J., Estrada, V., & Chatla, S.B. (2016). The elephant in the room:
evaluating the predictive performance of PLS models. Journal of Business Research, 69
(10), 4552-4564.
Shulman, L. (1986).Those who understand: Knowledge growth in teaching. Educational
Researcher, 15(2), 4–14.
Shulman, L. S. (1987). Knowledge and teaching: Foundations of the new reform. Harvard
Educational Review, 57(1), 1–22.
Singh H. (2003). Building Effective Blended Learning Programs. Issue of Educational
Technology, 43 (6), 51-54.
Singh, K. (2007). Quantitative Social Research Methods. SAGE Publications.
Siritongthaworn, S., and Krairit, D. (2006). The study of e-learning technology implementation:
A preliminary investigation of universities in Thailand. Education and Information
Technologies, 11 (2), 137–160.
Skinner, B.F. (1974). About behaviorism. Knopf: New York.
Ssekakubo, G., Suleman, H. & Marsden, G. (2011). Issues of adoption: Have e-learning
management systems fulfilled their potential in developing countries? In Proceedings of
the South African Institute of Computer Scientists and Information Technologists
Conference on Knowledge, Innovation and Leadership in a Diverse, Multidisciplinary
Environment (pp. 231–238). Cape Town, South Africa.
Ssemugabi, S. (2006). Usability evaluation of web-based e-learning application: A study of
two evaluation methods. MSc dissertation, University of South Africa.

Stangor, C. (2011). Research methods for the behavioural sciences (4th ed.). USA: Wadsworth
Pub Co.
Straub, D., Boudreau, M.C., & Gefen, D. (2004). Validation guidelines for IS positivist research.
Communications of the AIS, 13, 380-427.
Straub, D.W., Limayem, M., & Karahanna, E.E. (1995). Measuring system usage: Implications
for IS theory testing. Management Science, 41(8), 1328–1342.
Straub, D., Loch, K., & Hill, C. (2001). Transfer of information technology to the Arab World:
A test of cultural influence modeling. Journal of Global Information Management, 9(4),
6–28.
Strauss, A., & Corbin, J. (2008). Basics of Qualitative Research (3rd ed.). Thousand Oaks, CA:
Sage.
Strauss, A. (1987). Qualitative analysis for social scientists. Cambridge, England: Cambridge
University Press.
Subramanian, G.H. (1994). A replication of perceived usefulness and perceived ease-of-use
measurement. Decision Sciences, 25(5/6), 863–874.
Suddaby, R. (2006). From the editors: What grounded theory is not. Academy of Management
Journal, 49(4), 633–42.
Suh, K., Kim, S., & Lee, J. (1994). End-user's disconfirmed expectations and the success of
information systems. Information Resources Management Journal, 7(4), 30–39.
Sumak, B., Hericko, M. and Pušnik, M. (2011). Factors affecting the adoption of e-learning: A
meta-analysis of existing knowledge. The Third International Conference on Mobile,
Hybrid, and On-line Learning, 31–35.
Sun, P.C., Tsai, R.J., Finger, G., Chen, Y.Y., & Yeh, D. (2008). What drives a successful e-
Learning? An empirical investigation of the critical factors influencing learner
satisfaction. Computers & Education, 50(4), 1183-1202.
Szajna, B. (1993). Empirical evaluation of the revised technology acceptance model.
Management Science, 42(1), 85–92.
Tagoe, M. (2012). Students' perceptions on incorporating e-learning into teaching and learning
at the University of Ghana. International Journal of Education and Development using
Information and Communication Technology, 8(1), 91–103.

Tai, D., Zhang, R., Chang, S., Chen, C., & Chen, J. (2012). A meta-analytic path analysis of e-
learning acceptance model. World Academy of Science, Engineering and Technology, 65,
104–107.
Tajul, S.B. (2013). Living with Enterprise Resource Planning (ERP): An investigation of end
user problems and coping mechanisms. PhD thesis, RMIT University.
Tarhini, A., Elyas, T., Akour, M. A., & Al-Salti, Z. (2016). Technology, Demographic
Characteristics and E-Learning Acceptance: A Conceptual Model Based on Extended
Technology Acceptance Model. Higher Education Studies, 6 (3), 72-89.
Tarus, J. (2011). Adoption of e-learning to support teaching and learning in Moi University.
Thesis presented in partial fulfillment of the requirements for the degree of Master of
Philosophy (Information Technology), Moi University.
Taylor, S., & Todd, P. (1995). Decomposition and crossover effects in the theory of planned
behavior: A study of consumer adoption intentions. International Journal of Research in
Marketing, 12, 137–155.
Thatcher, J. B., & Perrewé, P. L. (2002). An empirical examination of individual traits as
antecedents to computer anxiety and computer self-efficacy. MIS Quarterly, 26(4),
381-396.
Thompson, A.A., & Strickland, A.J. (2001). Crafting and Executing Strategy: Text and
Readings. New York: McGraw-Hill.
Thurmond, V. A., Wambach, K., & Connors, H. R. (2002). Evaluation of student satisfaction:
Determining the impact of a web-based environment by controlling for student
characteristics. The American Journal of Distance Education, 16(3), 169–189.
Tibebu, D., Bandyopadhyay, T., & Negash, S. (2009). ICT integration efforts in higher
education in developing economies: The case of Addis Ababa University, Ethiopia.
International Journal of Information and Communication Technology Education, 5, 34-58.
Torkzadeh, G., & Doll, W.J. (1999). The development of a tool for measuring the perceived
impact of information technology on work. Omega, 27(3), 327–339.
Torrington, D., Hall, L., Haylor, I. and Myers, J. (1991), Employee Resourcing, Institute of
Personnel Management, London.
Tossy, T. (2017). Cultivating recognition: A classic grounded theory of e-learning providers
working in East Africa (PhD thesis). University of Cape Town (UCT).
Tossy, T., & Brown, I. (2017). Collaborative partnerships: A project-based legitimizing strategy
amongst East African e-learning providers. IGI Global Publishing.

Tossy, T., Sife, A.S., & Msanjila, S.S. (2017). An integrated model for measuring the impacts
of e-learning on students' achievement in developing countries. International Journal of
Education and Development using Information and Communication Technology
(IJEDICT), 13(3), 109-127.
Tyechia, V. P. (2014). An evaluation of the effectiveness of e-learning, mobile learning, and
instructor-led training in organizational training and development. The Journal of
Human Resource and Adult Learning, 10(2), 1-13.
UNDP. (2006). Online distance learning initiative fact sheet. Accessed at
http://www.reliefweb.int/library/documents.
Unwin, T., Kleessen, B., Hollow, D., James, B., Oloo, L.M. and Alwala, J. (2010). Digital
learning management systems in Africa: Myths and realities. Open Learning: The
Journal of Open Distance and e-Learning, 25 (1), 5–23.
Unwin, T. (2008). Survey of eLearning in Africa: Based on a questionnaire survey of people on
the eLearning Africa database in 2007.
Uppal, M.A., Ali, S., & Gulliver, S.R. (2017). Factors determining e-learning service quality.
British Journal of Educational Technology.
Urbach, N., & Ahlemann, F. (2010). Structural equation modeling in information systems
research using partial least squares. Journal of Information Technology Theory and
Application, 11(2), 5-40.
Urbach, N. and Müller, B. (2012). The updated DeLone and McLean model of information
systems success, in Dwivedi, Y.K., Wade, M.R. and Schneberger, S.L. (Eds.),
Information systems theory: Explaining and predicting our digital society, Springer, New
York, Vol. 28, pp. 1–18.
Van Dyke, T.P., Kappelman, L.A., & Prybutok, V.R. (1997). Measuring information systems
service quality: Concerns on the use of the SERVQUAL questionnaire. MIS Quarterly,
21(2), 195–208.
Van Maanen, J. (1988) Tales of the field: On writing ethnography. Chicago: University of
Chicago Press.
Venkatesh, V. (2001). Determinants of perceived ease of use: integrating control, intrinsic
motivation, and emotion into the technology acceptance model. Information Systems
Research, 11(4), 342–365.

Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology acceptance
model: four longitudinal field studies. Management Science, 46(2), 186–204.
Venkatesh, V., and Morris, M.G. (2000). Why don‘t men ever stop to ask for directions? Gender,
social influence, and their role in technology acceptance and usage behavior. MIS
Quarterly, 24(1), 115–149.
Venkatesh, V., Morris, M., Davis, G., & Davis, F. (2003). User acceptance of information
technology: Toward a unified view. MIS Quarterly, 27(3), 425–478.
Venter, P., van Rensburg, M.J., & Davis, A. (2012). Drivers of learning management system
use in a South African open and distance learning institution. Australasian Journal of
Educational Technology, 28(2), 183–198.
Vlahos, G.E., and Ferratt, T.W. (1995). Information technology use by managers in Greece to
support decision making: amount, perceived value, and satisfaction. Information &
Management, 29(6), 305–315.
Vlahos, G.E, Ferratt, T.W., and Knoepfle, G. (2004). The use of computer-based information
systems by German managers to support decision making. Information & Management,
41(6), 763–779.
Wagner, N., Hassanein, K., & Head, M. (2008). Who is responsible for E-Learning Success in
Higher Education? A Stakeholders' Analysis. Educational Technology & Society, 11 (3),
26-36.
Walsham, G. (2006). Doing interpretive research. European Journal of Information Systems,
15(3), 320-330.
Wambui, L., & Black, E. (2008). Factors that impede adoption of e-learning in developing
countries: Advice for moving beyond challenges with integrating e-learning platforms at
Polytechnic of Namibia. JI, 1–7.
Wang, H. and Chiu, Y. (2011). Assessing e-learning 2.0 system success. Computers &
Education, 57, 1790–1800.
Wang, Y.S., & Liao, Y.W. (2008). Assessing e-Government systems success: A validation of
the DeLone & McLean model of information systems success. Government Information
Quarterly, 25(4), 717-733.

Wang, Y.-S., Wang, H.-Y., & Shee, D. Y. (2007). Measuring e-learning systems success in an
organizational context: Scale development and validation. Computers in Human
Behavior, 23(4), 1792-1808.
Webometrics. (2016). Retrieved from http://www.webometrics.info/en/Africa/Ethiopia.
Webster, J., and Martocchio, J. J. (1992). Microcomputer Playfulness: Development of a
Measure with Workplace Implications. MIS Quarterly, 16(2), 201-226.
Weill, P., and Vitale, M. (1999). Assessing the health of an information systems portfolio: an
example from process engineering. MIS Quarterly, 23(4), 601–624.
Wilkinson, S. (2004), Focus group research. In: Silverman, D. (Ed.), Qualitative Research.
Theory, Method and Practice, second ed. Sage Publications, Thousand Oaks, CA.
Williams, C.S. and Saunders, M.N.K. (2006). Developing the Service Template: From
measurement to agendas for improvement. Service Industries Journal, 26 (5), 1–15.
Williamson, K. (2000). Research methods for students and professionals: Information
management and systems. New South Wales: Centre for Information Studies, Charles
Sturt University.
Wilson, B. G. (1997). Reflections on constructivism and instructional design. In C. R. Dills & A.
A. Romiszowski (Eds.), Instructional Development Paradigms (pp. 63–80).
Winter, S.J., Chudoba, K.M., & Gutek, B.A. (1998). Attitudes toward computers: When do they
predict computer use? Information & Management, 34(5), 275–284.
Wixom, B.H., and Todd, P.A. (2005). A theoretical integration of user satisfaction and
technology acceptance. Information Systems Research, 16(1), 85–102.
Wolfinbarger, M., Gilly, M. C., & Schau, H. J. (2005). Keeping up with the times: Innovation
and usage of the Internet among later adopters. CRITO Consortium Project.
World Bank Report. (2018). A World Bank Country Study. Washington, D.C., USA.
Wrzesien, M., & Raya, M. A. (2010). Learning in serious virtual worlds: Evaluation of learning
effectiveness and appeal to students in the E-Junior project. Computers & Education, 55,
178-187.
Wu, J.H., Tennyson, R.D., & Hsia, T.L. (2010). A study of student satisfaction in a blended e-
learning system environment. Computers & Education, 55(1), 155-164.

Wu, J.H., & Wang, Y.M. (2006). Measuring KMS success: A re-specification of the DeLone
and McLean model. Information & Management, 43(6), 728–739.
Wu, B., & Zhang, C. (2014). Empirical study on continuance intentions towards E-Learning 2.0
systems. Behaviour & Information Technology, 33(10), 1027-1038.
Yakubu, M. N., & Dasuki, S. I. (2018). Assessing eLearning systems success in Nigeria: An
application of the DeLone and McLean information systems success model. Journal of
Information Technology Education: Research, 17, 182-202.
Yakubu, M. N., & Dasuki, S. I. (2018). Measuring e-learning success in developing
countries: Applying the updated DeLone and McLean model. Journal of Information
Technology Education: Research, 157-172.
Yang, H.D., & Yoo, Y. (2004). It's all about attitude: Revisiting the technology acceptance
model. Decision Support Systems, 38(1), 19–31.
Yilmaz, R. (2017). Exploring the role of e-learning readiness on student satisfaction and
motivation in flipped classroom. Computers in Human Behavior, 70, 251-260.
Yin, R.K. (2011). Qualitative Research from Start to Finish. The Guilford Press: New York.
Yin, R.K. (2009). Case Study Research: Design and Methods (4th ed.). Thousand Oaks, CA:
Sage Publications.
Yin, R. K. (2003). Case Study Research: Design and Methods (3rd ed.). Newbury Park, CA:
Sage Publications.
Yin, R. K. (1994). Case Study Research: Design and Methods (2nd ed.). Thousand Oaks, CA:
Sage Publications.
Yoon, Y., and Guimaraes, T. (1995). Assessing expert systems impact on users‘ jobs. Journal of
Management Information Systems, 12(1), 225–249.
Yuthas, K., and Young, S.T. (1998). Material matters: assessing the effectiveness of materials
management IS. Information & Management, 33(3), 115–124.
Zanjani, N., Nykvist, S. S., & Geva, S. (2012). Do students and lecturers actively use
collaboration tools in learning management systems? In G. Biswas et al. (Eds.),
Proceedings of the 20th International Conference on Computers in Education (ICCE
2012), Singapore (pp. 698-705).

Zeithaml, V.A., Parasuraman, A., & Berry, L.L. (1990). Delivering Quality Service: Balancing
Customer Perceptions and Expectations. New York: Free Press.
Zhang, D., & Nunamaker, J. F. (2003). Powering e-learning in the new millennium: An overview
of e-learning and enabling technology. Information Systems Frontiers, 5(2), 207-218.
Zhang, D., Zhao, J. L., Zhou, L., & Nunamaker, J. F. (2004). Can e-learning replace classroom
learning? Communications of the ACM, 47(5), 75-79.
Zhu, K., and Kraemer, K.L. (2005). Post-adoption variations in usage and value of e-business by
organizations: cross-country evidence from the retail industry. Information Systems
Research, 16(1), 61–84.
Zikmund, W.G. (2000). Business Research Methods (6th ed.). Fort Worth, TX: Dryden Press.

Appendices

Appendix 1. In-Depth Personal Interview Protocol for ICT Directors

1. Is an e-learning system in use in the teaching and learning process at your University? If yes,
what kind of Learning Management System is in use?

2. What is the current percentage of courses at your University offered through e-learning (fully
online or blended)?

3. What factors determine the usefulness of the e-learning systems to students and instructors?

4. What factors determine e-learning system users' (all e-learning stakeholders') satisfaction?

5. What factors drive the continual use of e-learning systems in the University?

6. Overall, what factors determine the success of e-learning systems?

Appendix 2a. Focus Group Discussion Questions: For Instructors' Focus Group

1. What challenges/factors do instructors face while using e-learning systems for the teaching
and learning process, and how can they be overcome (in terms of success factors)?

2. What factors determine the continual usage of e-learning systems by instructors?

3. What factors determine e-learning system users' (instructors') satisfaction?

4. Overall, what are the factors that determine the success of e-learning systems?

260| P a g e
Appendix 2b. Focus Group Discussion Questions: For Students' Focus Group

1. What challenges do students face with e-learning systems in the teaching and learning process, and how can they be overcome (in terms of success factors)?
2. What factors drive students to sustainably use e-learning systems?
3. What factors determine students' satisfaction with e-learning systems?
4. Overall, what are the factors that determine the success of e-learning systems in the teaching and learning process?
Appendix 2c. Focus Group Discussion Questions: For E-learning Experts' Focus Group

1. What challenges do e-learning systems face in supporting the teaching and learning process, and how can they be overcome (in terms of success factors)?
2. What factors drive students and instructors to sustainably use e-learning systems?
3. What factors determine e-learning systems users' (instructors' and students') satisfaction?
4. Overall, what are the factors that determine the success of e-learning systems?
Appendix 3. Document Checklist

1. Is there an e-learning policy?
2. Is there a computer and e-learning training schedule?
3. Is there an e-learning quality assurance/evaluation document?
4. Is there an organizational structure with regard to the e-learning unit?
5. Are there annual e-learning reports?
Appendix 4. Observation Checklist

1. Are good internet access facilities available for all students and instructors?
2. Is there a dedicated e-learning office?
3. Are all required e-learning infrastructures (dedicated e-learning computer labs, studios to record lecture materials, and so on) available and ready to be used?
4. Do members of the focus groups (e-learning experts, instructors and students) participate actively?
Appendix 5: Details of Qualitative Study Participants (Focus Group Participants and In-Depth Personal Interviews)

| Data Collection Method | Focus Group Label* | PESC | AMU | ASTU | ECSU | HRU | AAU | MU | BDU | JU | Total |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1. Students' Focus Groups (SFG) | SFG-I | 7 | 6 | 7 | 5 | 5 | - | 5 | - | - | 35 |
|  | SFG-II | 5 | 5 | 5 | 4 | 7 | - | 4 | - | - | 30 |
|  | SFG-III | 6 | - | 6 | 5 | 6 | - | - | - | - | 23 |
|  | SFG-IV | 8 | - | 4 | 6 | - | - | - | - | - | 18 |
| 2. Instructors' Focus Groups (ISFG) | ISFG-I | 6 | 5 | 5 | 3 | 5 | 5 | 5 | - | 3 | 37 |
|  | ISFG-II | 5 | 4 | 5 | 4 | 4 | - | 4 | - | - | 26 |
|  | ISFG-III | 7 | 3 | 7 | 3 | 4 | - | 3 | - | - | 27 |
| 3. E-learning Experts' Focus Groups (EEFG) | EEFG-I | 5 | 4 | 4 | 5 | 3 | 4 | 4 | 4 | 3 | 36 |
|  | EEFG-II | 3 | 3 | 3 | 3 | 4 | 4 | 3 | - | - | 23 |
|  | EEFG-III | 6 | 3 | - | - | - | - | - | - | - | 9 |
| 4. In-depth personal interviews with ICT directors | - | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 9 |
| Total number of sampling units (all focus group participants and in-depth personal interviews) |  |  |  |  |  |  |  |  |  |  | 273 |

Keys: *Focus Group Labels identify, with consecutive Roman numerals, the focus group discussions conducted at each institution: SFG-I is the first students' focus group discussion, SFG-II the second, and SFG-III the third. Likewise, ISFG and EEFG refer to instructors' and e-learning experts' focus groups, respectively, with the attached Roman numerals labelling each discussion.
Appendix 6a: Codes Assigned to Each of the Focus Group Discussion Participants

| Institution | Focus Group Label | Participant Codes |
|---|---|---|
| AAU | AAU-ISFG-I | AAU-ISFG-I-1 to AAU-ISFG-I-5 |
| AAU | AAU-EEFG-I | AAU-EEFG-I-1 to AAU-EEFG-I-4 |
| AAU | AAU-EEFG-II | AAU-EEFG-II-1 to AAU-EEFG-II-4 |
| AMU | AMU-SFG-I | AMU-SFG-I-1 to AMU-SFG-I-6 |
| AMU | AMU-SFG-II | AMU-SFG-II-1 to AMU-SFG-II-5 |
| AMU | AMU-ISFG-I | AMU-ISFG-I-1 to AMU-ISFG-I-5 |
| AMU | AMU-ISFG-II | AMU-ISFG-II-1 to AMU-ISFG-II-4 |
| AMU | AMU-ISFG-III | AMU-ISFG-III-1 to AMU-ISFG-III-3 |
| AMU | AMU-EEFG-I | AMU-EEFG-I-1 to AMU-EEFG-I-4 |
| AMU | AMU-EEFG-II | AMU-EEFG-II-1 to AMU-EEFG-II-3 |
| AMU | AMU-EEFG-III | AMU-EEFG-III-1 to AMU-EEFG-III-3 |
| ASTU | ASTU-SFG-I | ASTU-SFG-I-1 to ASTU-SFG-I-7 |
| ASTU | ASTU-SFG-II | ASTU-SFG-II-1 to ASTU-SFG-II-5 |
| ASTU | ASTU-SFG-III | ASTU-SFG-III-1 to ASTU-SFG-III-6 |
| ASTU | ASTU-SFG-IV | ASTU-SFG-IV-1 to ASTU-SFG-IV-4 |
| ASTU | ASTU-ISFG-I | ASTU-ISFG-I-1 to ASTU-ISFG-I-5 |
| ASTU | ASTU-ISFG-II | ASTU-ISFG-II-1 to ASTU-ISFG-II-5 |
| ASTU | ASTU-ISFG-III | ASTU-ISFG-III-1 to ASTU-ISFG-III-7 |
| ASTU | ASTU-EEFG-I | ASTU-EEFG-I-1 to ASTU-EEFG-I-4 |
| ASTU | ASTU-EEFG-II | ASTU-EEFG-II-1 to ASTU-EEFG-II-3 |
| BDU | BDU-EEFG-I | BDU-EEFG-I-1 to BDU-EEFG-I-4 |
| ECSU | ECSU-SFG-I | ECSU-SFG-I-1 to ECSU-SFG-I-5 |
| ECSU | ECSU-SFG-II | ECSU-SFG-II-1 to ECSU-SFG-II-4 |
| ECSU | ECSU-SFG-III | ECSU-SFG-III-1 to ECSU-SFG-III-5 |
| ECSU | ECSU-SFG-IV | ECSU-SFG-IV-1 to ECSU-SFG-IV-6 |
| ECSU | ECSU-ISFG-I | ECSU-ISFG-I-1 to ECSU-ISFG-I-3 |
| ECSU | ECSU-ISFG-II | ECSU-ISFG-II-1 to ECSU-ISFG-II-4 |
| ECSU | ECSU-ISFG-III | ECSU-ISFG-III-1 to ECSU-ISFG-III-3 |
| ECSU | ECSU-EEFG-I | ECSU-EEFG-I-1 to ECSU-EEFG-I-5 |
| ECSU | ECSU-EEFG-II | ECSU-EEFG-II-1 to ECSU-EEFG-II-3 |
| HRU | HRU-SFG-I | HRU-SFG-I-1 to HRU-SFG-I-5 |
| HRU | HRU-SFG-II | HRU-SFG-II-1 to HRU-SFG-II-7 |
| HRU | HRU-SFG-III | HRU-SFG-III-1 to HRU-SFG-III-6 |
| HRU | HRU-ISFG-I | HRU-ISFG-I-1 to HRU-ISFG-I-5 |
| HRU | HRU-ISFG-II | HRU-ISFG-II-1 to HRU-ISFG-II-4 |
| HRU | HRU-ISFG-III | HRU-ISFG-III-1 to HRU-ISFG-III-4 |
| HRU | HRU-EEFG-I | HRU-EEFG-I-1 to HRU-EEFG-I-3 |
| HRU | HRU-EEFG-II | HRU-EEFG-II-1 to HRU-EEFG-II-4 |
| JU | JU-ISFG-I | JU-ISFG-I-1 to JU-ISFG-I-3 |
| JU | JU-EEFG-I | JU-EEFG-I-1 to JU-EEFG-I-3 |
| MU | MU-SFG-I | MU-SFG-I-1 to MU-SFG-I-5 |
| MU | MU-SFG-II | MU-SFG-II-1 to MU-SFG-II-4 |
| MU | MU-ISFG-I | MU-ISFG-I-1 to MU-ISFG-I-5 |
| MU | MU-ISFG-II | MU-ISFG-II-1 to MU-ISFG-II-4 |
| MU | MU-ISFG-III | MU-ISFG-III-1 to MU-ISFG-III-3 |
| MU | MU-EEFG-I | MU-EEFG-I-1 to MU-EEFG-I-4 |
| MU | MU-EEFG-II | MU-EEFG-II-1 to MU-EEFG-II-3 |
| PESC | PESC-SFG-I | PESC-SFG-I-1 to PESC-SFG-I-7 |
| PESC | PESC-SFG-II | PESC-SFG-II-1 to PESC-SFG-II-5 |
| PESC | PESC-SFG-III | PESC-SFG-III-1 to PESC-SFG-III-6 |
| PESC | PESC-SFG-IV | PESC-SFG-IV-1 to PESC-SFG-IV-8 |
| PESC | PESC-ISFG-I | PESC-ISFG-I-1 to PESC-ISFG-I-6 |
| PESC | PESC-ISFG-II | PESC-ISFG-II-1 to PESC-ISFG-II-5 |
| PESC | PESC-ISFG-III | PESC-ISFG-III-1 to PESC-ISFG-III-7 |
| PESC | PESC-EEFG-I | PESC-EEFG-I-1 to PESC-EEFG-I-5 |
| PESC | PESC-EEFG-II | PESC-EEFG-II-1 to PESC-EEFG-II-3 |
| PESC | PESC-EEFG-III | PESC-EEFG-III-1 to PESC-EEFG-III-6 |

*Each participant code combines the institution the participant belongs to, the type of focus group, the Roman-numeral label of the focus group session, and the number identifying the specific participant within that session. Participant numbers run consecutively, so "AAU-ISFG-I-1 to AAU-ISFG-I-5" denotes the five codes AAU-ISFG-I-1 through AAU-ISFG-I-5.
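Because the convention is fully systematic, participant codes can be composed or decomposed mechanically. The snippet below is a minimal illustrative sketch of the convention only; the function names are hypothetical and not part of the study's instruments.

```python
# Illustrative sketch of the participant-code convention described in the key
# above: <Institution>-<FocusGroupType>-<SessionRomanNumeral>-<ParticipantNo>.
# Function names are hypothetical, introduced here only for illustration.

def make_participant_code(institution, group_type, session, participant_no):
    """Compose a code, e.g. ('AAU', 'ISFG', 'I', 3) -> 'AAU-ISFG-I-3'."""
    return f"{institution}-{group_type}-{session}-{participant_no}"

def parse_participant_code(code):
    """Decompose a code back into its four parts."""
    institution, group_type, session, participant_no = code.split("-")
    return {"institution": institution, "focus_group_type": group_type,
            "session_label": session, "participant_number": int(participant_no)}

print(make_participant_code("AAU", "ISFG", "I", 3))  # AAU-ISFG-I-3
print(parse_participant_code("HRU-SFG-II-5"))
```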
Appendix 6b: Codes Assigned to In-Depth Personal Interviews Made with ICT Directors, and Archival Evidences (Documents)

| Institution | In-depth personal interview code (*ICTD) | Annual report codes (*ADC) | E-learning unit report codes (*EDC) |
|---|---|---|---|
| AMU | AMU-ICTD | AMU-ADC1, AMU-ADC2, AMU-ADC3 | AMU-EDC1, AMU-EDC2, AMU-EDC3 |
| ASTU | ASTU-ICTD | ASTU-ADC1, ASTU-ADC2, ASTU-ADC3 | ASTU-EDC1, ASTU-EDC2, ASTU-EDC3 |
| ECSU | ECSU-ICTD | ECSU-ADC1, ECSU-ADC2, ECSU-ADC3 | ECSU-EDC1, ECSU-EDC2, ECSU-EDC3 |
| MU | MU-ICTD | MU-ADC1, MU-ADC2, MU-ADC3 | MU-EDC1, MU-EDC2, MU-EDC3 |
| HRU | HRU-ICTD | HRU-ADC1, HRU-ADC2, HRU-ADC3 | HRU-EDC1, HRU-EDC2, HRU-EDC3 |
| PESC | PESC-ICTD | PESC-ADC1, PESC-ADC2, PESC-ADC3 | PESC-EDC1, PESC-EDC2, PESC-EDC3 |
| AAU | AAU-ICTD | AAU-ADC1, AAU-ADC2, AAU-ADC3 | AAU-EDC1, AAU-EDC2, AAU-EDC3 |
| JU | JU-ICTD | JU-ADC1, JU-ADC2, JU-ADC3 | JU-EDC1, JU-EDC2, JU-EDC3 |
| BDU | BDU-ICTD | BDU-ADC1, BDU-ADC2, BDU-ADC3 | BDU-EDC1, BDU-EDC2, BDU-EDC3 |

*ICTD: Information Communication Technology Director.
*ADC: Annual Document; the number attached to each ADC distinguishes the different annual documents available.
*EDC: E-learning Document; the number attached to each EDC distinguishes the different documents or reports produced by the e-learning unit of each institution under study.
Appendix 7a: The Model Constructs, Items, and Item Codes: For Students' Questionnaire

Dimension 1: Learners' E-learning Factor (LEF)

| *Codes | Items | Construct Measures/Aspects | Sources |
|---|---|---|---|
| LEF1 | I believe that working with computers is easy | Computer self-efficacy | Bhuasiri et al. (2012), Lee (2010), Sun et al. (2007), Arbaugh (2002), Arbaugh & Duray (2002), Hong (2002), Piccoli et al. (2001), and self-development from qualitative empirical data findings |
| LEF2 | I believe that working with the internet is easy | Computer and internet self-efficacy | Bhuasiri et al. (2012), Sun et al. (2007), Arbaugh (2002), Arbaugh & Duray (2002), Hong (2002), Piccoli et al. (2001), Joo et al. (2000), and self-development from qualitative empirical data findings |
| LEF3 | I believe that through e-learning I can learn anytime, anywhere | Attitude towards e-learning | Ozkan & Koseler (2009), Arbaugh (2002), Arbaugh & Duray (2002), Hong (2002), Thurmond et al. (2002), Piccoli et al. (2001), and self-development from qualitative empirical data findings |
| LEF4 | I believe that e-learning is suitable for my learning style | Attitude towards e-learning | Bhuasiri et al. (2012), Arbaugh (2002), Arbaugh & Duray (2002), Hong (2002), Piccoli et al. (2001), and self-development from qualitative empirical data findings |
| LEF5 | I believe that e-learning makes communication easier with my instructor and other classmates | Attitude towards e-learning | Arbaugh (2002), Arbaugh & Duray (2002), Hong (2002), Piccoli et al. (2001), and self-development from qualitative empirical data findings |
| LEF6 | Overall, I believe that use of e-learning for my learning provides an interesting experience | Attitude towards e-learning | Arbaugh (2002), Arbaugh & Duray (2002), Hong (2002), Piccoli et al. (2001), and self-development from qualitative empirical data findings |

*Codes: the code assigned to each construct item; "LEF" stands for Learners' E-learning Factor, and the attached number identifies the item (e.g., LEF1 is the first item of the construct).

Dimension 2: Instructor's E-learning Factor (IEF)

| *Codes | Items | Construct Measures/Aspects | Sources |
|---|---|---|---|
| IEF1 | The instructor created an online environment conducive and enjoyable for learning via e-learning | Creating an e-learning environment | Bhuasiri et al. (2012), Lee (2010), Sun et al. (2007), Selim (2007), Khan (2005), and self-development from qualitative empirical data findings |
| IEF2 | The instructor responds promptly to questions and concerns via the University's e-learning systems | Timely response | Lwoga (2014), Khan (2005), Selim (2007), Thurmond et al. (2002), and self-development from qualitative empirical data findings |
| IEF3 | I find it easy to communicate with the instructor via e-learning | Computer self-efficacy | Bhuasiri et al. (2012), Arbaugh (2000), Hong (2002), and self-development from qualitative empirical data findings |
| IEF4 | The instructor encourages me to interact with other students by using e-learning interactive tools | Creating an e-learning environment | Cheng (2012), Selim (2007), Khan (2005), Hong (2002), and self-development from qualitative empirical data findings |
| IEF5 | Overall, the instructor encourages me to learn via e-learning | Creating an e-learning environment | Lwoga (2014), McGill et al. (2011), Khan (2005), and self-development from qualitative empirical data findings |

*Codes: "IEF" stands for Instructor's E-learning Factor, and the attached number identifies the item (e.g., IEF1 is the first item of the construct).
Dimension 3: E-learning Content Quality (EQ)

| *Codes | Items | Construct Measures/Aspects | Sources |
|---|---|---|---|
| EQ1 | Lecture notes are supported by multimedia tools (flash animations, simulations, videos, audios, etc.) | Course quality | Lwoga (2014), Bhuasiri et al. (2012), Lee (2010), Ozkan & Koseler (2009), Sun et al. (2007), Arbaugh (2000, 2002), and self-development from qualitative empirical data findings |
| EQ2 | Abstract concepts (principles, formulas, rules, etc.) are clearly illustrated and easy to understand | Course quality | Bhuasiri et al. (2012), Lee (2010), Ozkan & Koseler (2009), Sun et al. (2007), Panda and Mishra (2007), and self-development from qualitative empirical data findings |
| EQ3 | The course contents are well-structured and complete to meet the course objectives | Course quality | Lwoga (2014), Lee (2010), Sun et al. (2007), Panda and Mishra (2007), Lin (2008), Khan (2005), and self-development from qualitative empirical data findings |
| EQ4 | I found the course content on the e-learning system to be useful and relevant | Relevant content | Mohammadi (2015), Bhuasiri et al. (2012), Lee (2010), Sun et al. (2007), and self-development from qualitative empirical data findings |
| EQ5 | Supporting materials, web-links and illustrations are up-to-date | Up-to-date | Cheng (2012), Bhuasiri et al. (2012), Lee (2010), Ozkan & Koseler (2009), Sun et al. (2007), Panda and Mishra (2007), and self-development from qualitative empirical data findings |

*Codes: "EQ" stands for E-learning Content Quality, and the attached number identifies the item (e.g., EQ1 is the first item of the construct).

Dimension 4: E-learning Systems Quality (ESQ)

| *Codes | Items | Construct Measures/Aspects | Sources |
|---|---|---|---|
| ESQ1 | The interface of the e-learning system is attractive and easy to use | Ease of use | Lwoga (2014), Bhuasiri et al. (2012), Lin (2007), DeLone & McLean (2003), and self-development from qualitative empirical data findings |
| ESQ2 | The e-learning system enables interactive communication between me and the instructor and among other students | Systems interactivity | Cheng (2012), Wu et al. (2010), Wang et al. (2007), Arbaugh (2000, 2002), Arbaugh & Duray (2002), Hong (2002), and self-development from qualitative empirical data findings |
| ESQ3 | The e-learning system provides security features (e.g., use of password) to protect my personal notes and course materials | Security | Ozkan & Koseler (2009), Holsapple & Lee-Post (2006), DeLone & McLean (2003), and self-development from qualitative empirical data findings |
| ESQ4 | The e-learning system is available all the time | Systems availability | Bhuasiri et al. (2012), Sun et al. (2007), DeLone & McLean (2003), and self-development from qualitative empirical data findings |
| ESQ5 | The e-learning system is both easy and user-friendly to use for my learning needs | Ease of use and user-friendliness | Lwoga (2014), Freeze et al. (2010), Wang et al. (2007), DeLone & McLean (2003), and self-development from qualitative empirical data findings |
| ESQ6 | The e-learning system has many useful functions/components for my learning needs and is available all the time | Useful systems features and availability | Mohammadi (2015), Bhuasiri et al. (2012), Sun et al. (2007), DeLone & McLean (2003), Thurmond et al. (2002), and self-development from qualitative empirical data findings |
| ESQ7 | The response time of the e-learning system is reasonable | Systems responsiveness | Lwoga (2014), Bhuasiri et al. (2012), Lin (2007), Khan (2005), DeLone & McLean (2003), and self-development from qualitative empirical data findings |

*Codes: "ESQ" stands for E-learning Systems Quality, and the attached number identifies the item (e.g., ESQ1 is the first item of the construct).
Dimension 5: Institutional Support Quality (INSPQ)

| *Codes | Items | Construct Measures/Aspects | Sources |
|---|---|---|---|
| INSPQ1 | The University has clear guidelines for using e-learning to enhance my learning needs | E-learning policy and guidelines | Panda and Mishra (2007), Selim (2007), Khan (2005), and self-development from qualitative empirical data findings |
| INSPQ2 | E-learning infrastructure (computer hardware and software, electric supply and other relevant infrastructure) is readily available for use of e-learning systems | Availability of e-learning infrastructure | Gunn (2010), Selim (2007), Khan (2005), and self-development from qualitative empirical data findings |
| INSPQ3 | The University provides adequate and consistent training in computer and Internet technologies for use of e-learning features | Computer training | Marshall (2012), Ozkan and Koseler (2009), Panda and Mishra (2007), Selim (2007), Khan (2005), and self-development from qualitative empirical data findings |
| INSPQ4 | The University collects regular feedback from students to control the quality of e-learning systems in the teaching and learning process | Quality assurance mechanism | Khan (2005), and self-development from qualitative empirical data findings |
| INSPQ5 | The Internet quality is sufficient to support e-learning | Availability of e-learning infrastructure | Bhuasiri et al. (2012), Ozkan and Koseler (2009), Panda and Mishra (2007), Sun et al. (2007), and self-development from qualitative empirical data findings |
| INSPQ6 | I get all the guidance and prompt support I need from the technical staff of the University while using e-learning | Technical support | Balaban et al. (2013), Gunn (2011), Bell and Bell (2005), Panda and Mishra (2007), Khan (2005), and self-development from qualitative empirical data findings |
| INSPQ7 | Overall, institutional support related to the e-learning system is satisfactory | Institutional support | Marshall (2012), Demirkan et al. (2010), Gunn (2010), Khan (2005), and self-development from qualitative empirical data findings |

*Codes: "INSPQ" stands for Institutional Support Quality, and the attached number identifies the item (e.g., INSPQ1 is the first item of the construct).

Dimension 6: Systems Usage (SU)

| *Codes | Items | Construct Measures/Aspects | Sources |
|---|---|---|---|
| SU1 | I frequently use e-learning for my learning | Frequency of use | DeLone & McLean (2003), McGill et al. (2003), Hobbs and Klobas (2003), and self-development from qualitative empirical data findings |
| SU2 | The e-learning system is reliable and dependable for my learning | Dependability | Freeze et al. (2010), Wang et al. (2007), and self-development from qualitative empirical data findings |
| SU3 | I use e-learning whenever it is necessary for my learning | Frequency of use | Lwoga (2014), Freeze et al. (2010), Wang et al. (2007), and self-development from qualitative empirical data findings |

*Codes: "SU" stands for Systems Usage, and the attached number identifies the item (e.g., SU1 is the first item of the construct).

Dimension 7: User's Satisfaction (US)

| *Codes | Items | Construct Measures/Aspects | Sources |
|---|---|---|---|
| US1 | I am satisfied with the performance of e-learning | User satisfaction | Lin (2007), Wang et al. (2007), Arbaugh (2000), Holsapple and Lee-Post (2006), DeLone and McLean (2003), and self-development from qualitative empirical data findings |
| US2 | Use of e-learning for my learning is an enjoyable experience | Enjoyable experience | Lin (2007), Wang et al. (2007), Arbaugh (2000), and self-development from qualitative empirical data findings |
| US3 | In general, I am satisfied with the University's e-learning system | Overall user satisfaction | Lwoga (2014), Freeze et al. (2010), Lin (2007), Wang et al. (2007), Holsapple and Lee-Post (2006), and self-development from qualitative empirical data findings |

*Codes: "US" stands for User Satisfaction, and the attached number identifies the item (e.g., US1 is the first item of the construct).

Dimension 8: E-learning Systems Outcome (ESO)

| *Codes | Items | Construct Measures/Aspects | Sources |
|---|---|---|---|
| ESO1 | E-learning helps me to learn the course and to complete and submit my assignments in less time | Time saving | Freeze et al. (2010), Ozkan & Koseler (2009), Wang et al. (2007), Holsapple and Lee-Post (2006), Zhang and Nunamaker (2003), and self-development from qualitative empirical data findings |
| ESO2 | E-learning saves my travel costs to the University and costs of printing course materials | Cost saving | Larsen et al. (2009), Ozkan & Koseler (2009), McPherson and Nunes (2008), Wang et al. (2007), Roca et al. (2006), and self-development from qualitative empirical data findings |
| ESO3 | Learning via e-learning systems enhances my understanding of my courses in less time | Time saving | Eom (2011), Freeze et al. (2010), Wang et al. (2007), and self-development from qualitative empirical data findings |
| ESO4 | Overall, use of e-learning enhances my academic performance | Enhanced academic performance | Cheng (2012), Eom (2011), Freeze et al. (2010), Wang et al. (2007), Holsapple & Lee-Post (2006), McGill and Klobas (2008), and self-development from qualitative empirical data findings |

*Codes: "ESO" stands for E-learning Systems Outcome, and the attached number identifies the item (e.g., ESO1 is the first item of the construct).
Appendix 7b: The Model Constructs, Items, and Item Codes: For Instructors' Questionnaire

Dimension 1: Learners' E-learning Factor (LEF)

| *Codes | Items | Construct Measures/Aspects | Sources |
|---|---|---|---|
| LEF1 | It makes my teaching via e-learning systems easy and effective when students find working with computers and the internet easy | Computer self-efficacy | Bhuasiri et al. (2012), Panda and Mishra (2007), Selim (2007), Arbaugh (2002), Arbaugh & Duray (2002), Hong (2002), Piccoli et al. (2001), and self-development from qualitative empirical data findings |
| LEF2 | It makes my teaching easy and effective when students believe learning via e-learning is enjoyable | Attitude towards e-learning | Bhuasiri et al. (2012), Sun et al. (2007), Panda and Mishra (2007), Arbaugh (2002), Arbaugh & Duray (2002), Hong (2002), Piccoli et al. (2001), and self-development from qualitative empirical data findings |
| LEF3 | It makes my teaching easy and effective when students believe that via e-learning they can learn at their own pace | Attitude towards e-learning | Cheng (2012), Ozkan & Koseler (2009), Sun et al. (2007), Arbaugh (2002), Arbaugh & Duray (2002), Hong (2002), Thurmond et al. (2002), Piccoli et al. (2001), and self-development from qualitative empirical data findings |
| LEF4 | It makes my teaching easy and effective when students believe e-learning makes their communication easier with their instructors and classmates | Attitude towards e-learning | Piccoli et al. (2007), Panda and Mishra (2007), Arbaugh (2002), Arbaugh & Duray (2002), Hong (2002), Piccoli et al. (2001), and self-development from qualitative empirical data findings |
| LEF5 | Overall, it makes my teaching easy and effective when students believe e-learning is an interesting experience | Attitude towards e-learning | Cheng (2012), Sun et al. (2007), Panda and Mishra (2007), Piccoli et al. (2001), and self-development from qualitative empirical data findings |

*Codes: "LEF" stands for Learners' E-learning Factor, and the attached number identifies the item (e.g., LEF1 is the first item of the construct).

Dimension 2: Instructor's E-learning Factor (IEF)

| *Codes | Items | Construct Measures/Aspects | Sources |
|---|---|---|---|
| IEF1 | I enjoy using computers and the internet for e-learning systems | Computer self-efficacy | Sun et al. (2007), Panda and Mishra (2007), Thurmond et al. (2002), and self-development from qualitative empirical data findings |
| IEF2 | I create an online environment conducive and enjoyable for teaching students via e-learning systems | Creating an e-learning environment | Lwoga (2014), Demirkan et al. (2010), Gunn (2010), Panda and Mishra (2007), and self-development from qualitative empirical data findings |
| IEF3 | I find it easy to communicate with my students via e-learning systems | Computer self-efficacy | Khan (2005), Thurmond et al. (2002), Piccoli et al. (2001), and self-development from qualitative empirical data findings |
| IEF4 | I respond promptly to students' questions and concerns via e-learning systems | Creating an e-learning environment | Lwoga (2014), Sun et al. (2007), Thurmond et al. (2002), Ozkan & Koseler (2009), and self-development from qualitative empirical data findings |
| IEF5 | Overall, e-learning presents what is suitable for my teaching style | Creating an e-learning environment | Cheng (2012), Bhuasiri et al. (2012), Ozkan and Koseler (2009), and self-development from qualitative empirical data findings |

*Codes: "IEF" stands for Instructor's E-learning Factor, and the attached number identifies the item (e.g., IEF1 is the first item of the construct).

Dimension 3: E-learning Content Quality (EQ)

| *Codes | Items | Construct Measures/Aspects | Sources |
|---|---|---|---|
| EQ1 | Lecture notes are supported by multimedia tools (flash animations, simulations, videos, audios, etc.) | Course quality | Lwoga (2014), Bhuasiri et al. (2012), Lee (2010), Ozkan & Koseler (2009), Sun et al. (2007), Arbaugh (2000, 2002), and self-development from qualitative empirical data findings |
| EQ2 | Supporting materials and illustrations within the e-learning system are useful and relevant in meeting the course objectives | Relevant content | Bhuasiri et al. (2012), Lee (2010), Ozkan & Koseler (2009), Sun et al. (2007), Panda and Mishra (2007), and self-development from qualitative empirical data findings |
| EQ3 | All course data within the e-learning system are fully integrated and up-to-date to meet the course objectives for my teaching needs | Up-to-date | Lwoga (2014), Bhuasiri et al. (2012), Lee (2010), Sun et al. (2007), Panda and Mishra (2007), Khan (2005), and self-development from qualitative empirical data findings |
| EQ4 | Abstract concepts (principles, formulas, rules, etc.) are clearly illustrated and presented with concrete, specific examples | Course quality | Bhuasiri et al. (2012), Lee (2010), Ozkan & Koseler (2009), Sun et al. (2007), Arbaugh (2000), and self-development from qualitative empirical data findings |
| EQ5 | I update my teaching materials regularly in the e-learning system | Up-to-date | Lwoga (2014), Bhuasiri et al. (2012), Lee (2010), Freeze et al. (2010), Sun et al. (2007), Panda and Mishra (2007), and self-development from qualitative empirical data findings |

*Codes: "EQ" stands for E-learning Content Quality, and the attached number identifies the item (e.g., EQ1 is the first item of the construct).

Dimension 4: E-learning Systems Quality (ESQ)

| *Codes | Items | Construct Measures/Aspects | Sources |
|---|---|---|---|
| ESQ1 | The e-learning system enables interactive communication between the instructor and learners | Systems interactivity | Lwoga (2014), Wu et al. (2010), Wang et al. (2007), Arbaugh (2000, 2002), Arbaugh & Duray (2002), Hong (2002), and self-development from qualitative empirical data findings |
| ESQ2 | The e-learning system has many useful functions/components for my teaching needs and is available all the time | Useful systems features and availability | Mohammadi (2015), Bhuasiri et al. (2012), Sun et al. (2007), DeLone & McLean (2003), Thurmond et al. (2002), and self-development from qualitative empirical data findings |
| ESQ3 | The e-learning system is both easy and user-friendly to use, serving my teaching needs | Ease of use and user-friendliness | Lwoga (2014), Freeze et al. (2010), Wang et al. (2007), DeLone & McLean (2003), and self-development from qualitative empirical data findings |
| ESQ4 | The e-learning system provides security features (e.g., use of password) to protect my personal notes and course materials | Security | Ozkan & Koseler (2009), Lin (2007), Holsapple & Lee-Post (2006), DeLone & McLean (2003), and self-development from qualitative empirical data findings |
| ESQ5 | The response time of the e-learning system is reasonable | Systems responsiveness | Lwoga (2014), Bhuasiri et al. (2012), Khan (2005), DeLone & McLean (2003), and self-development from qualitative empirical data findings |

*Codes: "ESQ" stands for E-learning Systems Quality, and the attached number identifies the item (e.g., ESQ1 is the first item of the construct).

Dimension 5: Institutional Support Quality (INSPQ)

| *Codes | Items | Construct Measures/Aspects | Sources |
|---|---|---|---|
| INSPQ1 | The University provides adequate and consistent training in computer and Internet technologies for use of e-learning features | Computer training | Marshall (2012), Ozkan and Koseler (2009), Panda and Mishra (2007), Khan (2005), Selim (2007), and self-development from qualitative empirical data findings |
| INSPQ2 | The Internet quality is sufficient to support e-learning | Availability of adequate e-learning infrastructure | Bhuasiri et al. (2012), Ozkan and Koseler (2009), Sun et al. (2007), Panda and Mishra (2007), and self-development from qualitative empirical data findings |
| INSPQ3 | The institution has a rewards system to recognize instructors using e-learning for teaching | Motivation | Birch and Burnett (2009), Panda and Mishra (2007), DeRouin et al. (2005), and self-development from qualitative empirical data findings |
| INSPQ4 | The institution collects regular feedback from instructors to improve the quality of e-learning in the teaching and learning process | Quality assurance mechanism | Khan (2005), and self-development from qualitative empirical data findings |
| INSPQ5 | The institution has an e-learning policy | E-learning policy | Panda and Mishra (2007), Khan (2005), and self-development from qualitative empirical data findings |
| INSPQ6 | E-learning infrastructure (computer hardware and software, cameras, microphones, scanners, Internet and other support infrastructure) is readily and consistently available for use of e-learning systems for teaching | Availability of adequate e-learning infrastructure | Marshall (2012), Junkins et al. (2011), McPherson and Nunes (2008), Panda and Mishra (2007), Khan (2005), Alexander (2001), and self-development from qualitative empirical data findings |
| INSPQ7 | The technical support staff provides prompt responses when e-learning system difficulties are encountered | Technical support | Balaban et al. (2013), Gunn (2011), Bell and Bell (2005), Panda and Mishra (2007), Khan (2005), and self-development from qualitative empirical data findings |
| INSPQ8 | The institution has created a system to protect the intellectual property rights of my course materials within the e-learning system | Intellectual property protection | Panda and Mishra (2007), Khan (2005), and self-development from qualitative empirical data findings |
| INSPQ9 | Overall, institutional support related to the e-learning system is satisfactory | Overall institutional support | Marshall (2012), Demirkan et al. (2010), Gunn (2010), Khan (2005), and self-development from qualitative empirical data findings |

*Codes: "INSPQ" stands for Institutional Support Quality, and the attached number identifies the item (e.g., INSPQ1 is the first item of the construct).

Dimension 6: Systems Usage (SU)

| *Codes | Items | Construct Measures/Aspects | Sources |
|---|---|---|---|
| SU1 | I frequently use e-learning for teaching | Frequency of use | DeLone & McLean (2003), McGill et al. (2003), Hobbs and Klobas (2003), and self-development from qualitative empirical data findings |
| SU2 | The e-learning system is reliable and dependable for teaching | Dependability | Freeze et al. (2010), Wang et al. (2007), and self-development from qualitative empirical data findings |
| SU3 | I use e-learning systems whenever it is necessary for teaching | Frequency of use | Lwoga (2014), Freeze et al. (2010), Wang et al. (2007), and self-development from qualitative empirical data findings |

*Codes: "SU" stands for Systems Usage, and the attached number identifies the item (e.g., SU1 is the first item of the construct).

Dimension 7: User Satisfaction (US)

| *Codes | Items | Sources |
|---|---|---|
| US1 | I am satisfied with the performance of e-learning for my teaching | Wang et al. (2007), Arbaugh (2000), Holsapple and Lee-Post (2006), DeLone and McLean (2003), and self-development from qualitative empirical data findings |
| US2 | Use of e-learning for teaching is an enjoyable experience | Lin (2007), Wang et al. (2007), Arbaugh (2000), and self-development from qualitative empirical data findings |
| US3 | In general, I am satisfied with the University's e-learning system for teaching | Lwoga (2014), Freeze et al. (2010), Lin (2007), Wang et al. (2007), Holsapple and Lee-Post (2006), and self-development from qualitative empirical data findings |

*Codes: "US" stands for User Satisfaction, and the attached number identifies the item (e.g., US1 is the first item of the construct).
Dimension 8: E-learning Systems Outcome (ESO)

| *Codes | Items | Construct Measures/Aspects | Sources |
|---|---|---|---|
| ESO1 | E-learning systems help me to teach the course in less time | Time saving | Freeze et al. (2010), Ozkan & Koseler (2009), Wang et al. (2007), Holsapple and Lee-Post (2006), Zhang and Nunamaker (2003), and self-development from qualitative empirical data findings |
| ESO2 | Teaching via e-learning systems helps me to easily achieve my course objectives in less time | Time saving | Cheng (2012), Larsen et al. (2009), McPherson and Nunes (2008), Wang et al. (2007), Holsapple & Lee-Post (2006), Roca et al. (2006), and self-development from qualitative empirical data findings |
| ESO3 | E-learning systems save costs associated with frequent travel to the University and with printing student assignments and other course materials for students | Cost saving | Larsen et al. (2009), Ozkan & Koseler (2009), McPherson and Nunes (2008), Wang et al. (2007), Roca et al. (2006), and self-development from qualitative empirical data findings |
| ESO4 | Overall, use of e-learning systems enhances my teaching performance | Enhanced teaching performance | Cheng (2012), Eom (2011), Freeze et al. (2010), Wang et al. (2007), Holsapple & Lee-Post (2006), McGill and Klobas (2008), and self-development from qualitative empirical data findings |

*Codes: "ESO" stands for E-learning Systems Outcome, and the attached number identifies the item (e.g., ESO1 is the first item of the construct).
Appendix 8a: Instructor Questionnaire
Dear Instructor,
This is to kindly invite you to participate in this PhD research entitled "E-learning Systems Success Model".

E-learning, in this research, means the e-learning system (online system) being used by your University in its teaching and learning process. It includes course content, online activities such as course assessments, and means of interaction (synchronous and asynchronous) with the course content, the instructor, and other students via an e-learning platform. The purpose of this study is to measure the success of the e-learning system in your University through eight categories: E-learning Content Quality, E-learning Systems Quality, Learners' E-learning Factor, Instructor's E-learning Factor, Institutional Support Quality, Systems Usage, User Satisfaction, and E-learning Systems Outcome.

The questionnaire uses a Five-Point Likert Scale: Strongly Agree (SA) = 5, Agree (A) = 4, Neutral (N) = 3, Disagree (D) = 2, Strongly Disagree (SD) = 1. Two more options are included: Do not Know (DK), for when you are not clear about or have no experience with the specific question, and Not Applicable (NA), for when the specific question does not apply to your University's e-learning systems. The questions are designed so that they can be answered simply by ticking (√) the appropriate spaces provided. The questionnaire won't take you more than 5 minutes, but your response will benefit the researcher, the community, and the future improvement of e-learning in the country.
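(For readers of the thesis rather than respondents: the scale above implies a simple numeric coding at analysis time. The following is a minimal illustrative sketch, not part of the instrument itself, and it assumes, as one common analysis choice, that DK and NA responses are treated as missing values rather than scored.)

```python
# Illustrative coding of the five-point scale described above.
# Assumption (an analysis choice, not stated in the instrument): DK and NA
# are treated as missing values (None) rather than given numeric scores.

LIKERT_SCORES = {"SA": 5, "A": 4, "N": 3, "D": 2, "SD": 1, "DK": None, "NA": None}

def score_responses(ticked):
    """Map a respondent's ticks, e.g. ['SA', 'D', 'DK'], to numeric scores."""
    return [LIKERT_SCORES[answer] for answer in ticked]

print(score_responses(["SA", "A", "DK", "SD"]))  # [5, 4, None, 1]
```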

I would greatly appreciate it if you would respond to the questions sincerely and honestly. Your responses and information will be kept strictly confidential. Thank you very much for your time and cooperation!

PhD candidate Principal Supervisor Co-Supervisor


Yonas Hagos Prof. Monica Garfield Dr. Salehu Anteneh
Addis Ababa University Bentley University Associate Professor, Addis Ababa University
E-mail: yonypeace@yahoo.com

Phone: 0966845657
Part-I: Demographic Data

Gender □ Male □ Female

Age □ 18–28 □ 28–38 □ 38–48 □ > 48

PART II: Measurement categories: Please mark (√) in the right place:

Dimension 1: Learners‘ E-learning Factor

Items SD D N A SA NA DK

1. It makes my teaching via e-learning systems easy and effective when students find working with computers and the internet easy
2. It makes my teaching easy and effective when students believe learning via e-learning is enjoyable
3. It makes my teaching easy and effective when students believe that via e-learning they can learn at their own pace
4. It makes my teaching easy and effective when students believe e-learning makes their communication easier with their instructors and classmates
5. Overall, it makes my teaching easy and effective when students believe e-learning is an interesting experience
Dimension 2: Instructors‘ E-learning Factor

Items SD D N A SA NA DK

1. I enjoy using computers and the internet for e-learning systems
2. I create an online environment conducive and enjoyable for teaching students via e-learning
3. I update my teaching materials regularly in the e-learning system
4. I respond promptly to students' questions and concerns via e-learning systems
5. Overall, e-learning presents what is suitable for my teaching style

Dimension 3: E-learning Content Quality

Items SD D N A SA NA DK

1. Lecture notes are supported by multimedia tools (flash animations,


simulations, videos, audios, etc.).
2. Supporting materials and illustrations within the e-learning system are
useful and relevant in meeting the course objective
3. All course data within the e-learning system are fully integrated and
up-to-date to meet the course objectives for my teaching needs
4. Abstract concepts (principles, formulas, rules, etc.) are clearly
illustrated and presented with concrete specific examples.
5. I update my teaching materials regularly in the e-learning system
Dimension 4: E-learning Systems Quality

Items SD D N A SA NA DK

1. The e-learning system enables interactive communication


between the instructor and learners
2. The e-learning system has many useful
functions/components for my teaching needs and is available
all the time
3. The e-learning system is both easy and user-friendly to use
for my teaching needs
4. The e-learning system provides security features (e.g., use of
password) to protect my personal notes and course materials
5. The response time of the e-learning system is reasonable.

Dimension 5: Institutional Support Quality

Items SD D N A SA NA DK

1. The University provides adequate and consistent training in


computer and Internet technologies for use of e-learning
features
2. The internet quality is sufficient to support e-learning
3. The institution has a rewards system to recognize
instructors using e-learning for teaching
4. The institution collects regular feedback from instructors
to improve the quality of e-learning in the teaching and
learning process
5. The institution has an e-learning policy
6. E-learning infrastructure (computer hardware and software,
cameras, microphone, scanners, Internet and other support
infrastructure) are readily and consistently available for use
of e-learning systems for teaching
7. The technical support staff provides prompt responses when e-learning system difficulties are encountered
8. The institution has created a system to protect the intellectual property rights of my course materials within the e-learning system
9. Overall, Institutional support related to the e-learning
system is satisfactory.

Dimension 6: Systems Usage

Items SD D N A SA NA DK

1. I frequently use e-learning for teaching


2. The e-learning system is reliable and dependable for
teaching
3. I use e-learning systems whenever it is necessary for
teaching

Dimension 7: User Satisfaction

Items SD D N A SA NA DK

1 I am satisfied with the performance of e-learning for my


teaching
2. Use of e-learning for teaching is an enjoyable experience.
3. In general, I am satisfied with the University‘s e-learning
system for teaching.
Dimension 8: E-learning Systems Outcome

Items SD D N A SA NA DK

1. E-learning systems help me to teach the course in less time
2. Teaching via e-learning systems helps me to easily achieve my course objectives in less time
3. E-learning systems save costs associated with frequent travel to the University and with printing student assignments and other course materials for students
4. Overall, use of e-learning systems enhances my teaching performance

“End of the questionnaire!! Thank you for filling it out!”
Appendix 8b: Student Questionnaire
Dear Student,
This is to kindly invite you to participate in this PhD research entitled "E-Learning Systems Success Model".

E-learning, in this research, means the e-learning system (online system) being used by your University in its teaching and learning process. It includes course content, online activities such as course assessments, and means of interaction (synchronous and asynchronous) with the course content, the instructor, and other students via an e-learning platform. The purpose of this study is to measure the success of e-learning systems in your University through eight categories: E-learning Content Quality, E-learning Systems Quality, Learners' E-learning Factor, Instructor's E-learning Factor, Institutional Support Quality, Systems Usage, User Satisfaction, and E-learning Systems Outcome.

The questionnaire uses a Five-Point Likert Scale: Strongly Agree (SA) = 5, Agree (A) = 4, Neutral (N) = 3, Disagree (D) = 2, Strongly Disagree (SD) = 1. Two more options are included: Do not Know (DK), for when you are not clear about or have no experience with the specific question, and Not Applicable (NA), for when the specific question does not apply to your University's e-learning systems. The questions are designed so that they can be answered simply by ticking (√) the appropriate spaces provided. The questionnaire won't take you more than 5 minutes, but your response will benefit the researcher, the community, and the future improvement of e-learning in the country.

I would greatly appreciate it if you would respond to the questions sincerely and honestly. Your responses and information will be kept strictly confidential. Thank you very much for your time and cooperation!

PhD candidate Principal Supervisor Co-Supervisor

Yonas Hagos Prof. Monica Garfield Dr. Salehu Anteneh

Addis Ababa University Bentley University Associate Professor, Addis Ababa University

E-mail: yonypeace@yahoo.com

Phone: 0966845657
Part-I: Demographic Data

Gender □ Male □ Female

Age □ 18–28 □ 28–38 □ 38–48 □ >48

PART II: Measurement Categories: Please mark (√) in the right place:

Dimension 1: Learners‘ E-learning Factor

Items SD D N A SA NA DK

1. I believe that working with computers is easy
2. I believe that working with the internet is easy
3. I believe that through e-learning I can learn anytime, anywhere
4. I believe that e-learning is suitable for my learning style
5. I believe that e-learning makes communication easier with my instructor and other classmates
6. Overall, I believe that use of e-learning for my learning provides an interesting experience

Dimension 2: Instructor‘s E-learning Factor

Items SD D N A SA NA DK

1. The instructor created an online


environment conducive and enjoyable for
learning via e-learning
2. The instructor responds promptly to
questions and concerns via e-learning
systems of the University
3. I find it easy to communicate with the
instructor via e-learning
4. The instructor encourages me to interact
with other students by using e-learning
interactive tools
5. Overall, the instructor encourages me to learn via e-learning
Dimension 3: E-learning Content Quality

Items SD D N A SA NA DK

1. Lecture notes are supported by


multimedia tools (flash
animations, simulations, videos,
audios, etc.).
2. Abstract concepts (principles,
formulas, rules, etc.) are
clearly illustrated and easy to
understand
3. The course contents are well-
structured and complete to
meet the course objectives.
4. I found the course content on
the e-learning system to be
useful and relevant
5. Supporting materials, web-
links and illustrations are up-
to-date

Dimension 4: E-learning Systems Quality

Items SD D N A SA NA DK

1. The interface of the e-learning


system is attractive
2. The e-learning system enables
interactive communication between
me and the instructor and among
other students
3. The e-learning system provides security
features (e.g., use of password) to
protect my personal notes and course
materials
4. The e-learning systems is available all
the time
5. The e-learning system is both easy and
user-friendly to use for my learning
needs.
6. The e-learning system has many useful
functions/components for my learning
needs and is available all the time
7. The response time of the e-learning
system is reasonable.
Dimension 5: Institutional Support Quality

Items SD D N A SA NA DK
1. The University has clear guidelines for using e-learning to enhance my learning needs
2. e-learning infrastructure (computer
hardware and software, electric
supply and other relevant
infrastructure) are readily available
for use of e-learning systems
3. The University provides adequate
and consistent training in computer
and Internet technologies for use of
e-learning features
4. The University collects regular
feedback from students to control
the quality of e-learning systems in
the teaching and learning process
5. The Internet quality is sufficient to
support e-learning.
6. I get all the guidance and prompt
support I need from the technical
staff of the University while using
e-learning
7. Overall, institutional support related
to the e-learning system is
satisfactory

Dimension 6: Systems Usage

Items SD D N A SA NA DK

1. I frequently use e-learning for my


learning
2. The e-learning system is reliable and
dependable for my learning
3. I use e-learning whenever it is
necessary for my learning

Dimension 7: User Satisfaction

Items SD D N A SA NA DK

1. I am satisfied with the performance


of e-learning
2. Use of e-learning for my learning is
an enjoyable experience.
3. In general, I am satisfied with
University‘s e-learning system.
Dimension 8: E-learning Systems Outcome

Items SD D N A SA NA DK

1. E-learning helps me to learn the course and to complete and submit my assignments in less time
2. E-learning saves my travel costs to the University and costs of printing course materials
3. Learning via e-learning systems enhances my understanding of my courses in less time
4. Overall, use of e-learning enhances my academic performance

“End of the questionnaire!! Thank you for filling it out!”
Appendix 9: Publications

Hagos, Y., Garfield, M., & Anteneh, S. (2016). Measurement factors model for e-learning systems success. 2016 IEEE Tenth International Conference on Research Challenges in Information Science (an IEEE conference), Grenoble, France.

Hagos, Y., Garfield, M., & Anteneh, S. (2018). A conceptual model of e-learning systems success and its implication for future research. 17th International Conference on Information Technology Based Higher Education and Training (an IEEE conference), Olhão, Portugal.

286| P a g e

You might also like