
A framework for the development of educational video: An empirical approach

Lefteris Moussiades
Computer and Informatics Engineering Department, Eastern Macedonia & Thrace Institute of Technology, Agios Loukas, 65404, Kavala, Greece
lmous@teiemt.gr

Ioannis Kazanidis
Computer and Informatics Engineering Department, Eastern Macedonia & Thrace Institute of Technology, Agios Loukas, 65404, Kavala, Greece
kazanidis@teiemt.gr

Anthi Iliopoulou
Teacher of Informatics in Higher Education, Kavala, Greece
ailiolmous@yahoo.gr

Corresponding author: Ioannis Kazanidis, Computer and Informatics Engineering Department, Eastern Macedonia & Thrace Institute of Technology, Agios Loukas, 65404, Kavala, Greece, kazanidis@teiemt.gr, tel: +30 2510 462337

Dr. Lefteris Moussiades has studied Business Administration, Information Technology, and Didactics. He is an associate professor at the Eastern Macedonia & Thrace Institute of Technology. He works on software development projects including ERP, software for scientific computing, and desktop and mobile applications for business, education, and gaming on most major platforms.

Dr. Ioannis Kazanidis is an Adjunct Assistant Professor at the Eastern Macedonia and Thrace Institute of Technology, Greece, and a postdoctoral researcher at the Advanced Educational Technologies & Mobile Applications Lab (AETMA Lab). He has published more than 75 papers in international journals and conferences in the area of educational technology and e-learning.

Anthi Iliopoulou holds a degree in Computer Engineering and Informatics and a Master of Education in e-learning. She has been working as a teacher of Informatics in Secondary Education since 1999. Her research interests are educational video, Web 2.0 applications, and the didactics of informatics.
A framework for the development of educational video: An empirical approach

Video has been widely used as an effective medium for delivering varied educational content. The enormous expansion of educational video is due to its effectiveness and to the spectacular evolution of video construction technology. Modern technology allows the rapid and economical development of educational videos as software systems. Such videos may be fully developed using software tools, without the need for cameras or other expensive resources such as actors. In this paper, we propose a framework for the effective development of educational video. The proposed framework consists of a methodology and a set of design guidelines, both oriented towards the achievement of the learning objectives related to an educational video. Experimentally, we compare videos produced following the proposed framework with videos produced following a well-known alternative methodology. Experimental results support the effectiveness of our approach and encourage further exploration.

Keywords: educational video; video development; design guidelines; quantitative analysis

INTRODUCTION

Lately, educational videos developed by utilising software tools (Educational Video as Software, or EVS for short) have been widely used as an effective medium for delivering educational content. Nowadays, video production and consumption rates are exploding. Statistics provided by YouTube, the most widely known video hosting service, are indicative: YouTube counts more than a billion users (almost one-third of all people on the Web), and every day there are billions of views on YouTube.

Many surveys, carried out at different levels of education, reveal the extended and promising use of video. Historically, video has been utilized since the Second World War, when it was used in the training of soldiers. Since then, the technology that can be applied to video construction, storage, and delivery has evolved dramatically. In the 1960s and 1970s, television films were used in the classroom. In the 1980s, several new forms of video came along, such as laserdiscs and the Video Home System (VHS), along with satellite delivery, which all became common means for delivering instruction in distance education networks. In the 1990s, two-way video conferencing and camcorders made it possible for educators and students to begin to create their own analog content. In the 2000s, new technologies appeared, including Digital Versatile Discs (DVDs), podcasts, streaming video, video hosting services like YouTube and TeacherTube, webcams and camera-enabled smartphones. In the first decade of the 21st century, classrooms were sufficiently connected to the Internet that digital content could be distributed globally. Thus, the broad use of the Internet, along with the domination of YouTube, has established the construction, distribution and use of educational video.

Nowadays, video streaming and video on demand add new value to educational video. Video streaming is often used for live lectures, because the video can be watched almost immediately, simulating real time (Whatley & Ahmad, 2007). On the other hand, Video on Demand (VoD) enables teachers to find their preferred educational content and watch it at their convenience, embedding it in the learning process (Cruse, 2007). In addition, students appreciate the flexibility that VoD offers in terms of scheduling and eliminating the need to physically attend seminars (Harrison, 2015).

Along with the technologies mentioned, a new concept called “low-cost educational video” has made an appearance. A low-cost educational video is a short video that supports streaming, has a specific goal and has been developed in a short period of time using few resources, and it may be combined with other material of the course (Simo et al., 2010). The low-cost educational video is characterized by lower budgets, shorter periods of development and simplified processes of video upgrading, and allows efficient adaptation into the course according to the lecturer's paradigm (Bravo, Amante, Simo, Enache, & Fernandez, 2011).

In this context, we consider video to be a software system rather than an art product, and we propose a framework that facilitates the construction of effective educational videos. In practice, the motivation for the proposed framework resulted from assigning students to build a video presenting their thesis. In many cases, the time required to supervise the video construction was comparable to the time spent supervising the thesis itself. At the same time, we had the opportunity to observe a number of errors that students often make. Therefore, the proposal of a framework for the efficient development of video arose naturally. The proposed framework consists of a methodology inspired by software development models (Munassar & Govardhan, 2010) and a set of design guidelines. Both the methodology and the set of guidelines contribute to the video quality.

For the evaluation of the proposed framework, we employed a quasi-experimental method. The results of the experiments show that our approach is useful and encourage us to pursue further research.

A Methodology for Video Development

In this section, we present the first part of the proposed framework, which is a Methodology for educational Video Development, or MVD for short. MVD operates on an initial input content for which an educational video is required. The input content includes text and may include pictures. MVD uses the learning objectives of the input content to guide the development of the educational video. Assume that the input content consists of several individual subsections, each of different significance to the learning objectives. At one end, some sections may be totally unrelated to the learning objectives, i.e. filler. At the other end, we may encounter highly concise and important sections. While experimenting by assigning students to build videos, we noticed that important sections of the input content frequently corresponded to only a few frames of the produced video, whereas in other cases relatively less important parts of the input content occupied large spaces in the produced video. Typically, this imbalance between input content and produced video resulted in low effectiveness of the produced video. The objective of MVD is to reduce this imbalance.

Initially, we thought that instructional design methods, like the ADDIE model (Forest, 2014), ASSURE (Smaldino, Lowther & Russel, 2012) or the Dick and Carey instructional model (Dick et al., 2005), could be adapted to the MVD objective. However, most of these models presuppose a particular target audience, and such an assumption does not always hold true for educational video. Due to its nature, educational video is often provided in a public context and attracts learners with very diverse backgrounds. Moreover, we need a methodology that is simple, is specific to the development of video and can be used by people who are not tutors, e.g. a student who wishes to make a video about her/his thesis. Therefore, we propose a novel methodology which, like most software development processes, is cyclic. As figure 1 shows, MVD consists of five steps: Determination of general learning objectives, Determination of specific learning objectives, Video Construction, Evaluation and Reformation.


Figure 1: The five stages of the MVD cyclic process.

We now present the five stages of MVD in more detail:

(1) Determination of general learning objectives. Consider the input content as a learning unit and determine its general orientation, expressed by a set of learning objectives (Arreola, 1998). Take the input content into consideration as a whole. However, pay special attention to the following parts of the input content, if they exist: the title, the analysis of objectives, the motivation, and the list of keywords.

(2) Determination of specific learning objectives. Analyze the input content so as to extract specific learning objectives, i.e. observable behaviors (Kennedy, 2006). Note that specific objectives are used both for the definition of the lesson content and for the methodology and type of evaluation (Zavlanos, 1998). Similarly, specific objectives are the basis for structuring the video. Here, we propose a technique to facilitate the determination of specific objectives. The proposed technique is inspired by conventional content analysis (Hsieh & Shannon, 2005), a kind of qualitative content analysis. In many cases content analysis is used to provide “visually oriented end-user interfaces that support searching, browsing and summarization of the media contents” (Ali, Lee, & Smeaton, 2011). More precisely, we propose the following steps (a minimal data-structure sketch of this bookkeeping is given after this list):

(a) Read the input content word by word and highlight the exact words or phrases from the text that appear to capture key thoughts or concepts or otherwise correspond to a specific learning objective.

(b) Provide an initial labelling for the highlighted words and phrases. An emerging label may describe more than one highlighted word or phrase.

(c) Where possible, determine relations between labels. For example, suppose that in a text describing the function of a software system, two labels have emerged: wait for state A and wait for state B. Obviously, these labels correspond to two distinct wait states of the software. What is their relationship? For example, will the program enter wait for state B immediately after it exits wait for state A?

(d) Based on the labels and their relationships, formulate the specific learning objectives.

(3) Video Construction. Construct a video so that a specific set of frames corresponds to each specific learning objective. Support the Signaling Principle (Mayer, 2001) by adding an introductory section to inform the student about the general learning objectives and the overall structure of the video.

(4) Evaluation. Use the pre-experimental pretest-posttest design (Cohen, Manion, & Morrison, 2003) to evaluate the produced video. Note that the experimental method is generally considered among the most suitable methods for extracting conclusions about instructional methods (Odle & Mayer, 2009). The procedure of the experiment is as follows:

(a) Construct a questionnaire consisting of closed-type questions such as True-False, Multiple Choice, and Matching. The questionnaire must include one or more questions for each specific learning objective. Also, ensure that the questionnaire includes specific questions to evaluate the effectiveness of the introductory section (recall that the introductory section has been designed based on the general learning objectives). Recall also that there is a correspondence between learning objectives and video frames; therefore, there is a correspondence between questions and video frames, which is very useful when corrections to the produced video are required.

(b) Provide the audience with the aforementioned questionnaire. Afterward, let the audience watch the MVD video. Subsequently, the audience takes the original test once more. Then compare the results of the pre-test and the post-test; the improvement between them can be attributed to the video. This design is compatible with “criterion-referenced evaluation”, which suggests that we measure the results of students' evaluations and compare them with the learning objectives set at the beginning of the lesson.

(5) Reformation. Reform the produced video by analysing the results of the tests. Since there is a correspondence between the video frames and the questions, the analysis of the test results can identify the specific parts of the produced video that should be reformed.
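To make the bookkeeping of step 2 concrete, the highlights, labels and relations can be held in a simple data structure. The following Python sketch is only an illustration, not part of MVD itself; the phrases, the labels and the relation are hypothetical and borrowed from the wait-state example above.

    # Hypothetical bookkeeping for MVD step 2: highlighted phrases are grouped
    # under labels, labels are related, and specific learning objectives are
    # then formulated from the labels and their relations (step 2d).
    highlights = {
        "wait for state A": ["the program blocks until state A is reached"],
        "wait for state B": ["after state A, the program blocks until state B"],
    }

    # (label, relation, label) triples discovered in step 2(c).
    relations = [("wait for state A", "precedes", "wait for state B")]

    # Specific objectives formulated in step 2(d) from labels and relations.
    objectives = [
        "The learner can explain that the program enters wait for state B "
        "immediately after it exits wait for state A.",
    ]

    for left, relation, right in relations:
        print(f"{left} --{relation}--> {right}")
    for objective in objectives:
        print("Specific objective:", objective)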

Design guidelines

In correspondence with design guidelines for educational software and multimedia (Beale & Sharples, 2002; Lee & Boling, 1999; Plass, Homer, & Hayward, 2009), the guidelines that we propose relate to the achievement of learning objectives. These guidelines refer mainly to the video construction stage of MVD.

(1) Be brief yet inclusive. Take into consideration that a long video may tire the learner (Liao, 2012; Meseguer-Martinez et al., 2017). Note that this guideline is compatible with the Coherence Principle (Clark & Mayer, 2011; Brame, 2016).

(2) Use a conversational style. Known as the personalization principle, the use of a conversational style has been shown to be very effective for students' learning (Mayer, 2008).

(3) Pay attention to the aesthetics of the video. Note that EURICON (2008) suggests the careful alternation of colors, images, and sound so as to develop the learner's aesthetic conception.

(4) Control the rate of speech, since student engagement has been found to increase with speaking rate (Guo et al., 2014).

(5) Define the target audience. Deciding exactly which people are supposed to learn should precede the construction of an educational video (Schwartz & Hartman, 2007). Obviously, many design characteristics can be influenced by the target audience. Likewise, the definition of the usage of the educational video constitutes a keystone of its pedagogical framework (Koumi, 2006).

(6) Don't overload video frames with text. Use text only when it is essential, e.g. for titles. Keep in mind that video should be based mainly on audio and visual messages and only secondarily on text. As Shepard and Cooper (1982) mention, keep images in balance with text, because visual cues create imagery critical to memory processes.

(7) Use narration. As the modality principle states, verbal speech is more efficient than printed text for the interpretation of images (Mayer, 2001).

(8) Provide images that are uncluttered and simple (Denning, 1992).

(9) Synchronize audio and visual messages in compliance with the Temporal Contiguity Principle (Mayer, 2008). Ensure that the visual and audio parts of a message appear simultaneously. Alternatively, the visual part may precede the audio part for a short time. This schema stimulates the learner's curiosity, reinforces discovery learning (Bruner, 1963) and helps the learner to provide answers and proceed with the personal construction of knowledge (Leidner & Jarvenpaa, 1995).

(10) Support variety. Combine moving images, slow-motion or real-time motion, and visual and sound effects, as suggested by Koumi (2006). Use humour appropriate to the learner's age (Denning, 1992). Additional elements, like the timely occurrence of music, may provide variety as well (Koumi, 2006). These characteristics impart additional value to the video used in the learning process. However, avoid overly emotionally stimulating music, so as not to take away from the learning process (Denning, 1992).

(11) Control the pace. Note that too fast a pace may prevent the learner from understanding a message, whereas too slow a pace may lose the learner's interest.

(12) Follow the signalling principle (Mayer, 2001). According to the cognitive theory of multimedia learning, signalling can guide the learner's attention toward the essential material, thereby minimizing the learner's processing of extraneous material (Mayer, 2008).

(13) Provide introductory notes or knowledge background related to the content, as recall of existing knowledge will be facilitated and the potential for learning will increase (Kennedy, Petrovic, & Keppell, 1998). In addition, learners' anxiety will be reduced and they will be able to judge the important goals of the instruction (Overbaugh, 1994).

(14) Organize the video in sections, as information provided in logical chunks helps the viewer to mentally organize the topic (Denning, 1992). Emphasize the main learning points at the end of each section and give clues about what is next (Koumi, 2006). Maintain a smooth flow and sequencing of information so that minimal effort is required to make links between pieces of information (Kennedy et al., 1998; Peck & Hannafin, 1988). Finally, summarize the key features at the end of the video, helping students to stand back from the story (Denning, 1992).

(15) Use captions to make the video available in another language or to people with hearing impairments (a minimal caption-file sketch is given after this list).
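As an illustration of guideline 15, captions can be delivered as a sidecar subtitle file. The following Python sketch writes a minimal two-cue file in the WebVTT format, which is accepted by YouTube and HTML5 video players; the timings and the cue text are hypothetical.

    # Hypothetical two-cue WebVTT caption file (guideline 15); the timings
    # and cue text are illustrative only.
    vtt = """WEBVTT

    00:00:01.000 --> 00:00:04.000
    Welcome to this video on vehicle monitoring software.

    00:00:04.500 --> 00:00:08.000
    First, we present the general learning objectives.
    """

    with open("captions.vtt", "w", encoding="utf-8") as f:
        f.write(vtt)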

Experimentation

Design and procedure

The purpose of this study is to examine whether the proposed framework for the development of educational video affects students' performance, either positively or negatively. More precisely, the research question that guides this study is as follows:

Is there any significant difference between the performance of students who watched the video produced with MVD and the performance of students who watched a video produced based on the recommendations of Brame (2016) for effective educational videos?

The experiment was designed in such a way as to resolve possible problems due to internal validity threats, as defined by Campbell, Stanley, and Gage (1963). History, maturation, and experimental mortality threats are eliminated because of the short execution period of the experiment. The pretest and posttest have the same set of questions in order to counter the instrumentation threat.

The idea of a true experiment was rejected because of the difficulty of randomly choosing the group of students participating in the study (Ross & Morrison, 2003). In cases where random selection of subjects is not possible, a quasi-experiment introduces a control group and preserves internal validity (Cohen et al., 2003).

Therefore, we commissioned a group of two undergraduate students of the Information Technology department of the Eastern Macedonia and Thrace Institute of Technology to produce two videos with common input content related to vehicle monitoring software. Note that neither student had prior experience in video development. The first video (available at https://www.youtube.com/watch?v=01MFUFM7pUsn), which we call the Balanced Video (BV), was produced according to the proposed methodology and guidelines. The second video (available at https://www.youtube.com/watch?v=BHXiukvwgnA), which we call the Simple Video (SV), was produced according to the recommendations of Brame (2016) for effective educational videos.

The experiment had two groups of participants, equal in terms of knowledge. One group watched the SV and the other watched the BV. Students' performance was evaluated according to their grades on the same pre-test (before watching the video) and post-test (after watching the video). The instrument for collecting the results was administered as a questionnaire (available upon request) in hard copy, and it consisted of 14 questions that were extracted from an input content of one thousand words.


Measurements

The measuring tools of this study included the pre-test and post-test questionnaires. The pre-test aimed to evaluate the students' prior knowledge of the domain of the produced videos. It consisted of fourteen fill-in-the-blank items. The post-test had exactly the same items, so that students' knowledge of the subject could be checked on an equivalent basis. The participants were students in the 2nd year of the Computer and Informatics Engineering Department at the Eastern Macedonia and Thrace Institute of Technology. Initially, all students answered a pre-test in order to check their knowledge of the subject. Students with very low or very high grades were excluded from the experimentation procedure in order to avoid statistical regression. At the end of this procedure, 80 students were selected to participate in the experiment. The selected students were separated into two groups of 40 students each. A t-test on the students' pre-test grades was carried out in order to examine whether the knowledge of the two groups could be considered equivalent. According to the results of this statistical analysis, no significant difference was found in the prior knowledge of the two groups, since the p-value equals 0.858 > 0.05. Therefore, both groups were considered equivalent with regard to knowledge.

After that, the experiment date was set. In a one-hour session, the first group watched the SV, whereas the second group watched the BV. After that, both groups were asked to answer the post-test. Finally, a discussion took place between the researchers and the participants of each group.

Results

The target sample comprised 80 students with a mean age of 20.8 years, who voluntarily agreed to participate in this experiment. The students' post-test grades were collected. Table 1 displays the grades of the students, comparing the average post-test grade of those who watched the BV with the average post-test grade of those who watched the SV. A t-test on the post-test grades, with an alpha level of 0.05, was performed to compare the achievements of the BV group students with the achievements of the SV group students, in order to address the research question. Before employing the t-test, Levene's homogeneity test was conducted. The F value was equal to 0.09 (p > .05), indicating that the homogeneity test did not achieve statistical significance; therefore, the t-test could be applied. The t-test shows that the grades of those who watched the BV were significantly higher than the grades of the students who watched the SV. The difference is statistically significant at the 95% confidence level: t(78) = -27.9, p = .001 < .05.

Table 1: Average post-test grades for students by group.

Group   Average exam grade   Standard Deviation   Standard Error Mean   N
SV      61.0943              2.7278               .4313                 40
BV      76.7090              2.2407               .3543                 40
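For readers who wish to reproduce this kind of analysis, the following Python sketch applies Levene's test and the independent-samples t-test with SciPy. It is not the analysis script used in this study; the grades are randomly generated to match the reported group means and standard deviations, whereas the actual analysis used the real grades of the 80 participants.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Hypothetical grades drawn to match the reported group means and
    # standard deviations; a real analysis would use the actual 40 post-test
    # grades per group.
    sv_grades = rng.normal(loc=61.09, scale=2.73, size=40)
    bv_grades = rng.normal(loc=76.71, scale=2.24, size=40)

    # Levene's test: a non-significant result (p > .05) indicates homogeneous
    # variances, so the standard independent-samples t-test may be applied.
    levene_f, levene_p = stats.levene(sv_grades, bv_grades)

    # Independent-samples t-test on the post-test grades (alpha = 0.05).
    t_stat, t_p = stats.ttest_ind(sv_grades, bv_grades, equal_var=True)

    print(f"Levene: F = {levene_f:.2f}, p = {levene_p:.3f}")
    print(f"t({len(sv_grades) + len(bv_grades) - 2}) = {t_stat:.1f}, p = {t_p:.4f}")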

Students’ results on each question are presented in figures 2 and 3.


[Figure 2: grouped bars of pre-test, post-test and knowledge-gained scores (scale 0-100) for questions q1-q14 and their mean]

Figure 2: Performance of BVG.

[Figure 3: grouped bars of pre-test, post-test and knowledge-gained scores (scale 0-100) for questions q1-q14 and their mean]

Figure 3: Performance of SVG.

Figure 2 presents the pre-test and post-test performance on each question (q1-q14) for the BV group (BVG), on a scale from 0 to 100. In addition, it presents the knowledge gained, i.e. the difference between post-test and pre-test performance. Finally, figure 2 includes the mean of the pre-test performance, the mean of the post-test performance and the mean of the knowledge gained. Similarly, figure 3 presents the results for the SV group (SVG). Comparing the two figures, we can see that pre-test performance is similar for both groups, whereas in the post-test the BVG outperforms the SVG by an average of nearly 15 units. We also note that the knowledge gained on question 3 in figure 2 is quite low. However, this is due to the high pre-test score, which leaves a small window for improvement. In contrast, on question 9 both the knowledge gained and the pre-test knowledge are quite low. This is a sign that the video frames corresponding to question 9 may need improvement.
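This diagnostic, a low knowledge gain that is not explained by a high pre-test score, can be computed mechanically from the per-question results and fed back into the reformation step of MVD. The following Python sketch illustrates the idea; the scores and the thresholds are hypothetical, not the values plotted in the figures.

    import numpy as np

    # Hypothetical per-question mean scores (0-100) for the 14 questions; the
    # actual values are those plotted in figures 2 and 3.
    pre_test  = np.array([30, 25, 85, 40, 35, 28, 45, 33, 20, 38, 42, 31, 27, 36])
    post_test = np.array([75, 70, 92, 80, 78, 74, 83, 76, 35, 79, 84, 77, 72, 80])

    gain = post_test - pre_test  # knowledge gained per question

    # Flag frames for reformation (MVD step 5): low gain that is NOT explained
    # by a high pre-test score (which leaves little room for improvement).
    LOW_GAIN, HIGH_PRIOR = 25, 60  # hypothetical thresholds
    for q, (prior, g) in enumerate(zip(pre_test, gain), start=1):
        if g < LOW_GAIN and prior < HIGH_PRIOR:
            print(f"q{q}: pre-test {prior}, gain {g} -> review the corresponding frames")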

Discussion and Conclusion

As we have already presented, the knowledge gained by students who watched the BV is higher than the knowledge gained by students who watched the SV. These results confirm our preliminary evaluation (Author et al., 2016), which took place in October 2014, and show that the new balanced video influences students' knowledge acquisition. Based on an analysis of the videos' content, as well as on an interview with their developers, we suggest that the following factors were crucial to this result.

(1) The BV was developed following a particular methodology plus a set of design guidelines, whereas the SV was developed simply following a set of recommendations. Therefore, our approach is more comprehensive than Brame's recommendations. In addition, we suggest that the particular context delimited by the application of MVD simplifies the proper application of the proposed design guidelines.

(2) It is evident that the reformation of the video that is provided in MVD is very helpful for improving video quality. This view was supported by the video developers, although the methodology was not followed precisely, as indicated by the low performance on question 9.

(3) Another critical factor is that MVD guides the video producers to clarify the general and specific learning objectives, which are then taken into consideration in a concrete manner in the video construction phase as well as in the application of the proposed design guidelines.

(4) The balance between the input content and the produced video was evaluated as better in the BV than in the SV.

(5) Finally, the BV was aesthetically better than the SV.

In summary, the BV was accurate in its objectives and directly connected both to them and to the original text, while it achieved a higher-quality result than the SV. In this context, it was easier for students to watch, follow and thus learn from. Therefore, it seems that the proposed framework can help video producers create better educational videos with more favorable educational outcomes.

However, the following important limitations should be considered when interpreting the current findings:

(1) The number of participants in the experiment was relatively low (80). Therefore, it would be advisable for an experiment with more participants to take place in the future.

(2) The experiment evaluated the subjects' responses by comparing one BV with one SV, and both videos had been constructed by the same group of producers. The process should be repeated with more videos constructed by a variety of creators.

(3) Since all the experiment subjects were students from a specific university, their characteristics may differ from those of students of other universities. Therefore, more extensive experimentation could be beneficial.

Finally, future studies will also allow the verification and improvement of the proposed framework.

REFERENCES

Author 1, Author 2 & Author 3 (2016). Paper details are not displayed according to journal submission rules.
Ali, N. M., Lee, H., & Smeaton, A. F. (2011). Use of content analysis tools for visual interaction design. In Visual Informatics: Sustaining Research and Innovations (pp. 74-84). Berlin, Heidelberg: Springer.
Arreola, R. A. (1998). Writing learning objectives. A teaching resource document from the office of the vice chancellor for planning and academic support. Retrieved from http://nexus.hs-bremerhaven.de/Library.nsf/0946dbe6a3c341e8c12570860044165f/3582b289612f6232c12573a2005aa4d8/$FILE/Learning_Objectives.pdf
Beale, R., & Sharples, M. (2002). Design guide for developers of educational software. British Educational Communications and Technology Agency (Becta).
Brame, C. (2016). Effective educational videos. ACT Teaching Guides, Center for Teaching, Vanderbilt University. Retrieved from https://cft.vanderbilt.edu/guides-sub-pages/effective-educational-videos/
Bravo, E., Amante, B., Simo, P., Enache, M., & Fernandez, V. (2011). Video as a new teaching tool to increase student motivation. In Proceedings of the Global Engineering Education Conference (EDUCON), 2011 IEEE (pp. 638-642). IEEE.
Bruner, J. S. (1963). The process of education. New York: Vintage Books.
Campbell, D. T., Stanley, J. C., & Gage, N. L. (1963). Experimental and quasi-experimental designs for research. Boston: Houghton Mifflin.
Clark, R. C., & Mayer, R. E. (2011). E-learning and the science of instruction: Proven guidelines for consumers and designers of multimedia learning (3rd ed.). San Francisco, CA: John Wiley & Sons.
Cohen, L., Manion, L., & Morrison, K. (2003). Research methods in education (5th ed.). London, UK: Routledge Falmer.
Cruse, E. (2007). Using educational video in the classroom: Theory, research and practice. Wynnewood, PA: Library Video Company. Retrieved from http://www.libraryvideo.com/articles/article26.asp
Denning, D. (1992). Video in theory and practice: Issues for classroom use and teacher video evaluation. Retrieved from http://www.ebiomedia.com/downloads/VidPM.pdf
Dick, W., Carey, L., & Carey, J. (2005). The systematic design of instruction (6th ed.). Allyn & Bacon, pp. 1-12. ISBN 0-205-41274-2.
EURICON (2008). Evaluation and utilization criteria of educational material. OEPEK, Greece. Retrieved from http://repository.edulll.gr/edulll/retrieve/3460/1024.pdf (in Greek)
Forest, E. (2014). The ADDIE model: Instructional design. Educational Technology. Retrieved from http://educationaltechnology.net/the-addie-model-instructional-design/
Guo, P. J., Kim, J., & Rubin, R. (2014). How video production affects student engagement: An empirical study of MOOC videos. In Proceedings of the ACM Conference on Learning at Scale (L@S 2014). Retrieved from http://groups.csail.mit.edu/uid/other-pubs/las2014-pguo-engagement.pdf
Harrison, D. (2015). Assessing experiences with online educational videos: Converting multiple constructed responses to quantifiable data. The International Review of Research in Open and Distance Learning, 16(1), 168-192. Retrieved from http://www.irrodl.org/index.php/irrodl/article/view/1998/3205
Hsieh, H.-F., & Shannon, S. E. (2005). Three approaches to qualitative content analysis. Qualitative Health Research, 15(9), 1277-1288. Retrieved from http://www.sagepub.com/millsandbirks/study/Journal%20Articles/Qual%20Health%20Res-2005-Hsieh-1277-88.pdf
Kennedy, D. (2006). Writing and using learning outcomes: A practical guide. Cork: University College Cork.
Kennedy, G., Petrovic, T., & Keppell, M. (1998). The development of multimedia evaluation criteria and a program of evaluation for computer aided learning. ASCILITE, 98, 407.
Koumi, J. (2006). Designing video and multimedia for open and flexible learning. London: Routledge.
Lee, S. H., & Boling, E. (1999). Screen design guidelines for motivation in interactive multimedia instruction: A survey and framework for designers. Educational Technology, 39(3), 19-26.
Leidner, D. E., & Jarvenpaa, S. L. (1995). The use of information technology to enhance management school education: A theoretical view. MIS Quarterly, 19(3), 265-291.
Liao, W. C. (2012). Using short videos in teaching a social science subject: Values and challenges. Journal of the NUS Teaching Academy, 2, 42-55.
Mayer, R. (2001). Multimedia learning. Cambridge: Cambridge University Press.
Mayer, R. (2008). Applying the science of learning: Evidence-based principles for the design of multimedia instruction. Cognition and Instruction, 19, 177-213.
Meseguer-Martinez, A., Ros-Galvez, A., & Rosa-Garcia, A. (2017). Satisfaction with online teaching videos: A quantitative approach. Innovations in Education and Teaching International, 54(1), 62-67.
Munassar, N. M. A., & Govardhan, A. (2010). A comparison between five models of software engineering. IJCSI, 5, 95-101.
Odle, T., & Mayer, R. (2009). Experimental research. Education.com. Retrieved from http://www.education.com/reference/article/experimental-research
Overbaugh, R. C. (1994). Research-based guidelines for computer-based instruction development. Journal of Research on Computing in Education, 27(1), 29-47.
Peck, K. L., & Hannafin, M. J. (1988). The design, development & evaluation of instructional software. Indianapolis, IN: Macmillan.
Plass, J. L., Homer, B. D., & Hayward, E. O. (2009). Design factors for educationally effective animations and simulations. Journal of Computing in Higher Education, 21(1), 31-61.
Ross, S. M., & Morrison, G. R. (2003). Experimental research methods. In D. Jonassen (Ed.), Handbook of research on educational communications and technology (2nd ed., pp. 1021-1043). Mahwah, NJ: Lawrence Erlbaum Associates.
Schwartz, D. L., & Hartman, K. (2007). It is not television anymore: Designing digital video for learning and assessment. In Video research in the learning sciences (pp. 335-348).
Shepard, R. N., & Cooper, L. A. (1982). Mental images and their transformations. Cambridge, MA: MIT Press.
Simo, P., Fernandez, V., Algaba, I., Salan, N., Enache, M., Albareda-Sambola, M., & Rajadell, M. (2010). Video stream and teaching channels: Quantitative analysis of the use of low-cost educational videos on the web. Procedia - Social and Behavioral Sciences, 2(2), 2937-2941.
Smaldino, S., Lowther, D., & Russel, J. (2012). Instructional technology and media for learning (10th ed.). Pearson Education.
Whatley, J., & Ahmad, A. (2007). Using video to record summary lectures to aid students' revision. Interdisciplinary Journal of E-Learning and Learning Objects, 3(1), 185-196.
Zavlanos, M. (1998). Didactics (2nd ed.). “Hellin” Publications.
