approach
Lefteris Moussiades
lmous@teiemt.gr
Ioannis Kazanidis
kazanidis@teiemt.gr
Anthi Iliopoulou
ABSTRACT
Video has been widely used as an effective medium for delivering varied educational content. The enormous expansion of educational video is due to its effectiveness and to the spectacular evolution of video construction technology. Modern technology allows the rapid and economical development of educational videos as software systems. Such videos may be fully developed using software tools, without the need for cameras or other expensive resources, e.g. actors. In this paper, we propose a framework for the effective development of educational video. The proposed framework consists of a methodology and a set of design guidelines, both oriented towards the achievement of the learning objectives related to an educational video. Experimentally, we compare videos produced following the proposed framework with videos produced following a well-known alternative methodology. The experimental results demonstrate the success of our approach and encourage further exploration.
INTRODUCTION
Educational Videos as Software systems (or EVS for short) are widely used as an effective medium for delivering
educational content. Nowadays, video production and consumption rates are exploding.
Statistics provided by YouTube, the most widely known video hosting service, are
indicative: YouTube counts more than a billion users — almost one-third of all people
on the Web — and every day, there are billions of views on YouTube.
Many surveys, carried out at different levels of education, reveal the extensive and promising use of video. Historically, video has been utilized in training since the Second World War, when it was used for the training of soldiers. Since then, the technology that can be applied
to video construction, storage, and delivery has dramatically evolved. In the 60’s and
70’s television films were used in the classroom. In the 80’s several new forms of video
came along such as laserdiscs and the Video Home Systems (VHS), along with satellite
delivery, which all became common means for delivering instruction in distance
education networks. In the 90’s two-way video conferencing and camcorders made it
possible for educators and students to begin to create their own analog content. In 2000,
new technologies appeared including Digital Versatile Discs known as DVDs, Podcasts,
Streaming video, video hosting services like YouTube and Teacher Tube, Webcams and
Camera-enabled smartphones. In the first decade of the 21st century, classrooms were
sufficiently connected to the Internet in such a way that digital content could be
distributed globally. Thus, the broad use of the Internet, along with the dominance of YouTube, has established the construction, distribution, and use of educational video.
Nowadays, video streaming and video on demand constitute new added values
to the educational video. Video streaming is often used for live lectures, because video
can be watched almost immediately, simulating real time (Whatley & Ahmad, 2007).
On the other hand, Video on Demand (VoD) enables teachers to find material suited to their preferred process (Cruse, 2007). In addition, students appreciate the flexibility that VoD offers in
terms of scheduling and eliminating the need to physically attend seminars (Harrison,
2015).
Recently, the concept of the “low-cost educational video” has made an appearance. The low-cost educational video is a short
video that supports streaming, has a specific goal and has been developed in a short
period of time, using few resources, whereas it may be combined with other material of
the course (Simo et al., 2010). The low-cost educational video is characterized by lower
and allows efficient adaptation into the course according to the lecturer paradigm
educational videos. In practice, the motivation for the proposed framework resulted
from assigning students to build a video that presents their thesis. In many cases, the
time required to supervise the video construction was comparable to time supervision of
the thesis. At the same time, we had the opportunity to observe a number of errors that
students often make. Therefore, we arrived at the proposal of a framework for the efficient development of educational videos, consisting of a methodology and a set of design guidelines. Both the aforementioned methodology and the set of guidelines were evaluated with an experimental method. The results of the experiments show that our approach is actually useful.
In this section, we present the first part of the proposed framework which is a
Methodology for educational Video Development, or MVD for short. MVD operates on
an initial input content for which an educational video is required. The input content
includes text and may include pictures. MVD uses the learning objectives of the input content to guide the development of the educational video. Assume that the input
content consists of several individual subsections, all of which are of a different
significance to the learning objectives. At one end, some sections may be totally unrelated to the learning objectives, i.e. irrelevant chatter. At the other end, we may encounter sections that are crucial to them. When observing students build videos, we noticed that, frequently, important sections of the input content
corresponded to only a few frames of the produced video, whereas in other cases,
relatively less important parts of the input content occupied large spaces in the produced
video. Typically, the imbalance between input content and produced video resulted in
the low effectiveness of the produced video. The objective of MVD is to reduce the
aforementioned imbalance.
Initially, we thought that instructional design methods, like the Addie Model
(Forest, 2014), ASSURE (Smaldino, Lowther & Russel, 2012) or Dick and Carey
Instructional model (Dick et al., 2005) could be repurposed towards the MVD objective. However, such methods typically assume a well-defined audience, and such an assumption does not always hold true for educational video. Due to its nature, the educational video is often provided in a public context and attracts learners with varied backgrounds. Moreover, MVD is dedicated to the development of video and can be used by people who are not tutors, e.g. a student who wishes to make a video about her/his thesis. Therefore, we propose a novel methodology consisting of the following steps:
(1) Study the input content as a learning unit and determine its general orientation, expressed by a set of learning objectives (Arreola, 1998). Take into consideration the input content as a whole.
However, pay special attention to the following parts of the input content, if they exist: the title, the analysis of objectives, the motivation, and the list of keywords.
Note that specific objectives are used both for the definition of the lesson
content as well as for the methodology and type of evaluation (Zavlanos, 1998).
Similarly, specific objectives are the basis for structuring the video. Here, we adopt content analysis techniques (Ali, Lee, & Smeaton, 2011). More precisely, we propose the following steps:
(a) Read the input content word by word and highlight the exact words or phrases from the text that appear to capture key thoughts or concepts.
(b) Provide an initial labelling for the highlighted words and phrases. It is
possible that emerging labels can describe more than one highlighted
word or phrase.
(c) Examine the relationships between the labels. Suppose, for example, that two labels emerged: wait for state A and wait for state B. Obviously, these labels are related, but what exactly is their relationship? For example, will the program enter wait for state B only after leaving wait for state A?
(d) Based on the labels and their relationships, formulate the specific
learning objectives.
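Steps (a)-(d) above can be sketched programmatically. The following fragment is purely illustrative: the phrases, label names, and objective wording are hypothetical, not taken from the paper's case study.

```python
# Hypothetical sketch of steps (a)-(d): highlighted phrases from the input
# content are grouped under labels, and each label yields one specific
# learning objective. All phrases and labels are illustrative only.
highlights = [
    ("wait for state A", "wait states"),
    ("wait for state B", "wait states"),
    ("enter wait for state B after state A", "state transitions"),
]

# Step (b): group the highlighted phrases under their labels.
labels = {}
for phrase, label in highlights:
    labels.setdefault(label, []).append(phrase)

# Step (d): formulate one specific learning objective per label.
objectives = [f"Explain the concept of {label}" for label in labels]
```

Grouping by label (step b) and deriving one objective per label (step d) keeps each specific objective traceable back to the highlighted text it came from.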
(3) Construct the video so that a specific set of frames corresponds to each specific learning objective. Include an introductory section to inform the student about the general learning outcomes.
(4) Evaluation. Use tests (Cohen, Manion, & Morrison, 2003) to evaluate the produced video. Note that the tests should include one or more questions for each specific learning objective. Also, administer the test to the audience as a pre-test, and then let the audience watch the MVD video. Subsequently, the audience once more takes the original test. Then compare the results between the pre-test and the post-test. The improvement between the pre-test and the post-test can be attributed to the video. The above design is compatible with the one-group pretest-posttest design.
(5) Reformation. Reform the produced video by analysing the results of the tests. Since there is a correspondence between the video frames and the questions, we can identify the specific parts of the produced video that should be reformed.
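The correspondence between learning objectives, video frames, and test questions that the reformation step relies on can be sketched as follows. All names, frame ranges, gains, and the threshold below are hypothetical assumptions for illustration only.

```python
# Hypothetical sketch of the MVD reformation step: each specific learning
# objective maps to a frame range of the video and to its test questions.
objectives = {
    "objective 1": {"frames": (0, 420), "questions": ["q1", "q2"]},
    "objective 2": {"frames": (421, 900), "questions": ["q3"]},
}

# Knowledge gain per question: post-test minus pre-test score (hypothetical).
gain = {"q1": 35.0, "q2": 30.0, "q3": 5.0}

def frames_to_reform(objectives, gain, threshold=15.0):
    """Return the frame ranges whose average knowledge gain is below threshold."""
    weak = []
    for obj in objectives.values():
        avg = sum(gain[q] for q in obj["questions"]) / len(obj["questions"])
        if avg < threshold:
            weak.append(obj["frames"])
    return weak

print(frames_to_reform(objectives, gain))  # prints [(421, 900)]
```

In this sketch, the frames mapped to an objective whose questions show a low average gain are flagged as candidates for reformation.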
Design guidelines
The design guidelines that we propose relate to the achievement of learning objectives. These guidelines are the following:
(1) Be brief yet inclusive. Take into consideration that a long video may tire the
learner (Liao, 2012; Meseguer et al., 2017). Note that this guideline is
compatible with the Coherence Principle (Clark & Mayer, 2011; Brame, 2016).
(2) Use a conversational style. Known as the personalization principle, the use of a conversational rather than a formal style improves learning (Mayer, 2008).
(3) Pay attention to the aesthetics of the video. Note that EURICON (2008) suggests criteria concerning the aesthetic conception of educational material.
(4) Control the rate of speech; narration that is too fast or too slow is considered to hinder learning.
(5) Define the target audience, deciding exactly which people are supposed to learn from the video.
(6) Don’t overload video frames with text. Use text only when it is essential, e.g. for
titles. Keep in mind that video should be based mainly on audio and visual
messages and only secondarily on text. As Shepard and Cooper (1982) mentioned, keep images in balance with text, because visual cues create imagery critical to memory
processes.
(7) Use narration. As the modality principle declares, verbal speech is more efficient than on-screen text.
(8) Provide images that are uncluttered and simple (Denning, 1992).
(9) Synchronize audio and visual messages in compliance with the Temporal Contiguity Principle (Mayer, 2008). Take care so that the visual and audio parts correspond; the visual part may precede the audio part for a short time. This schema stimulates the learner's curiosity, reinforces discovery learning (Bruner, 1963) and helps the learner to form his/her own interpretations (Leidner & Jarvenpaa, 1995).
(10) Support variety. Combine moving images, slow-motion or real-time motion, and visual and sound effects, as Koumi (2006) suggests. Use humour appropriate to the learner's age (Denning, 1992). Such additional elements impart extra value to the video used in the learning process.
(11) Control the pace. Note that too fast a pace may distract the learner from understanding a message, whereas too slow a pace may lose the learner's interest.
(12) Follow the signalling principle (Mayer, 2001). According to the cognitive theory of multimedia learning, signalling can guide the learner's attention toward the essential material. In this way, the recall of existing knowledge will be facilitated and the potential of learning will increase (Kennedy, Petrovic, & Keppell, 1998). In addition, learners' anxiety will be reduced and they will be able to judge the important goals of instruction (Overbaugh, 1994).
(14) Organize the video in sections, as information provided in logical chunks helps the viewer to mentally organize the topic (Denning, 1992). Emphasize the main learning points at the end of each section and give clues about what comes next (Koumi, 2006; Peck & Hannafin, 1988). Finally, summarize the key features at the end of the video, helping students to stand back from the story (Denning, 1992).
(15) Use captions to make the video available in other languages and accessible to people with hearing impairments.
Experimentation
The purpose of this study is to examine whether the proposed framework leads to more effective educational videos. More precisely, we compare the performance of students who watched a video that was produced with MVD with the performance of students who watched a video that was produced based on the recommendations of Brame (2016).
The experiment was designed in such a way as to resolve possible problems due
to internal validity threats, as defined by Campbell, Stanley, and Gage (1963). History, maturation, and experimental mortality threats are eliminated because of the short execution period of the experiment. The pre-test and post-test have the same set of questions, which addresses the instrumentation threat.
The idea of a true experiment was rejected because of the difficulty of randomly choosing the group of students participating in the study (Ross & Morrison, 2003). Instead, a quasi-experimental design was adopted, which introduces a control group and ensures internal validity (Cohen et al., 2003).
Two students were asked to produce two videos with common input content related to vehicle monitoring software. Note that both students had no prior experience in video development. The first video, which we call Balanced Video (BV), was produced according to the proposed methodology and design guidelines. The second video (https://www.youtube.com/watch?v=BHXiukvwgnA), which we call Simple Video (SV), was produced according to the recommendations of Brame (2016).
The experiment had two equal (in terms of knowledge) groups of participants, each of which consisted of 40 students. One group watched the SV and the other watched the BV. Students' performance was evaluated according to their grades on the same pre-test (before watching the video) and post-test (after watching the video). The instrument for mining the results was administered as a questionnaire (available upon request) in hard copy, and it consisted of 14 questions extracted from the input content.
The measuring tools of this study included the pre-test and post-test questionnaires.
The pre-test aimed to evaluate the students' prior knowledge in the produced video
domain. It consisted of fourteen fill-in-the-blank items. The post-test had exactly the
same items in order to check students' knowledge of the subject equivalently. The
participants were students in the 2nd year of the Computer and Informatics Engineering
Department at the Eastern Macedonia and Thrace Institute of Technology. Initially, all
students answered a pre-test in order to check their knowledge of the subject. Students with very low or very high grades were excluded from the experiment, and 80 students were selected to participate. The selected students were separated into two groups of 40 students each. A t-test on the students' pre-test grades
took place in order to examine whether the knowledge of the two groups could be considered equivalent. According to the results of this statistical analysis, no significant differences were found in the prior knowledge of the two groups, since the p-value equals 0.858 > 0.05. Therefore, both groups were considered equivalent with regard to knowledge.
After that, the experiment date was set. In a one-hour session, the first group
watched the SV whereas the second group watched the BV. After that, both groups were
asked to answer the post-test. Finally, a discussion took place between the researchers and the students.
Results
The target sample was composed of 80 students with a mean age of 20.8 years, who voluntarily agreed to participate in this experiment. The students' post-test grades were collected. Table 1 displays the grades of the students, comparing the average post-test grade of those who watched the BV with the average post-test grade of those who watched the SV. A t-test on the post-test grades with an alpha level of 0.05 was performed to compare the two groups of students, in order to check the research question. Before employing the t-test, Levene's homogeneity test was conducted. The result showed that the F value was equal to 0.09 (p > .05). This indicated that the homogeneity test did not reach statistical significance; therefore, the t-test could be applied. The t-test shows that the grades of those who watched the BV were significantly higher than the grades of the students who watched the SV. The difference is statistically significant at the 95% confidence level.
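The statistical comparison reported above can be sketched as follows. The grades are hypothetical, not the study's data, and the pooled-variance t statistic is computed directly so that the sketch stays self-contained; in practice, library routines such as SciPy's `stats.levene` and `stats.ttest_ind` cover both steps.

```python
import math

# Hypothetical post-test grades for two groups of ten (not the study's data).
bvg = [85, 78, 90, 72, 88, 80, 84, 76, 92, 81]  # watched the Balanced Video
svg = [70, 65, 74, 60, 72, 68, 66, 71, 63, 69]  # watched the Simple Video

def pooled_t(a, b):
    """Independent-samples t statistic with pooled variance."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variance of a
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)  # sample variance of b
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

t = pooled_t(bvg, svg)
# For df = 18 and alpha = 0.05 (two-tailed), the critical t value is ~2.101;
# a |t| above it means the difference between the groups is significant.
significant = abs(t) > 2.101
```

With these illustrative grades the difference comes out significant; the study's actual conclusion rests on its own data.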
[Figure 3. Pre-test and post-test performance for each question (q1-q14) and their mean for the SVG, on a scale from 0 to 100.]
Figure 2 presents pre-test and post-test performance for each question (q1-q14)
for the BVG using a scale from 0 to 100. In addition, it presents the knowledge gained
as the difference between post-test and pre-test performance. Finally, figure 2 includes
the mean of the pre-test performance, the post-test performance as well as the mean of
the knowledge gained. Similarly, figure 3 presents students' results for the SVG.
Comparing the two figures, we can see that the pre-test performance is similar for both groups, whereas, in the post-test, the BVG dominates the SVG by an average of nearly 15 units. We also note that the knowledge gained in question 3 of figure 2 is quite low. However, this is due to the high pre-test score, which leaves a small window for improvement. In contrast, in question 9 both the knowledge gained and the pre-test knowledge are quite low. This is a sign that the video frames corresponding to question 9 should be reformed.
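The per-question reading above can be sketched numerically. The scores below are hypothetical, chosen only to mirror the q3/q9 situation; the normalized gain is an assumption of this sketch, not a measure used in the study.

```python
# Hypothetical pre-test and post-test percentages mirroring questions 3 and 9.
pre = {"q3": 85.0, "q9": 20.0}
post = {"q3": 92.0, "q9": 30.0}

# Knowledge gained: post-test minus pre-test score.
gain = {q: post[q] - pre[q] for q in pre}

# Normalized gain divides by the room left for improvement, so a high
# pre-test score (as in q3) no longer masks effective video frames.
norm_gain = {q: gain[q] / (100.0 - pre[q]) for q in pre}
```

Under this normalization, q3's small absolute gain turns out to be relatively high, whereas q9 stays low, pointing to the corresponding frames as candidates for reformation.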
As we have already presented, the knowledge gained by students who watched the BV
is higher than the knowledge gained by students who watched the SV. These results
confirm our preliminary evaluation (Author et al. 2016) which took place in October
2014, and show that the new balanced video influences students' knowledge acquisition. Although both videos were created by developers with no prior experience, we assume that the following factors were crucial in the aforementioned result.
(1) The BV was developed following a particular methodology plus a set of design guidelines.
(2) It is evident that the reformation of the video that is provided in MVD is very helpful to the improvement of video quality. This view was supported by the results of the tests.
(3) MVD defines general and specific learning objectives, which are then taken into consideration throughout video construction.
(4) The balance between the input content and the produced video was evaluated to be beneficial.
In summary, the BV was accurate in its objectives and in direct connection with them and with the original text, while it achieved a more qualitative result than the SV. In this context, it was easier for students to watch, follow, and thus learn from it. Therefore, it seems that the proposed framework can help video producers build more effective educational videos. However, the following limitations of this study should be noted:
(1) The number of participants in the experiment was relatively low (80). Therefore, it would be advisable for an experiment with more users to be conducted in the future.
(2) The experiment evaluated the subjects' responses comparing one BV versus one SV. Both videos had been constructed by the same group of producers. The results might differ for videos built by other creators.
(3) Since all the experiment subjects were students from a specific university, their performance may not be representative of other populations, and further studies are needed to validate the proposed framework.
REFERENCES
Author 1, Author 2 & Author 3, (2016). Paper details are not displayed according to
journal submission rules.
Ali, N. M., Lee, H., & Smeaton, A. F. (2011). Use of content analysis tools for visual
interaction design. In Visual Informatics: Sustaining Research and
Innovations (pp. 74-84). Springer Berlin Heidelberg.
Arreola, R. A. (1998). Writing Learning Objectives. A teaching resource document
from the office of the vice chancellor for planning and academic support.
Retrieved from
http://nexus.hs-bremerhaven.de/Library.nsf/0946dbe6a3c341e8c12570860044165f/
3582b289612f6232c12573a2005aa4d8/$FILE/Learning_Objectives.pdf
Beale, R., & Sharples, M. (2002). Design guide for developers of educational
software. British Educational Communications and Technology Agency (Becta).
Brame, C. (2016). Effective educational videos. CFT Teaching Guides, Center for
Teaching, Vanderbilt University. Retrieved from
https://cft.vanderbilt.edu/guides-sub-pages/effective-educational-videos/
Bravo, E., Amante, B., Simo, P., Enache, M., & Fernandez, V. (2011). Video as a new
teaching tool to increase student motivation. In Proceedings of the 2011 IEEE Global Engineering Education Conference (EDUCON) (pp. 638-642). IEEE.
Bruner, J. S. (1963). The process of education. New York: Vintage Books.
Campbell, D. T., Stanley, J. C., & Gage, N. L. (1963). Experimental and quasi-
experimental designs for research. Boston: Houghton Mifflin.
Clark, R. C., & Mayer, R. E. (2011). E-Learning and the Science of Instruction: Proven
Guidelines for Consumers and Designers of Multimedia Learning (3rd ed.). San
Francisco, CA: John Wiley & Sons.
Cohen, L., Manion, L., & Morrison, K. (2003). Research Methods in Education (5th ed.).
London, UK: Routledge Falmer.
Cruse, E. (2007). Using educational video in the classroom: Theory, research and
practice. Wynnewood, PA: Library Video Company. Retrieved from
http://www.libraryvideo.com/articles/article26.asp
Denning, D. (1992). Video in theory and practice: Issues for classroom use and teacher
video evaluation. Retrieved from
http://www.ebiomedia.com/downloads/VidPM.pdf
Dick, W., Carey, L., & Carey, J. (2005). The Systematic Design of Instruction (6th ed.).
Allyn & Bacon.
EURICON (2008). Evaluation and utilization criteria of educational material. OEPEK,
Greece, Retrieved from http://repository.edulll.gr/edulll/retrieve/3460/1024.pdf (in
Greek)
Forest, E. (2014). The ADDIE Model: Instructional Design, Educational Technology.
Retrieved from http://educationaltechnology.net/the-addie-model-instructional-
design/
Guo, P. J., Kim, J., & Rubin, R. (2014). How video production affects student
engagement: An empirical study of MOOC videos. In Proceedings of the ACM
Conference on Learning at Scale (L@S 2014). Retrieved from
http://groups.csail.mit.edu/uid/other-pubs/las2014-pguo-engagement.pdf
Harrison, D. (2015). Assessing Experiences with Online Educational Videos:
Converting Multiple Constructed Responses to Quantifiable Data. The
International Review of Research in Open and Distance Learning, 16(1), 168-
192. Retrieved from
http://www.irrodl.org/index.php/irrodl/article/view/1998/3205
Hsieh, H.-F., & Shannon, S. E. (2005). Three approaches to qualitative content analysis.
Qualitative Health Research, 15(9), 1277-1288. Retrieved from
http://www.sagepub.com/millsandbirks/study/Journal%20Articles/Qual%20Health%20Res-
2005-Hsieh-1277-88.pdf
Kennedy, D. (2006). Writing and using learning outcomes: a practical guide. Cork:
University College Cork.
Kennedy, G., Petrovic, T., & Keppell, M. (1998). The development of multimedia
evaluation criteria and a program of evaluation for computer aided learning.
ASCILITE, 98, 407.
Koumi, J. (2006). Designing Video and Multimedia for Open and Flexible Learning.
London: Routledge.
Lee, S. H., & Boling, E. (1999). Screen design guidelines for motivation in interactive
multimedia instruction: A survey and framework for designers. Educational
technology, 39(3), 19-26.
Leidner, D. E., & Jarvenpaa, S. L. (1995). The use of information technology to enhance
management school education: A theoretical view. MIS quarterly, 19(3), 265-
291.
Liao, W. C. (2012). Using short videos in teaching a social science subject: Values and
challenges. Journal of the NUS Teaching Academy, 2, 42-55.
Mayer, R. (2001). Multimedia learning. Cambridge: Cambridge University Press.
Mayer, R. (2008). Applying the science of learning: Evidence-based principles for the
design of multimedia instruction. Cognition and Instruction, 19, 177-213.
Meseguer-Martinez, A., Ros-Galvez, A., & Rosa-Garcia, A. (2017). Satisfaction with
online teaching videos: A quantitative approach. Innovations in Education and
Teaching International, 54(1), 62-67.
Munassar, N. M. A., & Govardhan, A. (2010). A comparison between five models of
software engineering. IJCSI, 5, 95-101.
Odle, T., & Mayer, R. (2009). Experimental Research. Education.com. Retrieved from
http://www.education.com/reference/article/experimental-research
Overbaugh, R. C. (1994). Research-based guidelines for computer-based instruction
development. Journal of Research on Computing in Education, 27(1), 29-47.
Peck, K. L., & Hannafin, M. J. (1988). The design, development & evaluation of
instructional software. Indianapolis, IN: Macmillan Publishing Co. Inc.
Plass, J. L., Homer, B. D., & Hayward, E. O. (2009). Design factors for educationally
effective animations and simulations. Journal of Computing in Higher
Education, 21(1), 31-61.
Ross, S. M., & Morrison, G. R. (2003). Experimental Research Methods. In D. Jonassen
(ed.), Handbook of research of educational communications and Technology
(2nd ed., pp. 1021-1043). Mahwah, NJ: Lawrence Erlbaum Associates.
Schwartz, D. L., & Hartman, K. (2007). It is not television anymore: Designing digital
video for learning and assessment. Video research in the learning sciences, 335-
348.
Shepard, R. N., & Cooper. L. A. (1982). Mental images and their transformations.
Cambridge, MA: MIT Press.
Simo, P., Fernandez, V., Algaba, I., Salan, N., Enache, M., Albareda-Sambola, M.,
& Rajadell, M. (2010). Video stream and teaching channels: quantitative analysis
of the use of low-cost educational videos on the web. Procedia-Social and
Behavioral Sciences, 2(2), 2937-2941.
Smaldino, S., Lowther, D., & Russel, J. (2012). Instructional Technology and Media for
Learning (10th ed.). Pearson Education.
Whatley, J., & Ahmad, A. (2007). Using video to record summary lectures to aid
students' revision. Interdisciplinary Journal of E-Learning and Learning
Objects, 3(1), 185-196.
Zavlanos, M. (1998). Didactics (2nd ed.). “Hellin” Publications.