International Journal of Education and Pedagogy (IJEAP)
eISSN: 2682-8464 | Vol. 2, No. 3, 112-121, 2020
Journal website: http://myjms.moe.gov.my/index.php/ijeap

DEVELOPMENT AND VALIDATION OF AN AUTHENTIC-BASED COMPETENCY ASSESSMENT RUBRIC FOR SECONDARY SCHOOL MULTIMEDIA PRODUCTION SUBJECT

Nithia K.1, Farrah Dina Yusop2* and Chua Yan Piaw3

1,2,3 Faculty of Education, University of Malaya, Kuala Lumpur, MALAYSIA

*Corresponding author: farah@um.edu.my

Article Information:

Article history:
Received date: 13 August 2020
Revised date: 3 September 2020
Accepted date: 17 September 2020
Published date: 26 September 2020

To cite this document:
Nithia, K., Yusop, F., & Chua, Y. (2020). Development and validation of an authentic-based competency assessment rubric for secondary school Multimedia Production subject. International Journal of Education and Pedagogy, 2(3), 112-121.

Abstract: The use of rubrics to support the grading of students' competencies has long been applauded in vocational education. However, the current assessment method in secondary school vocational education appears inauthentic, individualistic and outdated. The assessment criteria take the form of a checklist and are far from meeting the industry's needs. Students are assessed on their ability to perform a certain set of skills, while their affective skills, such as communication, presentation and teamwork, are disregarded. To address these shortcomings, an authentic-based competency assessment rubric for a vocational subject in secondary school was developed. The rubric's instruments and criteria were derived from an extensive literature review on the evaluation of students' skills in vocational education settings. Findings from the review led to the selection of operational terms and the formulation of the rubric's instruments. The rubric is expected to measure students' competencies or skills in four domains: Teamwork, Skills, Knowledge and Presentation. Each domain contains five assessment items with five rating levels. Three stages were involved in the development and validation of the rubric: first, the development of the instruments; second, content validation by subject matter experts; and finally, validation of the subject matter experts' scores using Lawshe's (1975) method. The Content Validity Ratio (CVR) and Content Validity Index (CVI) were used to validate the instrument and measure its reliability. The overall analyses show that the self-developed authentic-based competency assessment instrument is reliable. The findings are important for researchers, teachers, assessors and policy makers, especially those considering rubrics as an option for grading competency assessment in secondary school vocational education. There are also possibilities for extending the assessment criteria to meet current industrial needs and global standards.

Keywords: vocational education, competency assessment, rubric.

1. Introduction
Interest in competency assessment in vocational education and training, and in its role in meeting employers' and learners' needs and expectations, has been growing since the mid-1980s (Clayton, Blom & Bateman, 2003). It is therefore important for the education sector to supply a quality workforce that meets employers' demands and expectations, and to integrate employability skills into the vocational curriculum in secondary technical schools. In 2002, the Malaysian Ministry of Education introduced 22 vocational subjects in selected secondary schools in Malaysia. Multimedia Production (MP) is one of these subjects, based on learning ICT technologies and multimedia authoring software to develop multimedia products. Students' achievements are based on their assessed skills and are measured academically in national examinations. However, the current assessment method in the subject focuses more on skills than on competencies and neglects 21st-century learning skills such as communication, problem solving and collaboration. A workforce equipped with technical skills must also possess well-acquired soft skills, as "competencies such as creative thinking, problem-solving, and analytical skills are much needed by the employer in the industry to meet the challenges faced by businesses" (Ibrahim, Noorazlina, & Rahim, 2020, p. 59). Addressing these setbacks, this study proposes the development of an authentic-based competency assessment rubric to complement the current assessment method through the integration of group-based and authentic tasks.

2. Literature Review
2.1 Authentic-based Competency Assessment in Vocational Subject
Authentic learning and assessment are two components that cannot be separated in achieving educational goals (Karim, Abduh, Manda, & Yunus, 2018, p. 496).

Authentic assessment requires students to practice similar skills, or a combination of attitudes, skills and knowledge, in an environment that resembles the actual job environment (Ariev, 2005; Gulikers, Bastiaens, & Kirschner, 2004). In a vocational subject, the authentic task can also resemble a real workplace problem. The question should be open-ended, intellectually challenging and easily accessible to learners (Chin & Chia, 2004). A given authentic problem should neither pre-determine the project's outcome nor be too broad, as either would demotivate students from engaging in problem solving (Cook, Buck & Rogers, 2012). Project-based learning (PBL) is the main learning feature in technical and vocational education, and one of its key characteristics for guiding students is the "driving question" (Paris & Turner, 1994). PBL courses are multi-disciplinary and require complex tasks, based on challenging questions or problems, that involve students in planning, problem solving, decision-making and inquiry. As part of the project-based learning process, there is a need to build scientific understanding of the concepts and principles of the studied subjects or areas (Fallik, Eylon, & Rosenfeld, 2008). Therefore, a holistic assessment mechanism is needed to address and evaluate students' competency in an authentic environment.

2.2 Formulation of an Authentic-based Competency Assessment Rubric

Criterion-based assessment is formally used to measure students' ability to interpret, analyse, assess and evaluate problems with valid opinions. A few alternative assessment methods support PBL goals, such as portfolio assessment or peer assessment (Markkanen et al., 2001; Yuvienco, 2010), including the use of analytical rubrics (Trivedi et al., 2003). An assessment that uses a rubric involves meta-cognitive stages that promote learning (Jonsson & Svingby, 2007). Metacognitive processes involve students' learning strategies or cognitive engagement, such as self-regulated learning and teamwork. In the current MP competency assessment method, students are individually assessed by the teacher and given scores, in checklist form, of either "Competent" or "Incompetent". They are assessed on their cognitive ability to perform certain sets of skills and on their proficiency in using software, but not on their affective domain. This contradicts the requirements of project-based learning, which involves authentic tasks and several sets of skills, including teamwork. Based on these considerations, an assessment rubric that measures students' performance and outcomes during project-based activity was developed.

A criterion-referenced or rubric-based assessment is used to assess students' performance against a set of criteria (Le Brun & Johnstone, 1994). A rubric was chosen as one of the assessments because it describes the quality of expectations, challenges students to do their best in the task and, at the same time, lets them know what is expected in a long-term and complex project (Whittaker, Salend & Duhaney, 2001). Jonsson and Svingby (2007, p. 141) make several assumptions regarding the use of rubrics as part of authentic assessment:
a) the reliability of a performance score can be improved with the use of a rubric that is "analytic, topic-specific, and complemented with exemplars and/or rater training";
b) "rubrics do not facilitate valid judgment of performance assessments per se";
c) "rubrics seem to have the potential of promoting learning and/or improve instruction".

These conclusions show that a rubric must be developed to be systematic, subject-specific and equipped with a rating method. Even though a rubric does not in itself guarantee a valid judgment of performance assessment, its validity can be established through validation by expert raters. A rubric is, however, an effective assessment method for encouraging effective learning and creating meaningful instruction, and most educators and researchers accept that rubrics improve assessment value (Jonsson & Svingby, 2007). This assessment method was chosen to assess secondary school students' skills, knowledge and attitudes, and to ensure that the quality of the curriculum design, delivery and assessment is properly aligned at the modular level. It also reports students' performance and their achievement of learning outcomes throughout their study period. Rubric assessment is seen by today's researchers as an emerging alternative assessment method. The rubric provides a score on every dimension, and the dimension scores are summed to a total.

114
Copyright © 2020 ACADEMIA INDUSTRY NETWORKS. All rights reserved

Electronic copy available at: https://ssrn.com/abstract=3727521


International Journal of Education and Pedagogy
eISSN: 2682-8464 | Vol. 2, No. 3, 112-121, 2020
http://myjms.moe.gov.my/index.php/ijeap

Therefore, this gave an insight to develop an authentic-based competency rubric to assess students' performance based on three learning outcome domains: Attitude (A), Skills (S) and Knowledge (K). The three domains are based on Lorin Anderson's (2001) suggestions on the design of learning domains (Kasilingam, Ramalingam & Chinnavan, 2014):
• the cognitive domain, which refers to mental skills or knowledge (K);
• the affective domain, which refers to growth in emotions or attitude (A); and
• the psychomotor domain, also known as physical skills (S).

The criteria for the skills and knowledge domains were designed based on Carver, Lehrer, Connell and Erickson's (1992) categories of design skills, observed when students work on a series of multimedia products (Winter, 2010, p. 5):
• allocating resources and time to different segments of a project;
• searching for information;
• analysing and interpreting information;
• developing representations of information;
• developing a structure for a presentation;
• catching and maintaining audience interest.

The above criteria show the processes involved in multimedia product design and development. Therefore, the research's authentic-based competency assessment rubric was developed based on the current competency assessment capacity, with the inclusion of four domains: teamwork, skills, knowledge and presentation. Two additional domains, teamwork and presentation, were included to address students' efforts in communicating and collaborating with their team members and in presenting their ideas to their peers in a respectful manner. The teamwork domain assesses students' interaction with their peers in the course of completing the given projects. The presentation domain assesses students' ability to pitch and promote their end-product in front of their clients and fellow students.

3. Problem Statement
It is important to keep various types of documents, as this shows the improvement of a learning object and of learning standards, which advance swiftly and reflect present thinking. It is therefore important to keep updating teaching and learning documents, in the form of the syllabus and its content, to suit 21st-century learning. Based on the MP curriculum document analysis, the current assessment module includes neither authentic nor group-based tasks, and it lacks project-based learning criteria. There is thus a need to integrate these elements into the current assessment method, complemented by the use of a rubric. This study therefore developed an authentic-based competency assessment rubric to complement the current skill-based assessment method for the Multimedia Production subject in secondary school. The study proposes the stages involved in the rubric's development and its validation by subject matter experts using the Lawshe (1975) method.


4. Development of the Authentic-based Competency Assessment Rubric

Three stages were involved in the development. First, the formulation of the rubric's items and criteria; Table 1 shows the instrument's structure, which consists of four domains (Teamwork, Knowledge, Skills and Presentation) and their items. Second, the items were translated using the back-translation method by two subject matter experts. Finally, the developed rubric was validated by six subject matter experts. The self-developed rubric instrument, consisting of four domains with five assessment criteria each, was designed to assess students' competency in given group-based authentic tasks. The authentic tasks themselves were validated by three subject matter experts in the subject.

4.1 First Stage: Translation of the Rubric Instruments

The instruments were translated by two experts using the back-translation method (Son, 2018): from English to Bahasa Melayu, and then from Bahasa Melayu back to English (Habibi, Yusop, & Razak, 2020). The experts are two secondary school English teachers, each with more than 20 years of teaching experience.
Table 1: Assessment Domains, Performance Criteria and Number of Items

Teamwork (5 items). Assesses students' observed attitudes and behaviour with their peers during the project. The student is able to:
T1: Work with other team members cooperatively.
T2: Interact with other team members with respect.
T3: Value other team members' suggestions.
T4: Submit the project within the timeline.
T5: Motivate team members by valuing all members' ideas and contributions.

Skills (5 items). Assesses students' skills in using software and other related resources that lead to the development of the final product. The student is able to:
S1: Analyze the needs of the problem and relate them to the learnt skills.
S2: Suggest possible solution(s) to the problem creatively, based on the learnt skills.
S3: Select and use appropriate software tools in designing and creating the product.
S4: Use different methods during the design and development of the multimedia product, such as storyboarding, videography, photography, audio production and designing.
S5: Actively use an online site for communicating and sharing information among group members.

Knowledge (5 items). Assesses students' knowledge based on the resources and research that led to the completion of the project. The student is able to:
K1: Apply multimedia principles to the design of the product.
K2: Show at least two initial solutions that lead to the final design of the product.
K3: Justify at least two authoring software packages used in the project.
K4: Gather information regarding the given scenario from at least three resources.
K5: Explain the reasoning behind the proposed solution, for example the choice of colors, type of texts, graphic design, and choice of background music and video.

Presentation (5 items). Assesses students' end-product pitching and presentation styles. The student is able to:
P1: Use different types of presentation methods to present ideas and the product.
P2: Present the end-product with clarity and confidence in front of peers.
P3: Efficiently document information, resources and storyboards during the design and development of the end-product, using any method of electronic presentation.
P4: Relate the final product to the client's needs.
P5: Show a sense of belonging and responsibility towards the client's needs and the product.


4.2 Second Stage: Content Validation of the Rubric

4.2.1 Data Collection
Data collection was carried out after approval from Malaysia's Ministry of Education. An online content evaluation site was developed for the panel to review and evaluate the instrument, and the link to the site was sent to them by email. The validation method involves experts judging and rating each item in one of three categories: "essential", "useful, but not essential" or "not necessary" (Lawshe, 1975; Ayre & Scally, 2014). The experts were also encouraged to suggest any needed amendments or corrections to the items. The data were compiled in an electronic spreadsheet for content validation analysis. Initially, twelve experts were chosen based on their expertise in teaching and evaluating students' competency assessment in Malaysia; they are secondary school teachers, polytechnic lecturers and a senior university lecturer. However, only six of them submitted their responses. Polit and Beck (2006) suggested that five to ten expert raters are adequate to evaluate items, which is relevant to this study's population. Table 2 lists the six subject matter experts who participated in the content evaluation procedure. They are from schools, polytechnics and a university, and all have experience in teaching and assessing the same subject area.

Table 2: Content Evaluation Panel

Gender | Institution/School | Designation | Field of Expertise | Years of Experience
Female | Public University | Lecturer | Instructional Design and Technology | 10
Male | Government Polytechnic | Lecturer | Electrical, Electronic & Computer Engineering | 10
Female | Government Polytechnic | Lecturer | Computer Engineering / Science, Math, Computer | 7
Female | Government Polytechnic | Lecturer | Civil Engineering & Technical and Vocational Education | 10
Male | Primary School / Pusat Permata Pintar Negara | Teacher / PhD Student / Key Instructor for Multimedia Creative (2015) | Computer Education | 19
Female | IPG Kampus Pendidikan Teknik | Lecturer | Computer Science | 12

4.2.2 Data Analysis and Findings

(a) Content Validation: Lawshe's Method (1975)
Lawshe's method is one way to establish the content validity of an instrument through a panel of subject matter experts. The method was introduced in 1975 and is widely used in many research areas, including education. The Content Validity Ratio (CVR) and the Content Validity Index (CVI) were used to calculate the items' validity and relevancy. The CVR is used in this research as a statistical technique to determine the validity of individual items in the instrument as rated by a panel of experts (Gilbert & Prion, 2016). The CVI, in turn, indicates the validity of the overall instrument, based on the mean of all the item CVRs. Both CVR and CVI are quantitative measures that validate the research's evaluation items and the overall rubric. An item is considered crucial when the panel of experts' agreement yields a CVR value greater than zero (0). In this research, the rubric's content validation was carried out in two stages:
a) First stage: calculating the CVR for each item, to eliminate items with a CVR value below 0; and
b) Second stage: calculating the CVI to determine the validity of the whole set of items after the first-stage deletions.

First Stage: Calculating the Content Validity Ratio (CVR) for Each Item
Wilson, Pan and Schumsky (2012) state that the CVR is an internationally recognized method for establishing the content validity of items or instruments, determining whether each item is accepted or rejected. A rubric item is retained when the experts deem it "essential" and rejected when it is deemed "not necessary". The CVR formula is CVR = (Ne - N/2)/(N/2) (Lawshe, 1975), where Ne is the number of experts indicating "essential" and N is the total number of experts. The CVR value for each item was calculated in a single column of an electronic spreadsheet. Table 3 shows the CVR value for each item, calculated from the total number of experts who selected "essential" (Ne) for that item.
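For readers who wish to reproduce the computation, the formula can be expressed in a few lines of Python. This is an illustrative sketch, not part of the original study: the function name cvr is our own, and the example counts are taken from Table 3 below.

```python
def cvr(n_essential: int, n_experts: int) -> float:
    """Lawshe's (1975) Content Validity Ratio: CVR = (Ne - N/2) / (N/2)."""
    half = n_experts / 2
    return (n_essential - half) / half

# Worked examples for the study's panel of N = 6 experts (see Table 3):
print(round(cvr(6, 6), 2))  # e.g. item T1: all six chose "essential" -> 1.0
print(round(cvr(4, 6), 2))  # e.g. item T3: four of six chose "essential" -> 0.33
print(round(cvr(3, 6), 2))  # e.g. item T5: three of six chose "essential" -> 0.0
```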

Table 3: CVR Calculation for Each Item and CVI Value (n = 20 items)

Item | Experts selecting "Essential" (Ne) | CVR
T1 | 6 | 1.00
T2 | 5 | 0.67
T3 | 4 | 0.33
T4 | 6 | 1.00
T5 | 3 | 0.00
S1 | 6 | 1.00
S2 | 6 | 1.00
S3 | 6 | 1.00
S4 | 4 | 0.33
S5 | 6 | 1.00
K1 | 5 | 0.67
K2 | 5 | 0.67
K3 | 3 | 0.00
K4 | 4 | 0.33
K5 | 4 | 0.33
P1 | 5 | 0.67
P2 | 6 | 1.00
P3 | 6 | 1.00
P4 | 6 | 1.00
P5 | 6 | 1.00

CVR (critical) for a panel size (N) of 6 is 1.00. CVI (the mean of all item CVR values) = 0.70.

Content evaluation agreement by six experts requires a minimum critical CVR value of 0.99 to 1.00 to satisfy the five percent significance level (Ayre & Scally, 2014; Lawshe, 1975). Items with a CVR value below 0.99 were therefore considered for deletion. Table 4 shows the ten items with CVR values lower than 0.99.
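The screening step amounts to filtering each item's CVR against the critical value for the panel size. The sketch below is illustrative only: the dictionary of "essential" counts is transcribed from Table 3, and the 0.99 cut-off follows Ayre and Scally (2014).

```python
# "Essential" counts per item, transcribed from Table 3 (panel of N = 6).
essential_counts = {
    "T1": 6, "T2": 5, "T3": 4, "T4": 6, "T5": 3,
    "S1": 6, "S2": 6, "S3": 6, "S4": 4, "S5": 6,
    "K1": 5, "K2": 5, "K3": 3, "K4": 4, "K5": 4,
    "P1": 5, "P2": 6, "P3": 6, "P4": 6, "P5": 6,
}
CVR_CRITICAL = 0.99  # critical CVR for a six-expert panel (Ayre & Scally, 2014)

# An item is flagged when CVR = (Ne - 3) / 3 falls below the critical value.
flagged = [item for item, ne in essential_counts.items()
           if (ne - 3) / 3 < CVR_CRITICAL]
print(flagged)  # the ten items of Table 4: T2, T3, T5, S4, K1-K5, P1
```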


Table 4: List of Items with CVR Value Less Than 0.99 (n = 10 items)

Item | Experts selecting "Essential" (Ne) | CVR
T2 | 5 | 0.67
T3 | 4 | 0.33
T5 | 3 | 0.00
S4 | 4 | 0.33
K1 | 5 | 0.67
K2 | 5 | 0.67
K3 | 3 | 0.00
K4 | 4 | 0.33
K5 | 4 | 0.33
P1 | 5 | 0.67

The deletion of items is crucial for the next step of instrument validation. However, all of the items were retained, because the overall mean CVR without deletion is 0.70. This value, which represents the CVI, is acceptable for the whole set of items (Tilden, Nelson & May, 1990). Had the items in Table 4 been deleted, the CVI would have increased to 1.00.

Second Stage: Calculating the Content Validity Index (CVI)

The Content Validity Index (CVI) is the mean of the CVR values over all items (Gilbert & Prion, 2016; Lawshe, 1975), computed after deleting items with a CVR value below 0.99. As shown in Table 3, the instrument's CVI without deletion is 0.70. This value is acceptable based on Tilden, Nelson and May's (1990) recommendation that the CVI reach 0.70. Table 5 shows the computation of the CVI for each panel member. The percentage of consensus among the experts on the relevancy of the instrument is an acceptable 85%.
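Both figures in this section can be recomputed from the tallies reported above. A minimal sketch, assuming the counts from Tables 3 and 5; the variable names are ours, not the authors'.

```python
# CVI as the mean of the per-item CVR values (Table 3 footer: CVI = 0.70).
ne_counts = [6, 5, 4, 6, 3, 6, 6, 6, 4, 6, 5, 5, 3, 4, 4, 5, 6, 6, 6, 6]
cvrs = [(ne - 3) / 3 for ne in ne_counts]   # Lawshe's CVR with N/2 = 3
print(round(sum(cvrs) / len(cvrs), 2))      # -> 0.7

# Per-expert CVI%, as in Table 5: items rated "essential" divided by 20 items.
essential_per_expert = [20, 14, 14, 15, 20, 19]   # experts One..Six
cvi_percent = [100 * k / 20 for k in essential_per_expert]
print(cvi_percent)                          # -> [100.0, 70.0, 70.0, 75.0, 100.0, 95.0]
print(sum(cvi_percent) / len(cvi_percent))  # overall consensus -> 85.0
```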

Table 5: Content Validity Index of the Overall Instrument

Expert | Essential | Useful but Not Essential | Not Relevant | Total Items | CVI% (Essential / Total Items)
One | 20 | 0 | 0 | 20 | 100
Two | 14 | 5 | 1 | 20 | 70
Three | 14 | 6 | 0 | 20 | 70
Four | 15 | 5 | 1 | 20 | 75
Five | 20 | 0 | 0 | 20 | 100
Six | 19 | 1 | 0 | 20 | 95

Overall mean CVI% = 85%. CVI = Content Validity Index.


5. Conclusion
The rubric was developed for a vocational subject to complement the current assessment method. However, it must be accompanied by authentic tasks and group-based activities to enable its use as an evaluation form. The minimum requirement from the content validation analysis has been met and should be satisfied before implementation. Pilot testing can be done to further establish the validity of the measuring instrument. Furthermore, the rubric development stages serve as a guideline for assessors and researchers, especially in the social sciences and in education, for developing and validating a self-made instrument for assessing students' product-based evidence. The study also provides a cross-reference for education researchers adapting content validation measures such as the CVR and CVI from Lawshe's method, which is widely used in medical practice for the development of skill-based assessment instruments. Researchers with the same interests or sample features may use or adapt this dataset for further comparison and for evaluating its feasibility in different learning environments.

6. Acknowledgement
We would like to extend our gratitude to all the subject matter experts, students, assessors and research supervisors who were involved in this research. This research was funded by a university grant.

References
Ariev, P.R. (2005). A theoretical model for the authentic assessment of teaching. Practical Assessment,
Research & Evaluation, 10(2), 1-11.
Ayre, C., & Scally, A. J. (2014). Critical values for Lawshe’s content validity ratio: Revisiting the original
methods of calculation. Measurement and Evaluation in Counseling and Development, 47(1), 79–86.
https://doi.org/10.1177/0748175613513808
Carver, S. M., Lehrer, R., Connell, T., & Erickson, J. (1992). Learning by hypermedia design: Issues of
assessment and implementation. Educational Psychologist, 27(3), 385-404.
Chin, C., & Chia, L.-G. (2004). Implementing Project Work in Biology through Problem-Based Learning.
Journal of Biological Education, 38(2), 69-75.
Clayton, B., Blom, K., Meyers, D., & Bateman, A. (2003). Assessing and certifying generic skills. National
Centre for Vocational Education Research, 252.
Cook, K., Buck, G., & Park Rogers, M. (2012). Preparing Biology Teachers to Teach Evolution in a Project-
Based Approach. Science Educator, 21(2), 18-30.
Fallik, O., Eylon, B. S., & Rosenfeld, S. (2008). Motivating teachers to enact free-choice project-based
learning in science and technology (PBLSAT): Effects of a professional development model. Journal
of Science Teacher Education, 19(6), 565-591.
Gilbert, G. E., & Prion, S. (2016). Making Sense of Methods and Measurement: Lawshe’s Content Validity
Index. Clinical Simulation in Nursing, 12(12), 530–531. https://doi.org/10.1016/j.ecns.2016.08.002
Gulikers, J. T., Bastiaens, T. J., & Kirschner, P. A. (2004). A five-dimensional framework for authentic assessment. Educational Technology Research and Development, 52(3), 67.
Habibi, A., Yusop, F. D., & Razak, R. A. (2020). The dataset for validation of factors affecting pre-service
teachers’ use of ICT during teaching practices: Indonesian context. Data in Brief, 28.
https://doi.org/10.1016/j.dib.2019.104875
Ibrahim, N. H., Noorazlina, I., & Rahim, A. (2020). Student perception toward MOOC in study skills. International Journal of Education and Pedagogy, 2(2), 58–65.
Jonsson, A., & Svingby, G. (2007). The use of scoring rubrics: Reliability, validity and educational consequences. Educational Research Review, 2(2), 130-144.


Karim, A. A., Abduh, A., Manda, D., & Yunus, M. (2018). The effectivity of authentic assessment based character education evaluation model. TEM Journal, 7(3), 495–500. https://doi.org/10.18421/TEM73-04
Krajcik, J., McNeill, K. L., & Reiser, B. J. (2008). Learning‐goals‐driven design model: Developing
curriculum materials that align with national standards and incorporate project‐based pedagogy.
Science Education, 92(1), 1-32.
Lawshe, C. H. (1975). A quantitative approach to content validity. Personnel Psychology, 28(4), 563–575.
https://doi.org/10.1111/j.1744-6570.1975.tb01393.x
Le Brun, M. J., & Johnstone, R. (1994). The quiet (r)evolution: Improving student learning in law. Law Book Co.
Markkanen, H., Ponta, D. & Donzellini, G. (2001). NetPro: methodologies and tools for project-based
learning in internet, in Montgomerie, C. and Viteli, J. (Eds.): Proceedings of World Conference on
Educational Multimedia, Hypermedia and Telecommunications 2001, pp.1230–1235, AACE,
Chesapeake, VA.
Mulder, M., Weigel, T., & Collins, K. (2007). The concept of competence in the development of vocational
education and training in selected EU member states: a critical analysis. Journal of Vocational
Education & Training, 59(1), 67-88.
Paris, S. G., & Turner, J. C. (1994). Situated motivation. Student motivation, cognition, and learning:
Essays in honor of Wilbert J. McKeachie, 213-237.
Polit, D. F., & Beck, C. T. (2006). The content validity index: Are you sure you know what's being reported? Critique and recommendations. Research in Nursing & Health, 29(5), 489-497.
Preuss, D. A. (2002). Creating a project-based curriculum. Tech Directions, 62(3), 16–19.
Ramalingam, M., Kasilingam, G., & Chinnavan, E. (2014). Assessment of learning domains to improve
student's learning in higher education. Journal of Young Pharmacists, 6(1), 27.
Retnawati, H., Hadi, S., & Nugraha, A. C. (2016). Vocational High School Teachers' Difficulties in
Implementing the Assessment in Curriculum 2013 in Yogyakarta Province of Indonesia. International
Journal of Instruction, 9(1), 33-48.
Son, J. (2018). Back translation as a documentation tool. Translation and Interpreting, 10(2), 89–100.
https://doi.org/10.12807/ti.110202.2018.a07
Tilden, V. P., Nelson, C. A., & May, B. A. (1990). Use of qualitative methods to enhance content validity.
Nursing Research, 39, 172–175.
Thomas, J. W. (2000). A review of research on project-based learning. Buck Institute for Education. Retrieved from http://www.newtechnetwork.org.590elmp01.blackmesh.com/sites/default/files/dr/pblresearch2.pdf
Wilson, F. R., Pan, W., & Schumsky, D. A. (2012). Recalculation of the critical values for Lawshe's content validity ratio. Measurement and Evaluation in Counseling and Development, 45(3), 197-210.
Whittaker, C. R., Salend, S. J., & Duhaney, D. (2001). Creating instructional rubrics for inclusive
classrooms. Teaching Exceptional Children, 34(2), 8-13.
Winter, J. (2010). Educative assessment for/of teacher competency. Assessment in Education, 17. Retrieved from http://proquest.umi.com/pqdweb?did=1979837691&Fmt=7&clientId=11263&RQT=309&VName=PQD
Yuvienco, J. (2010). Evaluating peer assessment within project-based learning in second/foreign language
education. In Global Learn (pp. 441-447). Association for the Advancement of Computing in
Education (AACE).
