

OQNHE Conference 2015, Muscat, 24-25 February
Quality Management & Enhancement in Higher Education

Comparison between Continuous Assessment and Final Examination Scores

Azzah Al-Maskari
Ibra College of Technology, Sultanate of Oman.
E-mail: almaskari.a@ict.edu.om

Abstract
This paper studies the impact of continuous assessment (CA) on final examination marks and the correlation between CA marks and final marks. The analysis covered the results of 4,026 students. While only 4% of students failed the CA, 26% failed the midterm exam, 22% failed the final exam and 23% failed the level. A large proportion of students therefore passed the CA but failed the final exam and the course, and no strong correlation was found between CA marks and final examination marks. Consequently, CA does not produce pass/fail rates comparable to those of the midterm and final exams.
The weak correlation between continuous assessment and final marks reported in this study suggests that the assessment criteria may not be well set. The high success rate in the CA could also indicate mark inflation, which gives students an inaccurate picture of their performance; students may become overconfident and fail to prepare adequately for the exams.
This study helps educators decide on the best weightings for continuous assessment and exams. It is also important to enhance the quality of teaching and learning and to ensure that students undergo deep learning, so that they build the competencies demanded by the 21st century. The study therefore concludes with a recommendation to increase the CA weight and decrease the final exam weight, because traditional examinations measure cognitive aspects of learning while continuous assessment also promotes learners' skills, knowledge, attitudes and values.

Keywords: assessment, continuous assessment, examinations, education, foundation


Introduction
Assessment is an important component of any teaching and learning process. It is the most influential factor in shaping what and how students learn (Pintrich, Marx, & Boyle, 1993; Walvoord, 2004). Students base their decisions about approaches to learning not on how they are taught but on how they will be graded (Boud, 1990). Ramsden (1992, p. 187) stated that assessment "defines the curriculum"; assessment therefore strongly influences approaches to learning and drives learning. Gibbs (1992) argued that whether students engage in deep or surface learning depends on the assessment strategies followed. Assessment methods thus influence and frame the nature of learning (Schrag, 1992; Gibbs, 2006). The assessment of learning provides the objective evidence needed for decision-making in education, for both students and lecturers: students become aware of their strengths and of the areas needing improvement in the subject assessed, and educators can improve their teaching strategies. Ndebele & Maphosa (2013) asserted that teaching and learning should be enhanced by a well-planned assessment strategy properly aligned with the intended learning outcomes and learning activities. Ramsden (1992) stated that assessment of learning tells us whether or not the learning has been successful, while assessment for learning conveys to students what we want them to learn. Stiggins (2002) highlighted the distinction between assessment meant to determine the status of learning (assessment of learning) and assessment that promotes greater learning (assessment for learning), and explained that both are important and should be kept in balance. Brown, Bull and Pendlebury (1997, p. 6) explained assessment as follows:

Assessment defines for students what is important, what counts, how they will spend their time
and how they will see themselves as learners. If you want to change student learning, then
change the methods of assessment.

Thus, it is crucial for any educational institution to measure students' achievement of active learning and growth through proper assessment. To the author's knowledge, no published work has studied the impact of continuous assessment on students' final marks. This paper therefore studies the impact of continuous assessment (CA) on final examination marks and the correlation between CA marks and final marks in the foundation program at the Colleges of Technology. Results from this study are helpful in enhancing the quality of teaching


and learning, ensuring that students are placed at the center of the learning process and undergo deep learning that prepares them to meet the demands of the 21st century. This will also help the college achieve its mission of delivering student-centered education that produces competitive graduates who enter the labor market with confidence and strong technological and personal skills, prepared for a life of contribution and success.

Types of assessment

There are two types of assessment: summative and formative. Generally, assessment of learning is referred to as summative assessment, while assessment for learning is referred to as formative assessment. Summative assessments are examinations or tests given at a particular point in time, usually the end of a semester, for grading purposes; by then it is too late to make instructional adjustments to the learning process. Combrinck & Hatch (2012) have criticized summative assessment for being context-independent and inflexible.
Formative assessment, by contrast, comes in the form of continuous assessment (CA), and its results can be used to adjust teaching and learning. CA is more likely to be process-oriented, informal, internal and learner-involved (McGonigal, 2006). There are many types of continuous assessment, such as essays, presentations and class participation, projects/term papers, and practical work (e.g. laboratory work, fieldwork, drawing practice). CA acts as a supplement to traditional exams and tests, offering a methodology for measuring students' performance. Similarly, Alausa (2006) regarded continuous assessment as guidance-oriented because it gathers data about teaching and learning over a period of time and helps modify instruction. According to Alausa (2006, p. 2), "this could play a vital role in diagnosing and mediating areas of learners' weakness if properly anchored in what occurs in classrooms". CA is an approach that captures the full range of learners' performance. Educators and administrators are thus able to assess learners' progress, correct problems encountered by students in time, and adjust the teaching and learning process accordingly.
Combrinck & Hatch (2012) found that CA helped students manage their workload better and improved their understanding of the subject content, in a study of 1,000 second-year macro-economics students in the Faculty of Management Studies at the University of KwaZulu-Natal in


South Africa. Everson (2010) found that CA is the only form of assessment that is both systematic and effective at monitoring students' skills and understanding. CA places the student at the center of the learning process, which helps the student develop self-regulated learning skills. Lopez et al. (2007), Coll et al. (2007) and Ariasian (1991) listed three characteristics of CA: it provides information that can be fed into the planning of the next round of assessment; it can use a variety of assessment types; and it provides constant feedback on students' progress, which supports continuous learning. Birenbaum et al. (2010) maintained that an integrated assessment system (one that includes CA) allows students to test themselves and review their own progress. Regular assessment can also motivate students and help them become more independent learners. Rushton (2005) stated that frequent assessment combined with regular feedback improves students' learning. Trotter (2006) concluded that while continuous assessment may be time-consuming to administer, the rewards of an enhanced learning environment for students outweigh the additional burden on staff. However, Combrinck & Hatch (2012) highlighted a difficulty in using CA for large classes: when student numbers are high it is harder for lecturers to use formative assessment, because providing effective tutoring and feedback becomes difficult.

Literature review
Many studies have examined the correlation between continuous assessment scores and examination scores. For example, Oyedeji (1994) studied this correlation for 300 secondary students in Nigeria across five subjects (Mathematics, English Language, Integrated Science, Social Studies and Introductory Technology) and found that continuous assessment scores were a good predictor of examination scores. De Sande et al. (2008) found a very high correlation between CA and examination scores for 210 Engineering students in a 'Signals and Systems' module. Daniel & Island (2012), Emmanuel & Clement (2012) and Aina & Adedo (2013) likewise found strong correlations between examination scores and continuous assessment scores in different subjects. Ghani, Said, Ibrahim and Ibrahim (2005) observed a high correlation between continuous assessment scores and examination marks for 74 accounting students across sixteen modules at the University of Technology Malaysia. Pudaruth et al. (2013) studied the impact of continuous assessments on the final marks of Computer Science modules at the University of Mauritius.


They analyzed data for 727 students on 14 modules and found a strong correlation between coursework marks and final examination marks. Aina & Adedo (2013) found a strong correlation between continuous assessment and students' performance in physics for 92 students from Kwara State College of Education in Nigeria, and recommended that the final grade for physics be based on a combination of examination scores and continuous assessment rather than on examination scores alone.
However, some studies found that students perform better in continuous assessment than in final exams. For example, Dery and Addy-Lamptey (2009) found that 6,125 senior high school students in Ghana did better in continuous assessments than in final exams in English Language and Mathematics. Pérez-Martínez et al. (2009) found that students obtained higher scores under continuous assessment than those following traditional final assessment in four Engineering courses at the Universidad Politécnica de Madrid in Spain: the pass rate increased from 55.6% to 85.3% when continuous assessment was applied, and average grades increased by 1.62 points. Furthermore, Iyabode (2012) compared CA and examination results for 80 students in an English class and found that students scored slightly better in the CA than in the examination.
As discussed above, the evidence shows that CA has many advantages and, if used properly with good assessment criteria and frequent feedback, can help students understand their work better through a deeper approach to learning. All the studies reported above took place in countries such as Spain, Malaysia, Ghana, Nigeria and Mauritius. However, the author was unable to locate published work from the Middle East, the Gulf or Oman that discusses continuous assessment and students' final marks.

Methodology
Procedure
This study analysed the results of students studying English language at the English Language Center of Ibra College of Technology in 2014. As shown in Table 1, 1,501 students were enrolled in the foundation program in the Fall semester, 1,444 in the Spring semester and 1,081 in the Summer semester, a total of 4,026 for the full year. The foundation program has four levels, as shown in the table (level 4 is the highest, or advanced, level and level 1 is the lowest). The analysis excluded data for students who did not sit the midterm and final exams.


Table 1: Number of students in the foundation program in 2014

          Level 4 (L4)   Level 3 (L3)   Level 2 (L2)   Level 1 (L1)    Total
Fall      465            379            459            198             1501
Spring    352            499            398            195             1444
Summer    495            389            197            no new intake   1081
Total     1312           1267           1054           393             4026

Types of Assessment Followed in the Center


Three types of assessment are used in the English Language Center: continuous assessment (CA) worth 20 marks, a midterm exam (MT) worth 30 marks, and a final exam worth 50 marks. The same mark distribution applies at every level across the three assessments. Different skills are taught at these levels (grammar, listening, reading, scanning, speaking, and writing). Table 2 shows how the 20 continuous assessment marks are distributed for each level.

Table 2: Marks distribution for the continuous assessment for each level

Continuous Assessment           L1     L2     L3     L4
Class quizzes                   5      5      5      5
Class participation             2      2      0      0
Speaking test                   5      5      5      5
Project & presentation          0      0      0      10
Reading/writing assignments     2.5    2.5    2.5    0
Note taking                     0      0      2      0
Portfolio                       2.5    2.5    2.5    0
Vocab log                       3      3      3      0
Total                           20     20     20     20
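The weighting described above can be sketched as a short computation. This is an illustrative sketch only, not code from the paper: the component maxima (20/30/50) come from the text, while the 50%-of-maximum pass threshold is an assumption made for illustration.

```python
# Combine the three assessment components (CA out of 20, midterm out of 30,
# final out of 50) into an aggregate out of 100, as used in the center.

def aggregate(ca: float, midterm: float, final: float) -> float:
    """Sum the component marks; the weights are built into the maxima (20/30/50)."""
    assert 0 <= ca <= 20 and 0 <= midterm <= 30 and 0 <= final <= 50
    return ca + midterm + final

def passed(mark: float, maximum: float, threshold: float = 0.5) -> bool:
    """Assumed rule: a component is passed at 50% of its maximum."""
    return mark >= threshold * maximum

# Example: a student near the Fall 2014 overall averages (Table 3).
total = aggregate(ca=14.82, midterm=17.57, final=30.15)
print(total)              # 62.54
print(passed(14.82, 20))  # True -- passes the CA component
```

Because the maxima already encode the 20/30/50 split, no separate weight multiplication is needed; changing the weighting (as recommended later in the paper) would simply mean changing the component maxima.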

Results analysis

Students Performance
Tables 3 to 5 show students' performance over the three semesters, measured by the average mark (Avg) and the pass rate for each level; Table 6 summarizes them. The results show that students obtained the highest pass rate in continuous assessment and the lowest pass rate in the midterms across all levels in the three semesters, except for level 1 in Fall and levels 3 and 2 in Summer, where the pass rate for the midterm was higher than for the final.
The summary in Table 6 shows that the pass rate for the final exam and the percentage of students who passed each level are nearly the same. Moreover, fewer students passed level 4 (62%) than level 3 (72%), level 2 (89%) and level 1 (85%). However, although 85% passed level 1, their average mark was only 54.92 out of 100. It is important to address the


reason for the low pass rate at L4. Unlike the other levels, the final exam for level 4 is centralized across 7 colleges, and the final questions are not written by lecturers from the center but come ready-made from outside.

Table 3: Average and pass rate for Fall 2014

Fall 2014                      L4               L3               L2               L1               Average
                               Avg   Pass rate  Avg   Pass rate  Avg   Pass rate  Avg   Pass rate  Avg   Pass rate
Pass CA (marks=20)             16.01 100%       14.57 93%        15.98 98%        12.72 94%        14.82 96%
Pass MT (marks=30)             14.63 46%        14.51 40%        22.51 89%        18.64 93%        17.57 67%
Pass Final (marks=50)          26.18 62%        30.47 76%        37.19 93%        26.77 83%        30.15 79%
Passed the level (marks=100)   56.89 61%        59.58 71%        75.71 92%        40.71 82%        58.22 77%

Table 4: Average and pass rate for Spring 2014

Spring 2014                    L4               L3               L2               L1               Average
                               Avg   Pass rate  Avg   Pass rate  Avg   Pass rate  Avg   Pass rate  Avg   Pass rate
Pass CA (marks=20)             15.79 100%       16.39 98%        15.76 98%        14.46 94%        15.60 98%
Pass MT (marks=30)             13.20 33%        18.93 80%        20.69 89%        21.51 88%        18.58 73%
Pass Final (marks=50)          26.16 58%        34.39 87%        34.60 90%        33.11 88%        32.07 81%
Passed the level (marks=100)   55.21 56%        69.76 87%        71.13 89%        69.12 86%        66.31 80%

Table 5: Average and pass rate for Summer 2014

Summer 2014                    L4               L3               L2               Average
                               Avg   Pass rate  Avg   Pass rate  Avg   Pass rate  Avg   Pass rate
Pass CA (marks=20)             15.85 97%        15.30 96%        15.50 95%        15.55 96%
Pass MT (marks=30)             16.88 68%        16.95 72%        19.87 86%        17.90 75%
Pass Final (marks=50)          27.75 69%        24.82 57%        31.32 82%        27.97 69%
Passed the level (marks=100)   60.56 77%        57.14 57%        66.74 82%        61.48 72%

Table 6: Summary of average and pass rate for three semesters for different levels

                               L4               L3               L2               L1               Average
                               Avg   Pass rate  Avg   Pass rate  Avg   Pass rate  Avg   Pass rate  Avg   Pass rate
Pass CA (marks=20)             15.88 99%        15.42 95%        15.75 97%        13.59 94%        14.96 96%
Pass MT (marks=30)             14.90 51%        16.80 65%        21.02 88%        20.07 90%        17.26 74%
Pass Final (marks=50)          26.70 63%        29.89 74%        34.37 90%        29.94 85%        28.84 78%
Passed the level (marks=100)   57.55 62%        62.16 72%        71.19 89%        54.92 85%        58.21 77%


Looking at the average column in the summary table, only 4% of students failed the CA, while 26% failed the midterm exam, 22% failed the final and 23% failed the level. A large proportion of students therefore passed the CA but failed the final exam and hence failed the level.
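The fail rates discussed above are simple proportions over the mark lists. The following is a hypothetical sketch of that computation; the 50%-of-maximum fail threshold is an assumption, and the mark lists are invented examples, not the study's data.

```python
# Fraction of students scoring below 50% of a component's maximum mark.

def fail_rate(marks: list[float], maximum: float) -> float:
    fails = sum(1 for m in marks if m < 0.5 * maximum)
    return fails / len(marks)

ca_marks    = [16.0, 14.5, 8.0, 15.9, 12.7]   # CA marks, out of 20
final_marks = [26.2, 30.5, 18.0, 37.2, 22.0]  # final marks, out of 50

print(f"CA fail rate:    {fail_rate(ca_marks, 20):.0%}")     # 20%
print(f"Final fail rate: {fail_rate(final_marks, 50):.0%}")  # 40%
```

Applied to the full cohort of 4,026 records, the same computation would yield the 4% (CA), 26% (midterm) and 22% (final) fail rates reported in the summary.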

Correlations between Different Assessment Methods

Tables 7 to 9 show the Pearson correlations between the different types of assessment used in the foundation program for the Fall, Spring and Summer semesters. A correlation is considered significant where p < 0.05. In these tables, Agg refers to the aggregated score students obtained at the end of each semester (20 marks from continuous assessment (CA), 30 marks from the midterm (MT), and 50 marks from the final exam).
The first three rows in each table display the correlations of the CA, MT and final exam with the aggregated result (Agg). The final exam clearly has the strongest correlation with the aggregated score (around r = 0.95**), while the CA has the weakest (around r = 0.75**), across all levels and all three semesters. The tables also show that CA does not correlate strongly with either the midterm or the final exam. Moreover, the correlation between the midterm and the final exam is stronger than the correlation between the midterm and the CA, because the assessments given in the midterm are closer in nature to those given in the final exam than to those given in the CA.
Table 7: Correlations between different assessments (Fall)

Fall (N=1501)    L4 (N=465)   L3 (N=379)   L2 (N=459)   L1 (N=198)
CA vs. Agg       0.76**       0.72**       0.77**       0.82**
MT vs. Agg       0.90**       0.93**       0.92**       0.95**
Final vs. Agg    0.95**       0.96**       0.94**       0.96**
CA vs. Final     0.65**       0.58**       0.62**       0.82**
CA vs. MT        0.62**       0.60**       0.65**       0.78**
MT vs. Final     0.75**       0.80**       0.78**       0.85**
**Correlation is significant at the 0.05 level

Table 8: Correlations between different assessments (Spring)

Spring (N=1444)  L4 (N=352)   L3 (N=499)   L2 (N=398)   L1 (N=195)
CA vs. Agg       0.71**       0.77**       0.77**       0.78**
MT vs. Agg       0.93**       0.91**       0.92**       0.92**
Final vs. Agg    0.96**       0.96**       0.96**       0.96**
CA vs. Final     0.61**       0.68**       0.65**       0.67**
CA vs. MT        0.59**       0.62**       0.66**       0.64**
MT vs. Final     0.81**       0.79**       0.81**       0.79**
**Correlation is significant at the 0.05 level

Table 9: Correlations between different assessments (Summer)

Summer (N=1081)  L4 (N=495)   L3 (N=389)   L2 (N=197)
CA vs. Agg       0.67**       0.76**       0.72**
MT vs. Agg       0.94**       0.92**       0.87**
Final vs. Agg    0.95**       0.96**       0.94**
CA vs. Final     0.48**       0.64**       0.57**
CA vs. MT        0.56**       0.60**       0.52**
MT vs. Final     0.84**       0.81**       0.72**


L1 is not included in Table 9 because there was no new intake for Summer.
**Correlation is significant at the 0.05 level
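The coefficients in Tables 7 to 9 are ordinary Pearson correlations over per-student mark lists. A minimal sketch of that calculation is shown below, using the plain Pearson formula so it needs no third-party packages (`scipy.stats.pearsonr` would also return the p-value used for the p < 0.05 significance test). The sample marks are invented for illustration, not taken from the study.

```python
import math

def pearson_r(x: list[float], y: list[float]) -> float:
    """Pearson product-moment correlation between two equal-length mark lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

ca    = [12.0, 15.5, 18.0, 10.0, 16.5, 14.0]   # CA marks (out of 20), invented
final = [22.0, 30.0, 41.0, 20.0, 33.0, 28.0]   # final marks (out of 50), invented

print(f"CA vs. Final: r = {pearson_r(ca, final):.2f}")  # r = 0.97
```

Running the same computation per level and per semester over the real mark lists would reproduce the entries of Tables 7 to 9.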

Discussion and conclusion


This study examined the impact of continuous assessment (CA) on final marks and the correlation between CA marks and final examination marks. It found that while only 4% of students failed the CA, 26% failed the midterm exam, 22% failed the final and 23% failed the level. Thus, a large proportion of students passed the CA but failed the final exam and, consequently, the level. This indicates that neither the continuous assessment nor the midterm exam prepared students for the final exam, which is also manifested in the low correlation between CA marks and final examination marks. Consequently, CA does not produce pass/fail rates comparable to those of the midterm and final exams. It is surprising that CA in this study did not have a strong impact on students' learning progress as measured by their midterm and final results, given that CA is generally used to adjust teaching and learning and to engage students actively in learning.
It is well known in education that the more feedback students receive, the greater their opportunity to learn from it for the next assessment. The problem, however, may be that the nature and type of questions in the midterm and final exams differ from those in the CA. The final exam questions may also be more difficult, leading students to perform worse in the final than in the midterm, especially at level 4, where the exam is centralized across 7 colleges and the questions are not prepared in the Language Center.
Several other studies have acknowledged the benefits of CA in helping students monitor their progress and their understanding (e.g. Combrinck & Hatch, 2012; Everson, 2010; Birenbaum et al., 2010; Rushton, 2005; Trotter, 2006). However, it is important to note that CA is linked to teachers' skill and management (Yigzaw, 2013). Alausa (2006) contended that CA depends


heavily on teachers' skills in test construction and administration, their attitudes, and their record keeping. We cannot therefore be sure that the teachers who prepared these exams are skilled in test construction and administration.
The results reported here align with findings from some previous studies. For example, Iyabode (2012) found that students scored slightly better in the CA than in the examination in an English orals practical class. Likewise, Dery and Addy-Lamptey (2009) found that students in Ghana did better in continuous assessments than in final exams in English Language and Mathematics. Pérez-Martínez et al. (2009) also found that students obtained higher scores under continuous assessment than those following traditional final assessment in four Engineering courses at the Universidad Politécnica de Madrid in Spain.
However, the results conflict with other studies (e.g. Pudaruth et al., 2013; Ghani, Said, Ibrahim and Ibrahim, 2005; Oyedeji, 1994; de Sande et al., 2008; Daniel & Island, 2012; Emmanuel & Clement, 2012; Aina & Adedo, 2013) that found strong correlations between examination scores and continuous assessment scores in different subjects.
The weak correlation between continuous assessment and final marks reported in this study suggests that lecturers may not have set their assessment criteria well. The high success rate in the CA could indicate mark inflation, which gives students an inaccurate picture of their performance; it can also make students overconfident, so that they do not prepare adequately for the exams.
This study helps educators decide on the best weightings for continuous assessment and exams. The results indicate that the current weight distribution (20% CA, 30% midterm exam, 50% final exam) should be changed to suit the college's new vision and mission of implementing a student-centered approach. This study recommends increasing the CA weight and decreasing the weight given to traditional assessment in the midterm and final exams, because traditional examinations measure the cognitive aspects of learning, whereas continuous assessment is concerned not only with the cognitive aspect of the learner but also with other facets such as skills, attitudes and values. Hence, the weight of continuous assessment should be increased to 50%, the final exam decreased to 30%, and the midterm exam to 20%. Learning-centered assessment needs to be embedded into the daily routines of the classroom, and rich tasks must be developed to engage students. Moreover, the


continuous assessment should contain rich and engaging activities that build students' language skills, such as journal and essay writing, debates, presentations and speaking. However, to implement CA successfully and correctly, lecturers have to be prepared both professionally and attitudinally. They should be trained to observe learners more closely in order to assess their affective outcomes, and they should be made aware of the requirements of CA, of its importance, and of how to implement it.
Further studies should be conducted on how to enhance continuous assessment approaches so that CA results reflect students' learning and students make smooth progress in their learning.

References
1. Aina, J. K., & Adedo, G. A. (2013). Correlation between continuous assessment (CA) and students' performance in physics. Journal of Education and Practice, 4(6), 6-9.

2. Alausa, Y. A. (2004). Continuous assessment in our schools: advantages and problems. Kolin Foundation Arandis, Namibia.

3. Birenbaum, M., Breuer, K., Cascallar, E., Dochy, F., Dori, Y., Ridgeway, J., & Wiesemes, R. (2010). A Learning Integrated Assessment System. European Association for Research on Learning and Instruction.

4. Boud, D. (1990). Assessment and the promotion of academic values. Studies in Higher Education, 15(1), 101-111.

5. Brown, G. A., Bull, J., & Pendlebury, M. (2013). Assessing student learning in higher education. Routledge.

6. Coll, C., Rochera, M., Mayordomo, R. M., & Naranjo, M. (2007). Continuous assessment and support for learning: An experience in educational innovation with ICT support in higher education. Electronic Journal of Research in Educational Psychology, 5(3), 783-804.

7. Combrinck, M., & Hatch, M. (2012). Students' experiences of a continuous assessment approach at a higher education institution. Journal of Social Sciences, 33(1), 81-89.

8. Gibbs, G. (2006). How assessment frames student learning. In C. Bryan & K. Clegg (Eds.), Innovative assessment in higher education (pp. 23-36). London: Routledge.

9. Ghani, E. K., Said, J., Ibrahim, M. K., & Ibrahim, Z. (2005). Continuous assessments and their contribution towards students' performance and final grade achievement: a comparative analysis of the Diploma in Accountancy students.


10. Daniel, I. O. A., & Island, V. (2012). Comparison of continuous assessment and examination scores in an English speech work class. International Journal of Applied Linguistics and English Literature, 1(6).

11. de Sande, L. Arriero, C. Benavente, R. Fraile, J. I. Godino-Llorente, J. Gutiérrez, D. Osés, and V. Osma-Ruiz. (2008). A case study: final exam vs. continuous assessment marks for electrical and electronic engineering students.

12. Dery, R. G., & Addy-Lamptey, W. (2006). Effects of classroom assessment scores on the final scores used in grading students at senior high schools in Ghana.

13. Emmanuel, I., & Clement, C. O. (2012). Effect of continuous assessment scores on the final examination scores obtained by students at the Junior Secondary School (JSS) level in Mathematics. Educational Research, 3(9).

14. Lopez, D., Herrero, J. R., & Pajuelo, A. (2007). A proposal for continuous assessment at low cost. Paper presented at the Frontiers in Education conference, Milwaukee, Wisconsin.

15. Iyabode, O. A. (2012). Comparison of continuous assessment (CA) and examination scores in an English speech work class. International Journal of Applied Linguistics & English Literature, 1(6).

16. McGonigal, K. (2006). Getting more "teaching" out of "testing" and "grading". http://ctl.stanford.edu/Tomprof/postings/738.html

17. Ndebele, C., & Maphosa, C. (2013). Exploring the assessment terrain in higher education: possibilities and threats: a concept paper. Journal of Social Sciences, 35(2), 149-158.

18. Oyedeji, O. A. (1994). Validity of continuous assessment scores as predictors of students' performance in junior secondary examinations in Nigeria. University of Ilorin, Nigeria, 92.

19. Ramsden, P. (1992). Learning to teach in higher education. London and New York: Routledge.

20. Pérez-Martínez, J. E., García-García, M. J., Perdomo, W. H., & Villamide-Díaz, M. J. (2009). Analysis of the results of the continuous assessment in the adaptation of the Universidad Politécnica de Madrid to the European Higher Education Area. In Proc. 2009 Research in Engineering Education Symposium.

21. Pintrich, P. R., Marx, R. W., & Boyle, R. A. (1993). Beyond cold conceptual change: The role of motivational beliefs and classroom contextual factors in the process of conceptual change. Review of Educational Research, 63, 167-199.


22. Pudaruth, S., Moloo, R., Chiniah, A., Sungkur, R., Nagowah, L., & Kishnah, S. (2013). The impact of continuous assessments on the final marks of Computer Science modules at the University of Mauritius. Proceedings of the Global Engineering, Science and Technology Conference, 3-4 October 2013, Bay View Hotel, Singapore.

23. Schrag, F. (1992). Conceptions of knowledge. In P. Jackson (Ed.), Handbook of research on curriculum (pp. 268-301). NY: MacMillan.

24. Stiggins, R. J. (2002). Assessment crisis: The absence of assessment for learning. Phi Delta Kappan, 83(10), 758-765.

25. Trotter, E. (2006). Student perceptions of continuous summative assessment. Assessment & Evaluation in Higher Education, 31(5), 505-521.

26. Walvoord, B. E. (2004). Assessment clear and simple. San Francisco: Jossey-Bass.

27. Yigzaw, A. (2013). High school English teachers' and students' perceptions, attitudes and actual practices of continuous assessment. Educational Research and Reviews, 8(16), 1489-1498.
