
Available online at www.sciencedirect.com

Procedia - Social and Behavioral Sciences 237 (2017) 869–874

7th International Conference on Intercultural Education “Education, Health and ICT for a
Transcultural World”, EDUHEM 2016, 15-17 June 2016, Almeria, Spain

Acquisition and evaluation of competencies by the use of rubrics.
Qualitative study on university faculty perceptions in Mexico

Leticia C. Velasco-Martínez a,*, Frida Díaz-Barriga b & Juan-Carlos Tójar-Hurtado a

a Universidad de Málaga, Av. de Cervantes, 2, 29071, Málaga, Spain
b Universidad Nacional Autónoma de México, Av. Universidad 3004, Col. Copilco-Universidad, Del. Coyoacán, 04510, Ciudad de México, Mexico

Abstract

New trends in university-level teaching lean heavily towards an evaluation model closely linked to the concept of formative
assessment that fosters a competency-based approach. In this context, evaluation rubrics are among the most useful tools
for obtaining evidence on competency acquisition. The purpose of this study was thus to gain knowledge of Mexican faculty
perceptions and conceptions regarding the assessment of competencies and complex learning at the university level through
the use of evaluation rubrics. The intention was to determine the extent to which faculty members are evaluating competencies
or whether, on the contrary, they still lean more towards the evaluation of disciplinary aspects. The study also sought to gain
knowledge of the direction and scope of rubrics as an innovative resource, taking into account the technical and pedagogical
aspects present in their design and implementation. The methodology used was in-depth interviews, in order to gain a detailed
understanding of the faculty's experiences and perspectives regarding competency assessment through the use of rubrics. The
results and conclusions of the study allowed for the identification of the formative needs of Mexican faculty members who wish
to use competency-based assessment. Discussion with other scholars working in Europe allowed us to examine the divergences
and convergences in the manner in which these assessment tools are conceived.
© 2017 The Authors. Published by Elsevier Ltd.
This is an open access article under the CC BY-NC-ND license
(http://creativecommons.org/licenses/by-nc-nd/4.0/).
Peer-review under responsibility of the organizing committee of EDUHEM 2016.
Keywords: Competency based approach; evaluation of competencies; rubric; university faculty; perceptions.

* Corresponding author. Tel.: +34-952-132-543; fax: +34-952-132-575


E-mail address: leticiav@uma.es

doi:10.1016/j.sbspro.2017.02.185

1. Introduction

The new European Higher Education Area (EHEA) and the new Latin American Education Area (Spanish acronym
ALCUE) face challenges in the development and implementation of the emerging competency-based approach.
The initiatives contributing to the expansion of this approach originated in the Tuning Project in Europe and Latin
America, the purpose of which was to create a common area of curricular convergence among the different countries
in order to acquire and share information and to improve collaboration networks among higher education institutions.
In this sense, Yániz (2008) examines some of the main implications of incorporating competency-based curricular
designs into the university education process: a) designing university training as a project; b) ensuring intentional
work for all competencies contemplated in the academic-professional profile by identifiable actions; c) defining the
competencies included in each project and making them comprehensible for all those involved; d) promoting
methodologies that foster active learning and bring about closer ties between professional and social reality and
training; e) making use of valid assessment procedures (that actually evaluate that which is meant and stated to be
evaluated).
From this perspective, Villa & Poblete (2011) further argue that it makes little sense to promote changes in
pedagogical strategies and curricular designs, modifying methodologies, programs and activities, while keeping a
simplistic approach to assessment restricted to the acquisition of knowledge. In fact, according to these authors,
research conducted at the University of Deusto, which examined more than 200 teaching programs assessed annually,
showed that the greatest difficulty encountered was not how competencies are developed but how they are assessed
(Villa & Poblete, 2011, p. 151). Following this line of reasoning, the competency development perspective highlights
the importance of establishing a correspondence between the competencies to be assessed and the chosen evaluation
strategy or technique. When this premise is not taken into account, procedures are implemented that are not useful in
evaluating what is actually meant to be assessed. This mismatch arises either because the starting point is a mistaken
concept of competency or because of a lack of knowledge of more adequate assessment tools. In other instances, the
assessment of certain competencies is more difficult because it requires more time and effort, and simpler evaluation
strategies are sought, even if they are not suitable. This situation, still very frequent in university classrooms, raises
the question of the suitability of the evaluation tools and techniques traditionally in use at the university level. This
phenomenon also explains the surge in recent years in the use of evaluation tools that are more dialogical and
comprehensive than traditional pen-and-paper tests (Ibarra & Rodríguez-Gómez, 2010; Padilla & Gil, 2008).
In this regard, the rubric is considered an innovative teaching tool that fosters competency assessment (Baryla,
Shelley & Trainor, 2012; Blanco, 2008; Cebrián, 2014; Díaz-Barriga & Barroso, 2014). Keeping in mind the
competency-based approach, the rubric is understood as an assessment scale, used preferably by the teacher, and even
by the student for self- and peer evaluation, to assess competency descriptors according to a series of relevant
dimensions that can be assessed quantitatively and qualitatively in relation to a reasoned gradual scale that is, at the
same time, shared by all participants.
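As an illustration (ours, not the authors'), a rubric so conceived, with relevant dimensions, ordered level descriptors and a shared gradual scale, can be sketched as a small data structure; the names, weighting scheme and example criteria below are assumptions made for this sketch:

```python
from dataclasses import dataclass, field

@dataclass
class Criterion:
    """One relevant dimension of the competency being assessed."""
    name: str
    descriptors: list[str]  # ordered from lowest to highest level
    weight: float = 1.0

@dataclass
class Rubric:
    competency: str
    criteria: list[Criterion] = field(default_factory=list)

    def score(self, levels: dict[str, int]) -> float:
        """Weighted quantitative score from the level chosen per criterion.

        `levels` maps a criterion name to an index into its descriptor
        scale, whether assigned by the teacher, by the student
        (self-assessment) or by a classmate (peer assessment).
        """
        total_weight = sum(c.weight for c in self.criteria)
        earned = 0.0
        for c in self.criteria:
            # Normalize each criterion to [0, 1] before weighting.
            earned += c.weight * levels[c.name] / (len(c.descriptors) - 1)
        return earned / total_weight

# Hypothetical usage: a two-criterion rubric for an oral presentation.
rubric = Rubric(
    competency="Oral communication",
    criteria=[
        Criterion("Structure", ["Unclear", "Partially clear", "Clear"]),
        Criterion("Evidence", ["Absent", "Some support", "Well supported"], weight=2.0),
    ],
)
print(rubric.score({"Structure": 2, "Evidence": 1}))  # 0.666...
```

Because teacher, self- and peer assessors share the same structure, what all participants agree upon are the qualitative descriptors, not merely the resulting number.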
One of the greatest advantages offered by the use of rubrics is that they not only foster improved, systematic
evaluation on the part of the teacher, but are also tools of great value in students' acquisition of monitoring,
self-evaluation and peer-evaluation competencies, allowing them to develop an improved understanding of their own
learning process, which translates into an increased development of their independence and self-regulation (Stevens
& Levi, 2005).
Likewise, the availability of a specific rubric allows teachers greater coherence when issuing a value judgment
regarding a particular grade, and also ensures that each student will be evaluated with the same criteria as those used
for his or her classmates, thus overcoming arbitrariness, inconsistencies or subjectivity in the assessment and,
consequently, reducing the margin of error in grades (Raposo & Sarceda, 2010). Rubrics, moreover, are useful not
only as tools to grade the quality of students' final papers, but also as a basis for the examination of and feedback on
the learning process (García-Ros, 2011).
Furthermore, the formative value of rubrics is evidenced when they are agreed upon by consensus and socialized
with the class or group before they are administered, thus allowing students to adopt the evaluation criteria as their
own, helping them to bring their results closer to those expected (self-evaluation), to reflect on their own potential,
to detect difficulties and even to learn to ask for help when they lack the resources needed to overcome such
difficulties (Chica, 2011; Manríquez, 2012).
Moreover, rubrics foster students’ learning self-regulation, allowing them to reflect on the feedback obtained, plan
tasks, check on their own progress and review their work before it is presented, improving their performance and
reducing anxiety levels (Eshun & Osei-Poku, 2013; Panadero & Jonsson, 2013). Therefore, the assessment process
thus conceived goes beyond checking results and allows students to identify their strengths and weaknesses.
Since rubrics were introduced into the university milieu, their use and application in assessment systems has
gradually increased. Nevertheless, many research papers show that there is not yet a solid corpus of knowledge
providing evidence of the use of rubrics by teachers to assess competencies or of their impact on students' performance
(Panadero & Jonsson, 2013; Reddy & Andrade, 2010; Stevens & Levi, 2005).

2. Objectives

The purpose of this study was to gain knowledge of Mexican faculty perceptions and conceptions regarding the
assessment of competencies and complex learning at the university level through the use of evaluation rubrics. The
intention was to identify the extent to which faculty members assess competencies or whether they continue to favor
the assessment of disciplinary aspects. We also sought to gain knowledge of the direction and scope of rubrics as an
innovative resource, taking into account the technical and pedagogical aspects present in their design and
implementation.

3. Method

3.1. Procedure

Research was conducted under a qualitative approach in order to obtain in-depth knowledge of the study variables
addressed during the interviews conducted with faculty members, researchers and the heads of university educational
institutions and centers. The interviews were not intended to be statistically representative of the population under
study, but were meant to capture a diversity of conceptual perspectives and to gather information on practical
situations and experiences that would help us better understand the scope and direction of some of the key dimensions
of competency assessment within the university context.

3.2. Sample

A sample was taken from faculty members, researchers and heads of Mexican university educational institutions
and centers that were conducting innovative projects in competency assessment where the rubric was being
implemented as an innovative resource to be used by faculty members in their teaching practice.
We chose a non-probabilistic, incidental sample, requesting the voluntary participation of key informants.

3.3. Study variables and information gathering instruments and techniques

Nine interviews were conducted with faculty members, researchers and directors of the Faculty of Medicine, the
Faculty of Psychology, the Faculty of Electronic Science and the Institute for Research on the University and
Education at the National Autonomous University of Mexico and the Autonomous University of Puebla. By using
this technique to gather information, we intended to gain knowledge of the faculty members' and researchers'
opinions and viewpoints on the study variables listed in the following table:

Table 1. Variables under study by means of the interviews conducted with faculty, researchers and university school and center directors.

Category: The innovation project
Study variables: Project data; Objectives; Participants; Qualifications and educational levels involved; Results of the project; Alternative evaluation instruments used instead of a rubric; Collaborative work in competency assessment; Faculty social and economic status; Educational policies associated with competency assessment; Treatment of competencies in assessment; Good practices in competency assessment; Competency assessment impact on educational processes; Bad practices in competency assessment; The role of students in competency assessment; Innovation status and changes in teaching and assessment; Reasons for the absence of authentic learning; Faculty training relating to competency assessment; Teaching models that exert an influence on the assessment concept.

Category: Knowledge of, design and use of the rubric as an assessment tool, taking into account the technical and pedagogical aspects of rubric design
Study variables: Perception of the rubric in regard to the field of knowledge; Faculty members' intent in implementing the use of the rubric; Products from students subject to rubric assessment; Positive effects on students of rubric assessment; Positive effects on faculty of rubric assessment; Disadvantages of rubric assessment for students; Disadvantages of rubric assessment for faculty; Visualization or perception of the use of the rubric; Rubric design aspects; Rubric application; Suggestions on rubric application; Rubric validation.

3.4. Analysis

The analysis conducted was qualitative in nature and took into consideration the verbatim expressions recorded
during the interviews. These verbatim expressions, text fragments and statements informed both the selection of
material and the subsequent processes of categorization, analysis and the drawing of conclusions (Tójar, 2006).
The information gathered was reduced, organized and used to draw and verify conclusions. Classification and
categorization techniques were used, as well as models and typologies. Through a process of increasing abstraction,
the data obtained were incorporated into a theory (Goetz & LeCompte, 1988). On the basis of the analysis processes
mentioned above, matrices and explanatory charts were drawn (Miles & Huberman, 1994). To facilitate the
qualitative analysis, the Atlas.ti v7.0 (2012) software was used.

4. Results

Once the analysis of the transcripts of the 9 interviews had been completed, 781 verbatim statements or text
fragments had been selected. From these fragments, 473 codes were prepared and organized into 34 families. Taking
into account that some of the citations were coded and recoded several times, the total number of citations or
fragments per code came to 6,446 units.
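The bookkeeping behind these figures can be pictured with a minimal sketch (our illustration, not the authors' actual Atlas.ti project; every fragment, code name and mapping below is invented): citations are coded, and sometimes recoded, under several codes; codes are grouped into families; and families are regrouped into macro-categories.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Citation:
    """A verbatim statement or text fragment selected from a transcript."""
    interview: int
    text: str

# A citation may be coded, and later recoded, under several codes.
codings: list[tuple[Citation, str]] = [
    (Citation(5, "having a clear idea of what is being assessed"), "clear criteria"),
    (Citation(1, "time consuming, to fine tune and test"), "time cost"),
    (Citation(1, "time consuming, to fine tune and test"), "faculty workload"),
]

# Codes are grouped into families, and families into macro-categories.
code_to_family = {
    "clear criteria": "Rubric advantages",
    "time cost": "Disadvantage of rubric assessment for faculty",
    "faculty workload": "Disadvantage of rubric assessment for faculty",
}
family_to_macro = {
    "Rubric advantages": "Rubrics",
    "Disadvantage of rubric assessment for faculty": "Rubrics",
}

per_code = Counter(code for _, code in codings)  # citations/fragments per code
total_units = sum(per_code.values())             # 6,446 units in the study itself
per_macro = Counter(family_to_macro[code_to_family[c]] for _, c in codings)
print(total_units, per_macro)                    # 3 Counter({'Rubrics': 3})
```

Because recoded fragments are counted once per code, the total of 6,446 units exceeds the 781 selected fragments, exactly as reported above.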

Once recategorization was done on the basis of the regrouping of code families, two main, mutually related
macro-categories were built:
• Rubrics, comprising the following families: "Rubric disadvantages", "Disadvantage of rubric assessment for
faculty", "Disadvantage of rubric assessment for students", "Rubric design", "Positive effect on students of rubric
assessment", "Positive effect on faculty of rubric assessment", "Products assessed with rubrics", "Rubric.
Advantages. Design.", "Suggestions for rubric application", "Rubric validation", "Rubric advantages" and
"Rubric visualization".
• Competencies, comprising the following families: "Good practices in competency assessment", "Training",
"Competencies assessed in projects", "General teaching conditions at universities", "Confusion regarding the term
'competencies'", "Students and competency assessment", "Status of innovation and changes", "Test",
"Alternative assessment instruments", "Bad practices in competency assessment", "Teaching model",
"Educational policies", "Competency areas", "Types of learning", "Collaborative work", "Treatment of
competencies in regard to faculty" and "Treatment of competencies in regard to the development and application
of competencies".

Table 2 shows an example of the citations in each family belonging to the Rubric macro-category.

Table 2. Codes and examples of textual citations of the Rubric macro-category.

Rubric advantages: "the advantages are having a clear idea of what is being assessed, it's for both, the teacher and the student, but first I'll tell you about the teacher, I believe it is clear for him or he has systematized his forms of assessment, they already have those instruments and it can be more strategic also when teaching in class, because, let me tell you, a rubric shows them that things are not as expected" (5:71, 214:214)

Rubric disadvantages: "…they are used scarcely, but more difficult to prepare, they involve greater intervention of expert judgment to assess the student, it is less easy to say success or failure, that is to say, to be able to classify under a dichotomous situation" (1:39, 14:14)

Disadvantage of rubric assessment for faculty: "the disadvantage is that it's time consuming, to fine tune and test, conduct pilot tests and agree to them by consensus, because students and teachers must agree, and that takes time and sometimes in our field there's considerable work load and then things are made by approximation, I believe it will turn out ok,…" (1:92, 66:66)

Disadvantages of rubric assessment for students: "Many times what they see is that it involves more work" (5:78, 270:270)

Rubric design: "rubrics are from my point of view pretty noble instruments, you can do anything with a rubric, that is, you can assess, let's see... from routine items, no? For example, how to make a cake?, this is not to say that making a cake is not important, to something more...ample, more complex" (4:22, 52:52)

Positive effect on students of rubric assessment: "And their perception is that they learn more by using rubric assessment" (6:75, 282:282)

Positive effect on faculty of rubric assessment: "yes I believe the faculty would be delighted not only by using rubrics for assessment, but by engaging in their design, working on them with their classmates and say: why don't we prepare a rubric to assess, what if we work together on it with the students for peer-assessment, that is, what if we use them as learning and assessment tools, that is something the faculty is lacking and I think they limit themselves and that the rubric is there and I can download it from the internet. Ah, it guides me" (7:80, 144:144)

Products assessed with rubrics: "…oral presentations" (4:11, 21:21)

Rubric. Advantages. Design.: "they then end up by reducing the potential of the rubric and also for themselves because they would reflect more, one reflects much more when... than when they just tell you to copy it, one reflects more when one is involved in preparing the rubric than if they give it to you ready for its application because it shows you how simple it is, that which you are going to assess, that is still pending" (7:81, 144:144)

Suggestions on rubric application: "you became aware that they did not learn precisely the abstract part of ethics because it was not included and resolved in this manner, and there will be a group that does have it and you are going to say why and what should I give them?" (6:66, 235:235)

Rubric validation: "for the purpose of grading and assessment of your classes, well there are more or less available, for research purposes validation and reliability would have more weight, without doubt" (7:38, 68:68)

Rubric visualization: "I see it as a way to systematize subjective assessments made by many teachers for many activities or for many products to assess students,…" (7:73, 192:192)

5. Conclusions

After completion of the qualitative analysis, the adequacy of the adopted phenomenographic perspective became
evident. From this perspective we were able to define codes and categories and organize them into various systems
and families. In addition, macro-categories were built that functioned as “theoretical” frameworks, represented by
means of comprehensive diagrams that helped to better understand the relationships among categories and dimensions
to analyze rubrics.
The qualitative analysis of the interviews with university faculty members, researchers and directors of the
educational centers and institutions in Mexico who were conducting innovative projects in competency assessment
allowed us to inquire into the conceptualizations they had regarding the design, preparation and application of rubrics.
Using diagrams similar to that shown in figure 1, it was possible to analyze the manner in which the faculty involved
in innovative projects conceive of competency assessment and the application of alternative assessment strategies.
We thus considered adequate a qualitative strategy for building systems of categories and subcategories with which
to critically analyze the conceptions (visualization), design and application of rubrics on the part of the Mexican
faculty members interviewed.
By building codes and families corresponding to the Rubrics and Competencies macro-categories, it was possible
to propose comprehensive diagrams that functioned as theoretical frameworks in the analysis of the manner in which
the faculty conceives of and acts upon its innovative teaching practice.

References

Baryla, E., Shelley, G. & Trainor, W. (2012). Transforming Rubrics Using Factor Analysis. Practical Assessment, Research & Evaluation, 17(4),
1-7. Available from http://pareonline.net/getvn.asp?v=17&n=4.
Blanco, A. (2008). Las rúbricas: Un instrumento útil para la evaluación de competencias. In L. Prieto (Ed.), La enseñanza universitaria centrada en
el aprendizaje (pp. 171-188). Barcelona: Octaedro.
Cebrián, M. (2014). Evaluación formativa con e-rúbrica: aproximación al estado del arte. Revista de Docencia Universitaria, 12(1), 15-2.
Chica, E. (2011). Una propuesta de evaluación para el trabajo en grupo mediante rúbrica. Escuela abierta: Revista de Investigación Educativa,14,
67-82.
Díaz-Barriga, F. & Barroso, C. (2014). Diseño y validación de una propuesta de evaluación auténtica de competencias en un programa de formación
de docentes de educación básica en México. Perspectiva Educacional. Formación de Profesores, 53(1), 36-56.
Eshun, E. F. & Osei-Poku, P. (2013). Design Students Perspectives on Assessment Rubric in Studio-Based Learning. Journal of University
Teaching and Learning Practice, 10(1), 1-8.
Ibarra, M.S. & Rodríguez-Gómez, G. (2010). Aproximación al discurso dominante sobre la evaluación del aprendizaje en la Universidad. Revista
de Educación, 351, 385-407.
Jonsson, A. & Svingby, G. (2007). The use of scoring rubrics: Reliability, validity and educational consequences. Educational Research Review,
2(2), 130-144.
Manríquez, L. (2012). ¿Evaluación en competencias? Estudios pedagógicos (Valdivia), 38(1), 353-366.
Panadero, E. & Jonsson, A. (2013). The use of scoring rubrics for formative assessment purposes revisited: A review. Educational Research Review,
9, 129–144.
Padilla, M.T. & Gil, J. (2008). La evaluación orientada al aprendizaje en la Educación Superior: condiciones y estrategias para su aplicación en la
docencia universitaria. Revista Española de Pedagogía, 66(24), 467-486.
Raposo, M. & Sarceda, M. C. (2010). El trabajo en las aulas con perspectiva europea: medios y recursos para el aprendizaje autónomo. Enseñanza
& Teaching, 28(2), 45-60.
Reddy, Y.M. & Andrade, H. (2010). A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35(4), 435-448.
Stevens, D.D. & Levi, A.J. (2005). Introduction to rubrics: An assessment tool to save time, convey effective feedback and promote student
learning. Sterling, Virginia: Stylus.
Tójar, J. C. (2006). Investigación cualitativa. Comprender y actuar. Madrid: La Muralla.
Villa, A. & Poblete, M. (2011). Evaluación de competencias genéricas: principios, oportunidades y limitaciones. Bordón, 63, 147-170.
Yániz, C. (2008). Las competencias en el currículo universitario: implicaciones para diseñar el aprendizaje y para la formación del profesorado.
Revista de Docencia Universitaria, 6(1), 45-64.
