Practicality and Efficiency

Overview

On top of planning, developing and organizing instruction and managing student conduct, teachers also have the task of assessing student learning. Time and resources present themselves as issues. For these reasons, classroom testing should be "teacher friendly": test development and grading should be practical and efficient. Practical means useful, that is, the assessment can be used to improve classroom instruction and for outcomes assessment purposes. It likewise pertains to judicious use of classroom time. Efficient, in this context, pertains to the development, administration and grading of assessment with the least waste of resources and effort.

In this chapter, we will cover factors on practicality and efficiency that contribute to high-quality assessments. These include teacher familiarity with the method; time required; complexity of administration; ease of scoring; ease of interpretation; and cost (McMillan, 2007). These factors would have to be considered and balanced with the principles discussed in the previous chapters.

Intended Learning Outcome (ILO)

At the end of Chapter 5, students are expected to:
• Describe how testing can be made more practical and efficient.

Familiarity with the Method

While it is best that teachers utilize various methods of assessment, it would be a waste of time and resources if the teacher is unfamiliar with the method of assessment. The assessment may not satisfactorily realize the learning outcomes. The teacher may commit mistakes in the construction or format of the assessment. Consequently, the results would not be credible, and any inference arising from the assessment results would not be valid or reliable. This may be a problem for some teachers who have been accustomed to pencil-and-paper tests. As pointed out in Chapter 3, some learning targets are best assessed using non-tests. A rule of thumb is that simple learning targets require simple assessment, while complex learning targets demand complex assessment (Nitko & Brookhart, 2011). From the foregoing, it is critical that teachers learn the strengths and weaknesses of each method of assessment, and how each is developed, administered and marked. You can see here how practicality and efficiency are intertwined with the other criteria of high-quality classroom assessment.

Time Required

It may be easier said than done, but a desirable assessment is short yet able to provide valid and reliable results. Considering that time is a commodity that many full-time teachers do not have, it is best that assessments are quick to develop, but not to the point of reckless construction. Assessments should allow students to respond readily but not hastily. Assessments should also be scored promptly but not without basis. It should be noted that time is a matter of choice - it hinges on the teacher's choice of assessment method. For instance, a multiple-choice test may take time to prepare, but it can be accomplished by students in a relatively short period and can be scored quickly. The principle of practicality and efficiency dictates that teachers utilize the method most appropriate for their learning targets. However, there are still occasions when essays may be better, as students are allowed to express their ideas with relatively few restrictions.
Essays, on the other hand, are time-consuming on the part of the teacher and his/her students: students need time to organize their thoughts and express them in writing, and the teacher needs considerable time to score the responses, which makes essays more manageable for testing small groups of students. Performance assessments likewise take a lot of time to prepare, administer and score, but they offer an opportunity to assess learning targets at different levels of performance. Care should be taken so that performance assessments do not take away too much instructional time.

After considering the time issue, let us now discuss the assessments' reusability. A multiple-choice test may take a substantial amount of time to construct, but once properly constructed it may again be used for different groups of students, as is done in large-scale, high-stakes testing. Reuse raises an issue that affects both validity and reliability; however, if the items were item-analyzed and the test itself was established to provide valid and reliable evidence of student performance, then many of the items can be used in succeeding terms or periods as long as the same learning outcomes are targeted. Thus, it is critical that objective tests are kept secure so that teachers do not have to prepare an entirely new set of test questions every time. Besides, much time and energy were spent in the test construction, and maintaining a database of excellent test items means that tests can be recycled and reused.

One suggestion to improve reliability is to lengthen the assessment: the longer it is, the higher the reliability. McMillan (2007) claims that for assessments to provide reliable results, it generally takes thirty to forty minutes of testing for a single score on a short unit. He added that more time is required if separate scores are needed for subskills. He shared a general rule of thumb that six to ten objective items are required when assessing a concept or specific skill.

Ease in Administration

Assessments should be easy to administer. To avoid questions during the test or performance task, instructions must be clear and complete. Instructions that are vague will confuse the students, and they will consequently provide incorrect responses. This may be a problem with performance assessments that contain long and complicated directions and procedures. Students' efforts are futile if directions are not expressed directly and explicitly. If assessment procedures, like those in science experiments, are too elaborate, reading and comprehending the procedures would consume time.

Ease of Scoring

Obviously, the selected-response format is the easiest to score compared to restricted- and extended-response essays. It is also difficult to score research papers, among others. For such assessments, it is more practical to make use of rubrics, and while this facilitates scoring, care must be observed in rating to ensure objectivity. McMillan (2007) suggests using rating scales rather than writing individualized evaluations.

Ease of Interpretation

Oftentimes, students are given a score that reflects their knowledge, skills or performance. However, this is meaningless if such a score is not interpreted. Objective tests are the easiest to interpret. By establishing a passing score, the teacher is able to determine right away if a student passed the test. By matching the score against some level of proficiency, teachers can determine if the student has reached mastery. For performance tasks, rubrics are prepared to objectively and expeditiously rate a performance or product. A rubric reduces the time spent in grading because teachers refer to the criteria and performance levels in the rubric without the need to write long comments.
The rubric, however, has to be shared with the students so that they understand the criteria and performance levels by which their work will be rated.

Cost

Classroom tests are relatively inexpensive compared to national or high-stakes tests. Hence, being unable to come up with valid and reliable tests is simply unreasonable. As for performance tasks, examples of tasks that are not considerably costly are written and oral reports, debates, and panel discussions. Of course, students would have to pay for their use of resources like computers and copying facilities, among others. However, these are not as costly as some performance assessments that require plenty of and/or expensive materials, examples of which are laboratory experiments, dioramas, paintings, role plays with elaborate costumes, documentary films and portfolios. Teachers may remedy this by asking students to consider using second-hand or recycled materials.

While classroom tests may not be costly compared to some performance assessments, it is relevant to know that excessive testing may just train students on how to take tests while inadequately preparing them for a productive life as an adult. It is paramount that other methods of assessment are used in line with the learning targets. According to Darling-Hammond & Adamson (2013), open-ended assessments - essay exams and performance tasks - are more expensive to score, but they can support more ambitious teaching and learning.

It is recommended that one chooses the most economical assessment. McMillan (2007, p. 88), in his explanation of economy, said that "economy should be thought of in the long run," and that relatively less expensive tests (or any assessment for that matter) may eventually cost more in the future. At the school level, multiple-choice tests are still very popular, especially in entrance and simulated board examinations. However, high-quality assessment, though more costly, is essential if students are to develop the skills they need for a knowledge-based economy (Darling-Hammond & Adamson, 2013). That being said, schools must support performance assessments.
