
Guest Commentary – Robert Harris

Full disclosure: I teach and coach. I worked in the private sector before becoming an educator. I have taught at or attended public, private, parochial, and charter schools in
Northern Michigan, Detroit, Cincinnati, St. Louis, and Colorado.
I am beyond frustrated with the Michigan Department of Education (MDE) and its
application of the School Accountability Scorecard. The disappointment is due to the irrational
construct and application of the ranking system. There is a need to educate the public to prevent
the spread of inaccurate narratives.
The MDE determines school proficiency based on scores from the Michigan Merit Exam
(MME). All 11th-grade students in the state of Michigan take the MME. Results are analyzed for
the whole test-taking student body and every subcategory of pupils in a school. There are 11 subcategories, including African American, English Language Learners, White, Bottom 30% of Students, Two or More Races, Students with Disabilities, and Hispanic.
Per the requirements of the ranking system, there are three main School Accountability
Scorecard performance areas. The first major area is participation. The requirement for this area
is 95% participation for the school as a whole and in each subgroup. Another main area is
proficiency in content for all subgroups and the school in its entirety. Lastly, there is an 80% graduation-rate requirement for the school overall and for all subgroups.
The construction of the School Accountability Scorecard utilizes two options for
participation. A green mark is given to a school or subcategory in which 95% or more of students take the test. A red mark accompanies less than 95% participation.
From a perspective of clarity, the red and green participation structure facilitates
measurement. Students take the test, or they do not. However, the problem with black and white
is an inability to measure varying hues. We are all familiar with the old saw regarding life’s
resemblance to shades more aligned with gray.
The primary concern with the binary structure of participation relates to the impact of this
component in the overall formulation of the School Accountability Scorecard. One red mark for
the school or any subcategory automatically shifts the final School Accountability Scorecard ranking to yellow at best. Thus, a school can hit every academic target, miss the 95% participation requirement by the slightest of margins, and still receive a yellow, or middle, ranking.
Another problem with the red and green participation structure relates to choice. Families
can opt out of the MME without penalty. However, the MDE makes no differentiation between an opt-out and an absence. There is an obvious tension between a school's limited ability to control attendance and the rights of families.
The issue of partial participation exists as well. The English portion of the MME test is
subdivided into reading and writing sections. Failure to participate in one of the English
subsections counts as no participation at all.
To put the concern in the context of an example, let us create a hypothetical Central High
School (CHS). CHS hits every proficiency benchmark as a school and in all eight of its subcategories. Furthermore, all applicable graduation requirements are achieved. At this juncture, many laypeople and experts could reasonably conclude that CHS is doing a fine job. Now let us hypothesize that one of the eight subcategories at CHS misses the 95% participation requirement by 3%. Per the logic of the MDE's School Accountability Scorecard, the school would receive a yellow ranking. In the context of the CHS example, the yellow ranking would be less than
representative of the quality of the school during the ranking cycle.
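
For readers who want the arithmetic spelled out, here is a minimal sketch, in Python, of the participation rule and the "yellow at best" behavior as I have described them. The subgroup names and percentages are hypothetical, and the function is an illustration of the logic, not the MDE's actual calculation.

# Illustrative sketch only: the 95% participation threshold and the
# "one red mark caps the ranking at yellow" rule described above.
PARTICIPATION_TARGET = 0.95

def participation_mark(rate):
    # Green if 95% or more of the group took the test, otherwise red.
    return "green" if rate >= PARTICIPATION_TARGET else "red"

def best_possible_ranking(participation_rates):
    # Assumes every proficiency and graduation target was met, as in the
    # hypothetical CHS; only participation varies here.
    marks = {group: participation_mark(rate) for group, rate in participation_rates.items()}
    return marks, ("yellow" if "red" in marks.values() else "green")

# Hypothetical CHS: one subgroup misses the 95% requirement by 3%.
chs = {"school overall": 0.98, "one subgroup": 0.92}
print(best_possible_ranking(chs))
# -> ({'school overall': 'green', 'one subgroup': 'red'}, 'yellow')

A single 92% line drags the entire school to yellow, no matter how strong every academic result is.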
Now that irrationalities in the MDE’s School Accountability Scorecard have been established, let us back up and inspect the subtleties of the data tool. Students derive no measured
academic benefit from taking the MME. No grades are given. At best, this creates a very
precarious incentive structure for students. Why should a student wrestle with a test that does not
help them academically? I am not sure how mature you were as a sixteen- or seventeen-year-old high school junior. Rest assured, I left something to be desired in the maturity department at that age.
There are inherent concerns regarding students and their motivation on the MME.
Given the imperfections of student incentives on the MME, basing a ranking system on a single assessment is poor design. It is poor practice to look at one event and
extrapolate the outcome as valid for the individual or organization as a whole. Relying on one
observed strikeout from Miguel Cabrera and concluding that he is a poor hitter is not representative of his performance. However, this is exactly the practice of the MDE in the
application of the School Accountability Scorecard.
The vast majority of educators I know do exceptional work in a difficult field. At a
minimum, one would hope that the MDE could provide a more rational assessment construct than
the School Accountability Scorecard, which is based on student performance from a single, non-incentivized
data point. The least we can do is allow a more accurate narrative regarding our schools to be put
forth for public consumption.