
PROPOSALS FOR FINAL TABULATION & SELECTION

Summary of subcommittee meeting of Thursday 1/23/14



The following are presented as topics for discussion and/or proposals to be voted on in the
general committee meeting of February 7.

Scoring vs. Ranking

There was some concern that in the first screen, some screeners scored programs all 4s or all 1s. If we use rank instead of score, no individual scorer can exert disproportionate influence on the result. We also avoid the dilemma of two programs separated by only a few hundredths of a point.

How ranking would be implemented

There are a number of scenarios. Our favorite proposal: each member fills out a screen for each curriculum. Once Adam has collected all the screens and computed the averages, he converts each screener's four screens into a ranking.

Example: Committee member Q has scored the curricula as follows:

Program A - 2.6
Program B - 2.0
Program C - 2.9
Program D - 1.8

This is converted into the ranking:

Program C = 1
Program A = 2
Program B = 3
Program D = 4
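The conversion is mechanical; here is a minimal Python sketch using the example scores above (the variable names are illustrative only):

```python
# Committee member Q's average scores per program (from the example above).
scores = {"A": 2.6, "B": 2.0, "C": 2.9, "D": 1.8}

# Sort programs from highest to lowest score; the position in the
# sorted order (starting at 1) is the program's rank for this screener.
ordered = sorted(scores, key=scores.get, reverse=True)
ranking = {program: rank for rank, program in enumerate(ordered, start=1)}
# ranking is {"C": 1, "A": 2, "B": 3, "D": 4}, matching the example.
```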

Presentation to the committee

First, community input will be tabulated and a summary report produced for everyone to
review. This summary should be published ASAP, preferably before the general meeting.

At the general meeting Adam will present the rankings in a table, like so:


           Program A   Program B   Program C   Program D
Rank 1s        12           8           5           1
Rank 2s         5           7           2          11
Rank 3s         4           6          11           4
Rank 4s         4           8           6           7

Instant Runoff Voting (IRV)

In the event one program gets a simple majority of Rank 1s on the first round, that program is selected. In our committee of 28 members, 15 constitutes a simple majority.

In the more likely event that no single program garners a simple majority, we eliminate the program with the fewest Rank 1s. In the above example, that would be Program D. After eliminating D, it is removed from everyone's ranking altogether, effectively giving us a ranking of the top 3 programs. In particular, the one person who ranked D highest would see their second choice promoted to a Rank 1, their third choice to a Rank 2, and their fourth choice to a Rank 3. For those of you conversant with Excel, think of deleting all rankings of D and moving all data up one row.

In the event there is a tie for last place in Rank 1s, we look at Rank 4s for those two programs. Whichever program has the most Rank 4s is eliminated.

This process continues until one program garners a simple majority.
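The elimination loop described above can be sketched in Python. This is an illustration rather than the official tabulation procedure: it assumes every ballot is a list of all the programs ordered from most to least preferred, and the function name is made up:

```python
def run_irv(ballots):
    """Instant runoff: repeatedly drop the program with the fewest
    first choices (ties broken by the most last choices) until one
    program holds a simple majority of first choices."""
    remaining = set(ballots[0])  # assumes every ballot ranks the same programs
    while len(remaining) > 1:
        # Count current first and last choices among remaining programs.
        firsts = {p: 0 for p in remaining}
        lasts = {p: 0 for p in remaining}
        for ballot in ballots:
            live = [p for p in ballot if p in remaining]
            firsts[live[0]] += 1
            lasts[live[-1]] += 1
        majority = len(ballots) // 2 + 1  # e.g. 15 of 28 members
        leader = max(firsts, key=firsts.get)
        if firsts[leader] >= majority:
            return leader
        # Eliminate the fewest firsts; a tie goes to whichever has more lasts.
        loser = min(remaining, key=lambda p: (firsts[p], -lasts[p]))
        remaining.discard(loser)
    return remaining.pop()
```

With the committee's ballots collected into a list, `run_irv(ballots)` returns the surviving program.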




Raters complete the new screener (whether scoring or ranking)

Scoring/ranking proposal:

We will summarize each rater by their 1st, 2nd, 3rd, etc. overall choices, where the ranks are determined by the scores derived from their scoring sheet.

Example: for each curriculum, compute X = the sum of the core scores for the rater, Z = the sum of the ease-of-use scores for the rater, and Y = 0.6X + 0.4Z. Each rater gets one Y for each curriculum, and the order of the Ys determines the order of their preferences.
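As a concrete illustration of that weighting in Python (the score sums below are invented for the example, not taken from any rater's sheet):

```python
# Hypothetical sums of core (X) and ease-of-use (Z) scores for one rater.
core = {"A": 30.0, "B": 24.0, "C": 33.0, "D": 21.0}  # X per curriculum
ease = {"A": 20.0, "B": 22.0, "C": 18.0, "D": 16.0}  # Z per curriculum

# Combined score Y = 0.6*X + 0.4*Z for each curriculum.
combined = {p: 0.6 * core[p] + 0.4 * ease[p] for p in core}

# The rater's preference order: curricula sorted by Y, highest first.
preference = sorted(combined, key=combined.get, reverse=True)
```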

Properties: If we use ranks instead of scores, then no individual scorer can exert disproportionate influence on the result. A ranking system does not downgrade the influence of any individual scorer; it brings the influence of all scorers to the same level.

[Option 2: Same as above but where instead of scoring each criterion we rank, and combine
the ranks using the same procedure. Arguments against this are summarized below].

Process proposal: How we use this to come up with a final result.

* Meet to discuss community input, then reveal the rankings; hold an open discussion of the options, allowing the community and each other to sway each of us to revise our scores.

* Allow people to revise their screener (maybe allowing them to provide a sentence or two
about what it is that convinced them to change it).

* Retabulate the rankings and display the number of 1st choices per option.

* Eliminate the option with (a) the fewest first-place votes (ties broken by fewest second-place votes), or (b) the most last-place votes.

* Repeat until one option remains.

Conclusions
We are sure:
That each committee member will ultimately produce a ranking derived from their new scoring worksheets, and that this is how we will consolidate everyone's input (using something in the genus of IRV) and determine a top 1 or 2
Because we have a whole PowerPoint full of reasons
That we will run a first pass, meet to discuss (including the community), and revise our scores
Because discussion will make sure everyone is informed and provide an opportunity to incorporate community feedback.
We will all be permitted to revise our scores based on this ranking prior to a retabulation of the final ranks for each rater.

We think probably:
That each individual screener's ranking will be based on a total score output by our screener
Because this makes everyone put in the effort
Alternatively: each criterion is ranked by every screener
But we don't like this because it is a lot of work and even more tabulation, it opens up further decisions on how to add up ranks to achieve the final overall ranks, and it could also appear to outsiders that we are just picking our favorites rather than carefully considering the criteria.
That each individual screener will have the opportunity to subjectively break ties on their ranking
This is vulnerable to slimy dealings, but those are easy to detect
(And we are not going to deal slimily)
Any other way would be less neat
That after we run the IRV algorithm and come up with a top 2, we will meet to discuss and decide between these
Because it's only a few more hours and potentially useful with a binary choice
That we will not meet to discuss a third time
Because that would take a lot of time and would probably not change opinions much

Something in the genus of IRV:
It does not really matter exactly which variant.
Proposal: eliminate the program with the fewest top votes; as a tiebreaker, eliminate whichever has more last-place votes.
Retally and repeat.







Example of report:

          A    B    C    D
first     9    9    5    1
second    3    7    2   11
third     6    4   11    4
fourth    4    8    6    7


Rater              1    2    4    5    7    8    9   10   11   12   13   14   15   18   20   21   22   23   24   25
mymath.csv.core    3    1    1    1  1.0    4    3    4    3    1    2    4    3    3    3    2    2  2.5    4  2.5
env.csv.core       3    2    3    2  2.0    2    2    1    1    3    1    2    2    2    2    4    1  1.0    3  2.5
go.csv.core        3    3    4    3  3.5    1    1    3    4    2    4    3    1    1    4    1    4  2.5    2  4.0
mif.csv.core       1    4    2    4  3.5    3    4    2    2    4    3    1    4    4    1    3    3  4.0    1  1.0