
Review of Healthy Maine Partnership (HMP) Selection Process and Scoring Matrix

4/26/2013

Introduction
This review was conducted to evaluate the integrity of the methodology and scoring system designed by the Maine CDC to assist in the selection of nine Lead Healthy Maine Partnerships (HMPs): eight community coalitions (one in each District) and one tribal coalition. A substantive reorganization and consolidation of the HMP program was prompted by a reduction in funding approved by the Legislature, from approximately $7.5 million to $4.7 million for the Healthy Maine Partnerships. The reorganized HMPs include both school and community settings. The goal of the reorganization was to reduce administrative overhead, duplication of work, and the administrative burden on the state (9 contracts versus 27). Another primary intent was to focus and prioritize limited resources on those health factors that put people at the most risk and would likely provide the greatest return on investment.

About This Review:


Multiple sources of information were used to gain an understanding of the methods and process used by the ME-CDC to evaluate and select the Lead HMPs, including:

- Face-to-face and telephone discussions with CDC Administrators responsible for oversight of the partnerships as well as the development and implementation of the HMP selection process. These individuals included Christine Zukus, Sharon Leahy-Lind, and Andrew Finch.
- Review of multiple documents and data sources, including:
  o HMP Performance and Scoring Matrix;
  o Summary of HMP Selection Process;
  o Survey Monkey data, including responses from Project Officers and District Liaisons.

Description of HMP Lead Selection Process


In order to select Lead Partners, all HMPs were assessed on the following core content areas:

1. Demonstrated ability to meet the expectations of the contract;
2. Efficient use of public resources;
3. Extent to which they demonstrated a collaborative partnership with the ME-CDC; and
4. Ongoing support and promotion of new and developing public health infrastructure.

The selection process and scoring methods were developed and shaped by a collaborative work group of ME-CDC Central Office Administrators, including Christine Zukus, Lisa Sockabasin, Sharon Leahy-Lind, Andrew Finch, and Debra Wigand. This work group developed, evaluated, and approved the content areas to be evaluated, the performance measures to be assessed, and the final scoring methodology used in the selection of lead HMPs. Based on the core content areas above, the following performance measures were used to evaluate the HMPs:

1. Operating Costs and Administrative Efficiency: This measure was designed to assess the proportion of overall contract costs attributed to general and administrative (G&A) services. Formula: Total G&A costs / (Total SFY 2012 contract costs less school health coordinator funding). Percentages were derived for each HMP. Based on these percentages, HMPs within each district were ranked, with coalitions with lower percentages of G&A receiving higher ranks.

2. Compliance with Salary Guidelines: This metric assessed the degree to which HMP salaries complied with the salary guidelines established for the HMPs (RFP 20101078), comparing the hourly salary rate for each SFY budget to the established salary guidelines. The measure was the percentage of salaries within each HMP that were within the guidelines. Ranks were assigned within each district; HMPs with higher percentages received higher ranks.

3. Support and Promotion of Developing Public Health Infrastructure: As described by the ME-CDC HMP Program Administrator, this measure was included to assess the degree to which each coalition demonstrated knowledge and leadership in supporting, promoting, and developing their local Public Health (PH) infrastructure. Ratings were completed by two CDC Program Administrators based on their knowledge of coalition activities and progress made in this area, as well as on District Liaison ratings on Questions 6 through 10 (see Attachment 1 for the District Liaison questions). Each HMP was rated on a 1-5 Likert scale, with higher scores reflecting greater support and promotion of local public health infrastructure.

4. Project Officer (PO) Ratings: A series of nine questions was designed to assess HMPs on the degree of collaboration and cooperation with the ME-CDC, capacity to serve the district, and efficient use of resources. Each item was rated on a 1-5 Likert scale, with 1 indicating low performance and 5 high performance. The data were collected via a conference call with the eight POs representing each District, facilitated by a ME-CDC program administrator. During the call, the facilitator asked each of the questions, and the POs were instructed to enter a rating in a Survey Monkey template that was distributed to them prior to the call (see Attachment 1 for the Project Officer questions). Given scheduling conflicts and time constraints on the call, it was necessary to conduct follow-up interviews with the POs who were unable to attend the call or stay for the entire interview. A total score for each HMP was derived by summing the ratings across the nine items; scores could range from a low of 9 to a high of 45. Based on their total scores, HMPs were ranked within each district, with higher ranks associated with higher scores on this measure.

5. District Liaison (DL) Ratings: Similar to the PO questions, a set of 14 questions was designed to assess the degree of collaboration and cooperation with the ME-CDC, level of support of public health infrastructure, and capacity to serve the district, based on the experiences of DLs in each of the eight districts (see Attachment 1 for the DL questions). Each question was rated on a 1-5 Likert scale and assessed the extent to which each HMP performed in each area, with 1 being low performance and 5 high performance. The data were collected via a conference call with the eight DLs, facilitated by a ME-CDC Program Administrator. During the call, the facilitator asked each of the questions, and the DLs were instructed to enter a rating in a Survey Monkey template that was distributed to them prior to the call. Given scheduling conflicts and time constraints on the call, data collection could not be completed in one call and required follow-up interviews with the DLs who were unable to attend the call or stay for the entire interview. A total score for each HMP was derived by summing the ratings across the 14 items; scores could range from a low of 14 to a high of 70. Scores for each HMP were ranked within each district, with higher ranks reflecting higher scores on this measure. Note: DL responses to Q12, Q13, and Q14, pertaining to the capacity to serve the district (see Attachment 1 for the DL questions), were never obtained in the Western District, and all questions in this content area (Q11 to Q14) were excluded from the scoring in that District only.

Scoring: A total score was derived for each HMP by summing the ranks/ratings assigned in each of the five measurement areas outlined above. Two of the measurement areas were weighted: #3) Support and Promotion of Infrastructure (consisting of one scale item) and #4) the Project Officer ratings. The weights assigned effectively doubled scores in both of these measurement areas. The rationale provided for assigning greater priority to these areas was the recognition of the CDC's ongoing investment in developing a district-level public health infrastructure, and the recognition that the Project Officers' level of experience and familiarity with the relative strengths and weaknesses of the HMPs was a key aspect of the assessment. In the case of a tie in aggregate scores between HMPs in a given district, a tie breaker was used. The tie breaker was the measure of completion of tobacco-related and physical activity and nutrition-related milestones, as reported by each grantee in the HMP KIT monitoring system. This score was the percentage of milestones completed, with the HMP completing the highest percent of its milestones receiving the highest score. The scoring procedures generated only one tie, in the Central District.
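The scoring procedure described above (within-district ranks for each measure, doubled weights on the infrastructure and PO measures, and selection of the highest total) can be sketched in a few lines of Python. The HMP names, ranks, and weight values below are entirely hypothetical, for illustration only; the actual figures come from the ME-CDC scoring matrix.

```python
# Hypothetical within-district ranks for three HMPs in one district.
# Higher rank = better performance on that measure.
ranks = {
    "HMP-A": {"ga_efficiency": 3, "salary_compliance": 2,
              "infrastructure": 3, "po_rating": 3, "dl_rating": 2},
    "HMP-B": {"ga_efficiency": 2, "salary_compliance": 3,
              "infrastructure": 1, "po_rating": 2, "dl_rating": 3},
    "HMP-C": {"ga_efficiency": 1, "salary_compliance": 1,
              "infrastructure": 2, "po_rating": 1, "dl_rating": 1},
}

# Measures #3 (infrastructure) and #4 (PO ratings) were weighted so
# that their contribution to the total was effectively doubled.
weights = {"infrastructure": 2, "po_rating": 2}

def total_score(measure_ranks):
    """Sum the ranks, applying the doubled weights where assigned."""
    return sum(rank * weights.get(measure, 1)
               for measure, rank in measure_ranks.items())

scores = {hmp: total_score(m) for hmp, m in ranks.items()}

# The highest total in the district is selected as the lead HMP;
# per the process above, milestone completion (HMP KIT) would be
# used as a tie breaker if two totals were equal.
lead = max(scores, key=scores.get)
print(scores, lead)  # -> {'HMP-A': 19, 'HMP-B': 14, 'HMP-C': 9} HMP-A
```

Note how the doubled weights let the two favored measures contribute up to twice as much to the total as any other measure, which is the mechanism the Review Findings below examine.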

Review Findings:
Strengths:

- A collaborative work team was established to identify the scoring content areas, performance measures, and scoring methodology used to assess and select the nine lead HMPs.
- HMPs were assessed on a number of attributes using multiple measures, including both objective cost and operational efficiency measures and more experiential rating measures.
- The assessment involved input from multiple individuals, including CDC Program Administrators, eight District Liaisons, and eight Project Officers, all of whom had extensive knowledge of and experience with the operations and activities of the HMPs.
- Multiple questions were developed to rate HMP performance in those content areas that were determined to be most important in evaluating the performance of lead coalitions (see above). Ratings were based on 5-point Likert scales and then aggregated (summed) to obtain an overall score based on the experiential ratings obtained from POs and DLs. Note: aggregate measures, when constructed appropriately, generally provide a more stable and reliable assessment of an attribute than ratings based on a single item.
- A Survey Monkey methodology was conducted via conference call discussion with both POs and DLs. Only question numbers and response options were included in the Survey Monkey template provided to PO and DL respondents, and all questions were asked by program administrators on separate conference calls with DLs and POs. This approach ensured that PO and DL ratings were completed independently by each rater and were as unbiased as possible.

- A review of the raw ratings obtained directly from the Survey Monkey data indicated that all PO and DL scores used in the Scoring Matrix were accurately aggregated from the raw data.

Weaknesses:

- The scoring procedures overemphasized experiential (more subjective) ratings obtained from DHHS employees, including Program Administrators, Project Officers, and District Liaisons, rather than more objective and robust measures of coalition leadership and performance.
- The overall importance attributed to these elements was further exaggerated by the weights applied to the ratings of HMP Support and Promotion of Developing Infrastructure and the PO ratings, which doubled these scores. The rating of Support and Promotion of Developing Infrastructure alone accounted for between 27% and 53% of the overall score, and in every district the highest-scoring HMP on this single rating was selected for funding. It can be argued that the decision to apply weights to these specific measures excessively skewed the scoring to favor elements related to the degree of cooperation and collaboration with the ME-CDC over other potentially important indicators more directly related to coalition leadership and performance, such as Support of Public Health Infrastructure and Capacity to Serve the District.
- The rating of Support and Promotion of Developing Infrastructure consisted of a single item rated on a 1-to-5 scale by two CDC Program Administrators. The concept was not defined sufficiently to support consistent and reliable measurement: specific HMP attributes or activities associated with lower or higher ratings were not clearly defined. This lack of definitional clarity contributes to the overall subjectivity of this scoring domain. While it was explained that DL Q6 to Q10 (Support of Public Health Infrastructure) were reviewed and considered in assigning these ratings, it would have been clearer and more defensible to create a separate scale using the responses to DL Questions 6-10 and either eliminate the program administrator rating completely or use it only in the case of a tie.
- The scoring procedures were overly complex and convoluted by the use of aggregate scores, rankings, weighting of specific areas, and extra measures used as tie breakers. In each District, HMPs were rank ordered in each measurement domain, with higher ranks denoting higher performance in that area. Weights were assigned to two measurement areas (i.e., Support and Promotion of Developing Infrastructure and the PO ratings), and the ranks were summed to obtain an overall score for each HMP. The HMP with the highest score in each district was then selected as a lead. The decision to use ranks rather than raw scores to derive a total score for each HMP restricted the degree of potential variability between HMP scores, making it harder to clearly distinguish differences between coalitions. An alternate scoring method would have been to simply calculate a total score for each HMP by summing the scores across all measurement areas, using total scores rather than rankings for the PO and DL measures. This scoring would likely have eliminated the need for applying weights and potentially the use of tie breakers.
- In reviewing the raw Survey Monkey data containing the PO and DL ratings, it was discovered that DL responses to Q12 through Q14 were missing for the Western District and that the entire group of questions (Q11 through Q14) was excluded from the scoring of the HMPs in this District. This missing data means that HMPs in this District were evaluated on different criteria than HMPs in all of the other Districts, since all of the others did respond to Q12 to Q14. This raises serious concerns about the overall integrity and fairness of the rating process. It can be argued that if these questions had been included in the overall scoring, they may have influenced the results of the selection process in this District.
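The concern that ranking restricts variability can be illustrated with a small, hypothetical sketch: two HMPs separated by a wide margin on the raw PO rating total (which ranges from 9 to 45) end up separated by only a single rank once raw totals are converted to within-district ranks. The names and values below are illustrative assumptions, not actual data from the review.

```python
# Hypothetical PO rating totals (possible range: 9-45) for two HMPs
# in one district; the values are illustrative only.
po_totals = {"HMP-A": 41, "HMP-B": 23}

# Convert raw totals to within-district ranks (1 = lowest total).
ordered = sorted(po_totals, key=po_totals.get)
ranks = {hmp: position + 1 for position, hmp in enumerate(ordered)}

# An 18-point raw-score gap collapses to a single-rank gap, discarding
# the information about how far apart the two coalitions actually were.
raw_gap = po_totals["HMP-A"] - po_totals["HMP-B"]
rank_gap = ranks["HMP-A"] - ranks["HMP-B"]
print(raw_gap, rank_gap)  # -> 18 1
```

Summing raw totals instead of ranks, as the alternate method above suggests, would have preserved this spread in the final scores.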

Conclusions
The results of this review revealed that the process established to identify Lead HMPs had a number of shortcomings that, taken together, cast doubt on the overall integrity and credibility of the scoring system and the resulting selection process. While no evidence was identified to suggest that any scores were intentionally manipulated to influence the outcome of the selection process, the review identified the following concerns:

- The scoring methodology lacked robust and objective performance measures and relied on more subjective experiential ratings;
- Weighting of selected measures, including Support and Promotion of Infrastructure and the PO ratings, gave greater importance in the scoring to cooperation and collaboration with the ME-CDC, and consequently lesser importance to other measures more directly related to a coalition's capacity to serve the district, its leadership, and the knowledge and experience needed to effectively support the development of the local PH infrastructure;
- Inconsistencies in the application of the complete scoring criteria across Districts, resulting from the failure to ask all of the DL questions in one District.

These shortcomings substantially increase the potential for bias in the selection process and raise difficult questions about its overall integrity and credibility.

Attachment 1: Questions Asked of Project Officers and District Liaisons


Each coalition was rated on a scale of 1-5, with 1 being the least and 5 indicating the most.

Questions Asked of Project Officers

Collaboration with MCDC
1. Degree of cooperation with Maine CDC
2. Willingness and ability to follow Maine CDC guidance and direction
3. Openness to technical assistance from Project Officer
4. Facilitates engagement between coalition board and project officer
5. Staff of the HMP conduct themselves professionally

Capacity to Serve the District
6. Degree to which addressing health disparities is a priority
7. Degree to which the HMP has served their entire service area

Efficient Use of Resources
8. Effectiveness at implementing their work plans within the parameters given by Maine CDC
9. History of engaging capable partners in HMP service area

Questions Asked of District Liaisons

Collaboration with MCDC
1. Degree of cooperation with Maine CDC
2. Willingness and ability to follow Maine CDC leadership and direction
3. Engages district liaison in professional and collegial manner
4. Facilitates engagement between coalition board and district liaison
5. Staff of the HMP conduct themselves professionally

Support of Public Health Infrastructure
6. Rate the understanding of the HMP regarding their role in the public health infrastructure
7. Degree to which the HMP has been positively involved in developing or supporting development of the public health infrastructure
8. Rate the contribution of the HMP to the development of the public health infrastructure
9. Degree of positive engagement in DCC and DCC activities
10. Rate the degree of flexibility of the HMP in allowing other public health entities to take a lead role in DCC and the public health infrastructure

Capacity to Serve the District
11. Degree to which addressing health disparities is a priority
12. Completeness and integrity of MAPP implementation
13. Degree of achievement of intent of Core Competencies
14. Formation and effective functioning (independent of paid staff) of a governance or advisory board

Additional Questions Raised by the HMP Scoring Process

- Why were three questions left unanswered by the District Liaison in the Western District?
- What was the rationale for deriving total scores by aggregating all DL and PO responses rather than creating separate scales for each of the content areas in the DL and PO surveys?
- What was the rationale for using overlapping but separate sets of questions for DLs and POs rather than keeping the content the same and grouping DL and PO responses by content area?
- Why were HMPs ranked on each measure and the rankings summed to derive a total score, rather than summing all ratings and scores and then ranking each HMP based on a total score?
- Since performance measures were available from the HMP KIT Monitoring System, why were they not used as part of the selection process other than as a tie breaker?
- Why was it deemed necessary to have a separate rating scale for support and promotion of infrastructure rated by Program Administrators when there were already five questions on this content area rated by DLs? Why were these questions not included in the PO interview?
- It seems that using the percentage of costs associated with G&A might unfairly penalize smaller HMPs.
