
Measuring, evaluating, and monitoring health, health systems, health policies, and health equity: Why and how?

Chris Bullen and Karen Bissell
Measuring health

Vital signs of individual health:
• Body temperature
• Pulse rate
• Respiration rate
• Blood pressure

Vital signs of public/population health:
• Death and illness (mortality and morbidity)
• Life expectancy
• Hospitalisation
• Quality of life
• Other measures?
Why measure public health?

“If you can’t measure it, you can’t manage it.”
“We can only be sure to improve what we can actually measure.”

Key areas for measurement:
• Priority setting
• Evaluating the impact of policies
• Comparing different populations and regions
• Identifying health needs

[Figure: Dahlgren and Whitehead’s 1991 model of health determinants]

[Figure: The Health Impact Pyramid (Frieden, 2010)]
What do we - and should we - measure?

• National health quality and patient safety indicators that support improvement of health services in New Zealand, as articulated in the NZ Triple Aim Framework:
  • improved quality, safety and experience of care
  • improved health and equity for all populations
  • best value from public health system resources

https://www.hqsc.govt.nz/our-programmes/health-quality-evaluation/projects/quality-dashboards/dashboard-of-health-system-quality/

• PROMs: patient-reported outcome measures (see the introductory video “Introduction to patient-reported health outcome measures (PROMs)”)
NZ System Level Measures

The System Level Measures (SLMs) Framework aims to improve health outcomes for people by supporting DHBs to work in collaboration with health system partners (primary, community and hospital) using specific quality improvement measures. It provides a foundation for continuous quality improvement and system integration.

NZ System Level Measures:
• are outcomes focused
• are nationally defined
• require all parts of the health system to work together
• focus on children, youth and vulnerable populations
• connect to local clinically led quality improvement activities and contributory measures.
The Health Equity Assessment Tool (HEAT)

HEAT is a planning tool that improves the ability of mainstream health policies, programmes and services to promote health equity. HEAT enables health initiatives to be assessed for their current or future impact on health equity. The HEAT questions challenge users to think broadly about equity issues:

1. What inequalities exist in relation to the health issue under consideration?
2. Who is most advantaged and how?
3. How did the inequalities occur? What are the mechanisms by which the inequalities were created, maintained or increased?
4. Where/how will you intervene to tackle this issue?
5. How will you improve health outcomes and reduce health inequalities?
6. How could this intervention affect health inequalities?
7. Who will benefit most?
8. What might the unintended consequences be?
9. What will you do to make sure the intervention does reduce inequalities?
10. How will you know if inequalities have been reduced?

https://www.health.govt.nz/system/files/documents/publications/health-equity-assessment-tool-guide.pdf
What is evaluation?

Evaluation = e-value-ation = a systematic process of determining value.

“An evaluation needs to be able to say whether something is any good, or not, and why.”
(Kate McKegg and Syd King)

“Research seeks to prove. Evaluation seeks to improve.”
(Michael Quinn Patton)
Why do we evaluate?

• Is our programme working? Are there ways to improve it?
• What would success look like to you?
• Is this programme value for money?
• Was the project worthwhile? Should we scale it up?
Steps in an evaluation

Evaluation is different from “research” in:

o the role of stakeholders
o the focus on results being use-able and used.

Framework for Program Evaluation in Public Health (United States CDC)
Key Evaluation Questions

Overarching questions that structure your analysis of the “value” of the programme to the stakeholders:

1. What is this strategy, programme or project? Why was it needed, and who is it for? What resources have been used to create, implement or maintain it?
2. How well was the strategy, programme or project implemented and/or delivered?
3. How effectively did the strategy/programme/project realise key results and outcomes?
4. How valuable are the outcomes for participants, to the organisation, the community, the economy?
5. Overall, how worthwhile is or was the strategy, programme or project?
6. What are the learnings that can be applied going forward?

(Kinnect Group ‘Building Blocks’)
Rubrics

How do we define how good is “good” and how good is “excellent” when judging the value of a programme?

• Find out what key stakeholders value in the programme.
• Use existing research, evaluations and benchmarks.
→ Create rubrics. These provide a clear, transparent, shared understanding of what success looks like. They allow us to “apply” the values identified by the stakeholders.

A rubric sets out the criteria for each rating:

Rating      Criteria
Excellent
Very good
Average
Poor
Programme logic

Evaluating your programme (or intervention or project or policy…)

Before you can evaluate your programme (or intervention or project etc.), you need to know how and why the programme is supposed to work.

A “programme logic” or “logic model” explains how the programme is supposed to work. It shows the theory behind your programme: if these actions occur, then these changes will follow. It shows the logic between all the elements of the programme.

Making sense of evaluation: a handbook for the social sector, June 2017, NZ Social Policy Evaluation and Research Unit
Programme logic: outputs and outcomes

What is the difference between outputs and outcomes?

Other examples of outputs, and the outcomes they might lead to:
• Output: Affordable houses built → Outcome: ??
• Output: Stop smoking advice provided → Outcome: ??