Program Evaluation Starts at the Beginning – Not at the End

In the March 2019 residential block of the Health Leadership and Management Program at the Solomon Islands National University, we worked from the notes prepared by Emeritus Professor Arie Rotem.

Program evaluation starts at the beginning – not at the end.

Previously, we implemented health programs and only then thought about how to evaluate them, which we usually did quite poorly, relying on indicators that we had not properly thought through.

When we prepare for an intervention, we need to think deeply about how to evaluate it before we even start. We need to find the indicators that will inform us about the progress and outcomes of our work.

In small groups we studied a generic ‘Melanesian Scenario’ and identified a problem of insufficient coverage of antenatal care (ANC) services. We drew a cause-and-effect map showing all the variables that influenced mothers’ access to ANC services and the relationships among these factors. We identified the stakeholders and their major concerns, their degree of interest in ANC coverage, and their power to make a difference.

We then considered the questions on which to focus the evaluation, and the best entry points for the intervention from the stakeholders’ perspectives. We practiced developing good evaluation questions about the methods, feasibility, outcomes, and impact of alternative interventions, identifying the assumptions we would make and the risks that incorrect assumptions pose when they are based on our own perspectives rather than those of the stakeholders.

We identified short-term and long-term indicators to keep the intervention on track towards the objectives set out in the evaluation questions; a short-term indicator might be the monthly number of first ANC visits, for example, while a long-term indicator might be the proportion of mothers completing four or more visits. We considered the sources of information and the most appropriate methods for collecting data, according to the capacities and understandings of the stakeholders, and how we would report the outcomes to them. We listed the resources needed to implement the intervention, knowing that if we don’t identify them in our Annual Operational Plans & Budgets they won’t be funded.

All of this is needed before we start.

How different this is from the way we operated in the past, when we had only a broad idea of the objectives, used indicators handed down to us, and engaged very little with key stakeholders.

We did what we thought was ‘our job’, not what the community needed.

This difference typifies the problems clinicians face when they are given leadership and management responsibilities for which they are not prepared. The tendency has been to passively accept the tasks and indicators given to us by program designers, without thinking through their application in the contexts and communities in which we work.

This approach of thinking through evaluation at the start of an intervention – not at the end – gives us a way to be more active leaders and managers: it involves our teams in much deeper thinking in the early stages about what we are trying to achieve, and focuses our work and interventions on measurable and meaningful outcomes.


Vahimolo, N. Pego, M. Ifumae, C. Qalo, R. Maegerae, J. Harara, M. Gavira, B. Kiokimo, H. Orihao, A. Simbe, E. Thoa, R. Lebo, J. Denty, E. Titiulu, L. Irobaea, L. Bunabo, Graham Roberts
