Evaluation Toolkit for Breastfeeding Programs and Projects

June 2012

3.3 Establishing purpose and key evaluation question(s)

Page last updated: 04 November 2013

Being clear about the purpose of the evaluation is essential. Is it to meet the needs of a funding organisation? Is it to provide evidence to advocate for an expansion of services to a funding body? Is it for your own purposes, to improve the work you do? Is it to find out why something isn’t working as well as you think it should? Is it for a combination of purposes?
The most important part of any research is defining the research question. The question(s) you ask will determine what you do, how you do it, and who you will tell about it.
Evaluation usually looks at one or more of four areas:

  • effectiveness – is our service making a difference?
  • appropriateness – are we providing the right service to meet the needs of service users?
  • efficiency – are we making the best use of our resources?
  • quality – are we providing the best service we can?
Good quality health care is based on seeing the health service from the user’s perspective (Berwick 2002). The Institute of Medicine in the United States has identified six domains of quality health care, based on the experience of the service user:

“Safe - avoiding injuries to patients from the care that is intended to help them.

Effective - providing services based on scientific knowledge to all who could benefit and refraining from providing services to those not likely to benefit (avoiding underuse and overuse, respectively).

Patient-centered - providing care that is respectful of and responsive to individual patient preferences, needs, and values and ensuring that patient values guide all clinical decisions.

Timely - reducing waits and sometimes harmful delays for both those who receive and those who give care.

Efficient - avoiding waste, including waste of equipment, supplies, ideas, and energy.

Equitable - providing care that does not vary in quality because of personal characteristics such as gender, ethnicity, geographic location, and socio-economic status.” (Institute of Medicine 2001:6,7).

A client focus is particularly relevant for breastfeeding services, given the intimate nature of breastfeeding, the complexity surrounding the initiation and continuation of breastfeeding for mother and baby, and the individual nature of each woman’s experience of breastfeeding.

3.3.1 Identifying the questions

Using these domains, here are some ideas for aspects of breastfeeding services which might usefully be evaluated:

Safety – what proportion of mothers report adverse experiences such as mastitis or nipple trauma? What proportion have absolute contra-indications to breastfeeding or risk factors that require extra guidance? Do staff have adequate training to assist families to make infant feeding decisions in the best interests of mother and child?

Effectiveness – did the service influence mothers’ breastfeeding decisions? How long do the women attending our service continue exclusive or any breastfeeding?

Patient-centredness – how satisfied are women with the service we provide? Did women feel that staff had listened to them and respected their individual circumstances and goals?

Timeliness – how easy is it for women to get an appointment when they need to speak with a midwife or lactation consultant?

Efficiency – what is our throughput? How long do women have to wait to be seen?

Equity – who might need our services but not be able to access them? Are we addressing the needs of the local population? Were specific priority groups well targeted?

Further evaluation questions will flow from the reasons you are commencing the evaluation. They will depend on its scope, and on whether the whole program or only part of it is to be evaluated.

Evaluation questions may include the following:
  • What processes are or were undertaken as part of the program (e.g. individual counselling, mothers’ groups, clinical support, distribution of written materials, staff training)? Did the program use any established protocols or pathways? Who is or was using the service, and were there any access problems?
  • Did these vary from those originally planned, and what were the reasons, if any, for the change?
  • What are the lessons learned as a result of the evaluation?
  • How could the program be improved (if it is ongoing) or what lessons could be applied to a new program?
  • What were the outcomes of the program?
Outcomes are such an important focus for evaluation that they are defined further below.

3.3.2 Defining outputs and outcomes

It may help to think of outcomes as the results of your activities or project. Results that appear early in a project are sometimes called ‘outputs’, while those that come at the end are called ‘outcomes’. Outputs are stages on the way to your final outcomes, so they are sometimes described as early achievements or early outcomes, but the word ‘outputs’ signals that they are milestones or activities intended to lead to a larger outcome: a tangible and measurable benefit such as improved information-sharing or reduced waiting times. Put another way, your efforts (‘inputs’) lead to products (‘outputs’), which in turn can lead to a result (‘outcome’). Outputs are practical items or systems, such as a new training manual or an improved reporting system; the outcome is something that happens as a result of the cumulative effect of the outputs. The program logic evaluation diagram at 3.5.1 shows the way in which inputs (activities) lead to outcomes over time.

For example, staff could create an online breastfeeding education module (output), leading to mothers receiving more consistent breastfeeding advice (short-term outcome), followed by increases in breastfeeding rates and satisfaction with the service received (medium- and long-term outcomes). You might then ask some additional questions about the outcomes:
  • Were these the intended outcomes of the program?
  • Were there any unintended outcomes?
  • What factors assisted or inhibited the achievement of outcomes?
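For readers who plan their evaluation in a spreadsheet or script, the inputs–outputs–outcomes chain in the example above can be sketched as a simple data structure. This is an illustrative sketch only; the names (`LogicModel`, `horizon`, etc.) are hypothetical and are not part of the toolkit:

```python
# Illustrative sketch: a minimal representation of a program logic chain
# (inputs -> outputs -> outcomes). All names here are hypothetical and
# are not drawn from the toolkit itself.
from dataclasses import dataclass, field


@dataclass
class LogicModel:
    inputs: list = field(default_factory=list)    # efforts and activities
    outputs: list = field(default_factory=list)   # practical products or systems
    outcomes: dict = field(default_factory=dict)  # keyed by time horizon

# The worked example from the text, expressed in this structure.
model = LogicModel(
    inputs=["staff time to develop education materials"],
    outputs=["online breastfeeding education module"],
    outcomes={
        "short term": "mothers receive more consistent breastfeeding advice",
        "medium/long term": "increased breastfeeding rates and satisfaction "
                            "with the service received",
    },
)

# Evaluation questions can then be framed against each element of the chain.
for horizon, outcome in model.outcomes.items():
    print(f"{horizon}: was this outcome achieved? ({outcome})")
```

Laying the chain out this way makes it easy to check that every input has at least one output, and every output a stated outcome, before the evaluation questions are finalised.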
Once the key purpose of the evaluation is agreed, and some of your broad evaluation questions are developed, you can begin to develop the evaluation methodology. Before you do, however, it is worth identifying your stakeholders, including service users and policy makers, so that the evaluation can take account of everyone who might have an interest in it or a perspective on it.