Evaluation Toolkit for Breastfeeding Programs and Projects

June 2012

3.5 Evaluation frameworks

There are a number of frameworks or models which can help people to plan an evaluation. Two of the most common evaluation frameworks are discussed below, plus a third, different model for exploring continuous quality improvement. We have included the latter because it can provide a starting point for teams who aren’t ready to do a full evaluation but would like to start a more formal quality analysis process. We have provided templates for each of these in Appendices B, C, and D at the end of this toolkit.

3.5.1 Program logic evaluation framework

Program logic theory is one approach which encourages stakeholders to develop a common understanding of how a program is intended to operate to achieve its objectives. In essence, a program logic is a linear model which draws a clear line from the need the program is seeking to meet, through the activities undertaken to address that need, to the intended outcomes. This may be depicted as follows:

Diagram: program logic model, showing the progression from the need the program addresses, through its activities, to its intended outcomes.


The program logic model seeks to identify the assumptions or evidence behind the program, its functions, aims and activities. So in the diagram above, the logic model may describe what people think ought to happen. An evaluation based on a program logic model can assess whether the intended outcome(s) actually did happen.

The program logic model is a generic one, which can be used to inform program planning. An evaluation framework built on a program logic model will start with the components of the logic model identified above, then work through each step of the model to identify whether the assumptions on which the program was built were sound, whether the intended consequences of each logical step occurred and, if not, why not.
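If your team is comfortable with structured tools, the logic chain itself can also be captured in a simple data structure, so that each step and the assumptions behind it can be reviewed systematically. The sketch below (in Python) is purely illustrative: the step names, assumptions and questions are invented examples for a hypothetical mother and baby clinic (discussed further in the example below), not part of any prescribed template.

from dataclasses import dataclass, field

@dataclass
class LogicStep:
    """One step in a program logic chain, with the assumptions behind it."""
    name: str
    description: str
    assumptions: list[str] = field(default_factory=list)  # why this step should lead to the next
    questions: list[str] = field(default_factory=list)    # broad evaluation questions for this step

# A hypothetical program logic for a mother and baby clinic (invented examples).
program_logic = [
    LogicStep(
        name="Activities",
        description="Individual breastfeeding assessment and support; training for student nurses",
        assumptions=["Trained staff give more consistent, evidence-based advice"],
        questions=["How well were these activities undertaken?", "What barriers were experienced?"],
    ),
    LogicStep(
        name="Short-term outcomes",
        description="More staff trained in breastfeeding support",
        assumptions=["Training translates into changed day-to-day practice"],
        questions=["Were the intended short-term outcomes achieved? If not, why not?"],
    ),
]

# Working through each step of the model, as the evaluation framework requires.
for step in program_logic:
    print(step.name, "-", step.description)
    for q in step.questions:
        print("  Question:", q)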

A program logic evaluation framework sets this out as a matrix, which will look something like the following:


The matrix has five columns: Program logic; Examples of broad evaluation questions; Indicators; Data sources; and Timing (this last column is sometimes used for the analysis method instead). Sample evaluation questions for each row of the program logic column are:

Activities: How well were these activities undertaken? What worked? What barriers were experienced? How were these overcome?

Short-term outcomes: Were the intended short-term outcomes achieved? If not, why not?

Medium-term outcomes: Were the intended medium-term outcomes reached? If not, why not?

Long-term outcomes: What evidence is available that the long-term outcomes will or can be reached? What has been learned about achieving these outcomes? What could be improved?
For example, your program might be a mother and baby clinic, with a number of related inputs and activities such as promotion and referral from other services. The activities might include providing individual assessment and support to mums, training for student nurses, and antenatal breastfeeding classes. A short-term outcome might be an increase in the number of staff trained in breastfeeding support. Medium-term outcomes might include an increase in the number of mothers receiving consistent, evidence-based breastfeeding advice and support from the clinic. A long-term outcome might be an increase in breastfeeding rates in the catchment area of the clinic.
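To make the indicators concrete: if the clinic kept simple electronic records, the example outcomes above could be counted in a few lines. The sketch below (Python) uses invented field names and figures purely for illustration; real indicators would come from your actual service records and local data.

# Invented records for the hypothetical clinic example above.
staff = [
    {"name": "A", "trained_in_bf_support": True},
    {"name": "B", "trained_in_bf_support": True},
    {"name": "C", "trained_in_bf_support": False},
]
mothers = [
    {"id": 1, "breastfeeding_at_3_months": True},
    {"id": 2, "breastfeeding_at_3_months": False},
    {"id": 3, "breastfeeding_at_3_months": True},
    {"id": 4, "breastfeeding_at_3_months": True},
]

# Short-term outcome indicator: number of staff trained in breastfeeding support.
staff_trained = sum(s["trained_in_bf_support"] for s in staff)

# Long-term outcome indicator: continuation rate among service users, which could be
# compared with baseline breastfeeding rates for the clinic's catchment area.
continuation_rate = sum(m["breastfeeding_at_3_months"] for m in mothers) / len(mothers)

print(f"Staff trained: {staff_trained}")
print(f"Still breastfeeding at 3 months: {continuation_rate:.0%}")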

Further guidance to help you through the steps of designing a program logic evaluation framework for your evaluation is included in Appendix B.

The advantage of the program logic model is that it takes a linear, long-term approach to the assessment of a program. This makes it easier to apply to a program which has been established for a long time or which is funded for an extended period, where the needs are clearly defined and the objectives have been clearly stated.

The program logic model may not be as useful when a program has grown irregularly over time, or when a linear progression from need to long-term outcome cannot be traced. However, even if your program’s objectives have never been written down before, it may still be helpful to use the ‘logic-based evaluation framework’ template at Appendix B to identify and clarify the relationships between your program’s activities, outputs and outcomes.

As well as reviewing documents associated with your program, you can draw on policies such as the Australian National Breastfeeding Strategy 2010-2015, and on policy or strategic framework documents from your state or territory, to identify how the activities and outcomes of your program align with jurisdictional and national goals and objectives. The goals of the Australian National Breastfeeding Strategy 2010-2015 are listed in Appendix A of this Toolkit.

3.5.2 RE-AIM evaluation framework

Another form of evaluation framework is called RE-AIM, which is an acronym for:

Reach
Effectiveness
Adoption
Implementation
Maintenance

The RE-AIM framework was developed to assess the impact of public health interventions (Glasgow et al 1999), and is used widely within the health sector [1]. The first two components, Reach and Effectiveness, can be considered ‘user-focussed’: has the program reached the people who could benefit from it? Is the program effective in addressing the identified health priority? The last three components, Adoption, Implementation and Maintenance, can be considered ‘organisation-focussed’: has the program been adopted within the broader health service? How effectively has the program been implemented? How well is the program being maintained so that it continues to achieve its objectives?

An evaluation framework based on the RE-AIM model might look something like the following.

Each row of the framework covers one RE-AIM component, with sample broad evaluation questions, sample indicators and sample data sources:

Reach
Sample questions: Who is using the service? How often? What services are women accessing? Is the service targeting the needs of women in the local area? Are there eligibility criteria and, if so, what are they and are they appropriate? Which particular ‘ages and stages’ of the breastfeeding experience are being targeted?
Sample indicators: Number of women accessing the service, compared to expected local population need; demographic characteristics, compared to the local population.
Sample data sources: Service records; local demographics; interviews with local women.

Effectiveness
Sample questions: How effective is the service in supporting women to breastfeed?
Sample indicators: Number of service users who continue to breastfeed at points in time.
Sample data sources: Service records; survey of service users.

Adoption
Sample questions: How is the service linked to other relevant services? How well do services collaborate or cross-refer? To what extent is the service fully embedded in the larger organisation?
Sample indicators: Number and type of collaborative agreements; evidence of cross-referral or collaboration; perceptions of stakeholders.
Sample data sources: Organisational records; service documentation or clinical referral records; interviews with key stakeholders.

Implementation
Sample questions: How was the service implemented? What were the barriers or enablers in establishing the service?
Sample indicators: Documentation regarding implementation; perceptions of stakeholders.
Sample data sources: Organisational records; interviews with key stakeholders.

Maintenance
Sample questions: How is the service funded? How sustainable is the funding? How is the service governed? What challenges face the service in meeting future community needs?
Sample indicators: Documentation regarding maintenance; perceptions of stakeholders.
Sample data sources: Organisational records; interviews with key stakeholders.
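Where service records are held electronically, some of the sample indicators above lend themselves to simple calculation. The following sketch (Python) shows how the Reach and Effectiveness indicators might be computed; the record format and all figures are invented for illustration and are not part of the RE-AIM framework itself.

# Invented figures for illustration only.
expected_local_need = 400   # e.g. expected births in the catchment, from local demographics
service_users = [
    # (user id, still breastfeeding at the 6-week follow-up?)
    (1, True), (2, True), (3, False), (4, True), (5, False),
]

# Reach: how many of the people who could benefit are actually using the service?
reach = len(service_users) / expected_local_need

# Effectiveness: proportion of service users who continue to breastfeed at a point in time.
effectiveness = sum(1 for _, bf in service_users if bf) / len(service_users)

print(f"Reach: {reach:.1%} of expected local need")
print(f"Effectiveness: {effectiveness:.0%} still breastfeeding at 6 weeks")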


The RE-AIM framework is useful when a ‘point in time’ assessment is sought regarding the effectiveness of a particular intervention. It is also useful because it focuses not just on the intervention but on the organisational structures and processes which support the intervention.

Further guidance to help you through the steps of a RE-AIM evaluation framework is included in Appendix C.

3.5.3 Continuous quality improvement framework

A final approach which could also be considered, particularly when services are just getting started in evaluation, is the continuous quality improvement (CQI) model. This is not an evaluation framework, but a model for active monitoring and improvement over time. It could also be called a form of ‘action research’, in that it encourages staff participation in learning from, and improving, their own practice. The CQI model was particularly popular in health services in the 1980s and 1990s, and is based on industrial processes for improving efficiency and effectiveness pioneered by W. Edwards Deming.

The CQI cycle is a very simple cycle of four steps which, when followed by a team over a period of time, can lead to substantial improvements in working practices. The four steps are: Plan; Do; Check; Act. The strength of this model is that it forms a continuous feedback loop, so that people can be continually assessing and learning from what they do. This is illustrated in the diagram below.

Diagram of the Continuous Quality Improvement cycle:
Plan: defining the problem, designing methodologies and data collection processes, gaining ownership.
Do: collecting data, analysing data, monitoring processes.
Check: taking time to reflect on the findings, the lessons that can be learned, and the changes that might improve the current situation.
Act: implementing improvements, maintaining high performance.
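Because the four steps repeat, the cycle behaves like a loop. For teams who find pseudocode helpful, the sketch below (Python) mirrors the Plan-Do-Check-Act structure; the function bodies and the example problem are placeholders we have invented, not a prescribed procedure.

def plan():
    """Plan: define the problem, design data collection, gain ownership."""
    return "Is breastfeeding advice consistent across shifts?"   # hypothetical focus

def do(question):
    """Do: collect and analyse data, monitor processes."""
    return ["Advice varies between night and day staff"]         # hypothetical findings

def check(findings):
    """Check: reflect on findings and identify possible improvements."""
    return ["Adopt a single written advice protocol"]            # hypothetical changes

def act(improvements):
    """Act: implement improvements and maintain performance."""
    for change in improvements:
        print("Implementing:", change)

# Two turns of the cycle; in practice the loop continues indefinitely.
for _ in range(2):
    act(check(do(plan())))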


This process is particularly suited to involving a team in a collective self-improvement exercise. Al-Assaf & Schmele (1997:64) point out that quality improvement cycles are particularly helpful to: 1) find out more about the service users and their perspective, and 2) find out more about the ways in which health professionals work.

So, for instance, a client survey can be used to identify ways of improving the service. A client survey may also be used in an evaluation, but the difference is that in CQI, action might be taken immediately in response to what people said (‘that’s a good idea – let’s try it and see if it works’), whereas in an evaluation the survey would probably feed into a larger process, and the findings would be considered in the context of other evaluation activities before any decisions were made to introduce changes.

The CQI model is different from an evaluation framework because the people involved are seeking to improve and change what they do as they go: they are learning while doing. The key difference is the fourth step, where action is taken immediately; in an evaluation, action is taken only after the evaluation is completed.

The advantage of using the CQI model is that it can be easily implemented by team members with few resources, has a practical focus on service improvement, and can be embedded into a daily organisational routine. Its disadvantage is that it does not have the objectivity of an independent evaluation, and may lack the capacity to deal with substantive topics without an additional investment of time and funding.

Further guidance to help you through the steps of the CQI cycle is included in Appendix D.

1. The RE-AIM website provides a useful discussion of the RE-AIM framework.