Treatment service users (TSU) project: phase two

Chapter 4: Evaluation aims and method

Page last updated: March 2011

4.1 Evaluation aims
4.2 Ethics
4.3 Developing the interview schedule
4.4 Capacity building
4.5 Sample
4.6 Recruitment
4.7 Interviews
4.8 Data analysis and evaluation framework

4.1 Evaluation aims

The research and evaluation component of the TSU Project: Phase Two aimed to evaluate the suitability and impact of consumer participation at the organisational level within various drug treatment settings, including pharmacotherapy, outpatient detoxification and residential rehabilitation programs.

The five services developed and implemented demonstration projects aimed at increasing levels of consumer participation at the organisational level. In-depth interviews were conducted with service providers and drug treatment consumers at each site prior to the pilot projects commencing and then again five to six months later. The two rounds of interviews were intended to provide the research team with an opportunity to assess whether the pilot projects led to changes in the level and quality of consumer participation at the organisational level, improved service delivery and improved communication between staff and consumers.

While the evaluation was intended to be exploratory and to focus on processes rather than outcomes, a model for assessing the demonstration projects across the five sites was developed, drawing on the available literature, advice from the Project Advisory Committee (PAC) and the findings of the TSU Project: Phase One. The model was used to develop the interview questions and provided a framework through which the data was interpreted.

The relatively short timeframe of the projects makes it difficult to explore their medium- and long-term impacts, and therefore the evaluation focuses primarily on readiness, implementation and short-term changes. Each demonstration project will be evaluated to determine the appropriateness and effectiveness of the consumer participation model, including:
  • Assessing the suitability of the model across a range of treatment contexts;

  • Identifying strengths and weaknesses of the model in a range of treatment contexts;

  • Identifying ways to improve how the model is practised or delivered;

  • Evaluating the impact of the model of consumer participation in a range of treatment contexts;

  • Determining changes in attitudes of consumers towards providers and vice versa;

  • Determining changes in communication between consumers and providers;

  • Determining changes in communication among consumers;

  • Determining the extent of capacity building in relation to the consumer participation model for consumers and providers (knowledge, skills, confidence);

  • Identifying other direct and indirect outcomes of the project as perceived by consumers and providers; and

  • Estimating monetary and time costs of implementing the model.

The findings will be used to refine the model of consumer participation to ensure that it is a framework suitable for use in a diverse range of drug treatment settings. Further, the findings from this research will contribute to the development of consumer participation policies that can enhance service delivery and health outcomes.

4.2 Ethics

As this was a national project and involved government and non-government drug treatment services, a number of different human research ethics committees were responsible for reviewing and approving the research component of the TSU Project: Phase Two. Ethical approval for the three non-government sites was granted by the Human Research Ethics Committee (HREC) of the University of New South Wales. The HREC of Melbourne Health granted approval for the government site in Victoria. South Eastern Sydney and Illawarra Area Health Service HREC-Northern Sector gave approval for the government site in New South Wales.

4.3 Developing the interview schedule

The primary method of data collection for the evaluation was in-depth interviews. An interview guide was drafted by the NCHSR research team in consultation with AIVL and the PAC. Additionally, the content of the interview guide was shaped by the available literature on consumer participation and by the model of consumer participation.

The interview guide was piloted with a project worker and consumer at NUAA in Newcastle, New South Wales. The primary purpose of the pilot interviews was to test the overall appropriateness of the content, structure and language of the guide and identify gaps or areas of interest that needed to be included. As a result of the pilot interviews, a number of new areas of investigation were added to the guide, including questions about power relations between staff and clients and the fit between clinical models of treatment and consumer participation.

The pilot interviews also identified some specific problems with language. For example, the consumer found the use of the word 'support' in the question 'What things do you think support consumer participation?' confusing, because in the drug treatment context 'support' is usually associated with counselling and therapy. The consumer suggested that the guide needed to use everyday language, for example: 'What things do you think could make consumer participation easier?' These comments were used to refine the interview guide and ensure that the language used was accessible and meaningful to both service providers and consumers.

4.4 Capacity building

The pilot interviews also provided an opportunity for the AIVL peer interviewer to gain some hands-on experience with in-depth interviewing prior to going into the field. The TSU Project: Phase Two aimed to develop the research literacy of the AIVL staff on the project to ensure that consumer participation was embedded in the research process. The AIVL project worker also attended a workshop on qualitative research and in-depth interviewing conducted by experienced qualitative researchers at NCHSR. The workshop was specifically designed for peer interviewers and covered in-depth interview techniques as well as providing an overview of qualitative research.

The NCHSR researchers also gained significant knowledge from AIVL about working with drug users, the experiences of people in drug treatment and conducting research within drug treatment services. This type of intensive collaboration with affected communities is central to the research practice of the NCHSR and is endorsed through the NCHSR's statement on community engagement.

Data analysis processes also provided an opportunity for AIVL staff to further develop their understanding of qualitative data management and analysis. This process is described below.

4.5 Sample

Approximately 10 people were recruited and interviewed at each site at both baseline and evaluation data collection. In order to include the perspectives and experiences of a range of staff and clients, the research team actively targeted four groups within the drug treatment services:
  • Key staff such as team leaders, nursing unit managers, executive officers, and staff members directly involved in consumer participation activities;
  • Key consumers directly involved in consumer participation activities such as consumer representatives;
  • Staff not directly involved in consumer participation activities; and
  • Consumers not directly involved in consumer participation.
However, it should be noted that in many cases the baseline interviews did not include consumers or staff directly involved in consumer participation, as the services did not have any consumer participation activities in place at baseline.

4.6 Recruitment

A key criterion of the selection process was the willingness of the services to facilitate and participate in the evaluation of the demonstration projects. While there was an expectation that key staff would facilitate the evaluation process, including assisting with recruitment, there was no expectation that individuals within each service, including key staff, should agree to be interviewed. Individual decisions to participate in the evaluation interviews were entirely voluntary. To ensure this was the case, different recruitment strategies were established for the four groups:
  • Key staff were sent an invitation letter by the research team;

  • Key staff were asked to approach key consumers and invite them to participate in an interview. Key consumers (e.g. consumer representatives) were integral to the implementation of the demonstration projects and therefore known to key staff. Key staff provided the key consumers with the contact details of the peer interviewer. Key consumers who were interested in doing the interview were able to contact the interviewer directly by phone or email, or on site when the research team visited each service;

  • Key staff were asked to distribute an invitation letter to all service staff. Staff were invited to contact the research team by phone or email or during site visits if they wished to participate in an interview;

  • Consumers who used the service but were not directly involved in consumer participation activities were made aware of the project via flyers displayed in the service. These consumers were able to contact the peer interviewer by phone or email. Alternatively, consumers could visit the service on days the peer interviewer was on site and arrange a mutually suitable time to conduct the interview. Consumers were paid $25 (AUD) for their time and/or travel expenses.
We recruited separately for both baseline and evaluation data collection as we anticipated there would be changes in staff and consumers during the life of the demonstration projects, thus making it difficult for consumers and staff to participate in both rounds. Further, not all staff and consumers would have the time or desire to participate in both baseline and evaluation. While we expected there would be some people who chose to participate in both, this was not a prerequisite for participation in the study. We asked people who participated in baseline if they wished to be considered for evaluation interviews. If they were interested, we took their details and re-contacted them prior to the second wave of data collection to ascertain their availability and willingness to participate.

4.7 Interviews

Interviews were conducted by an experienced researcher from NCHSR and a peer interviewer from AIVL. The staff interviews were conducted by the NCHSR interviewer and the consumer interviews were conducted by the peer interviewer in collaboration with the researcher from NCHSR. Interviews were digitally recorded and transcribed verbatim. Transcripts were de-identified, removing personal names, names of health-care workers, health services and specific towns or suburbs.

4.8 Data analysis and evaluation framework

The demonstration projects were evaluated using qualitative methods and drew on a realist evaluation framework (Pawson et al., 2005). Realist evaluation is a relatively new evaluation framework that acknowledges that effective programs and interventions depend on context and implementation as well as on the type or form of program or intervention being used (McEvoy, 2003; Pawson et al., 2005). Realist evaluation explores more process-orientated questions about services in order to explain what works about a particular program in a specific context. The rationale for using this approach was that it provided rich and detailed data about the development, implementation and impact of organisational consumer participation. The evaluation focused on the specific context of each service and project to explore what worked, how it worked, what did not work well and why. While the project included a diverse range of drug services, differing in location and type of treatment, the evaluation did not aim to explicitly compare the sites. Rather, the evaluation of individual sites was used to gather data to refine the model of consumer participation and inform the development of consumer participation policies that can be used across a diverse range of drug treatment contexts.

Baseline interviews were transcribed verbatim. Transcripts were checked for accuracy against the recordings and then de-identified (names and other identifying information removed). For a variety of reasons, evaluation interviews were delayed in most of the settings. These delays required the data management and analysis processes to be modified to ensure the timely completion of the project. Where possible, evaluation interviews were transcribed verbatim. Where this was not possible, a targeted transcription of the interviews was conducted. In these cases (typically interviews conducted with consumers), the AIVL worker listened to the recordings and transcribed only those sections of the interview that were directly relevant to the evaluation analysis. For example, sections of a recording in which the participant described their own drug treatment journey, with no bearing on previous or current consumer participation processes, were not transcribed.

Baseline interview data was analysed and the results presented in a draft report. The data was read closely by the NCHSR researcher and a number of themes were identified as relevant to the research questions. These themes (with supporting quotes) were examined in a workshop involving NCHSR and AIVL staff. Each aspect of the thematic analysis, that is the interpretations and meanings drawn from the interview data, was critically examined.

The themes identified from the baseline data were used to structure the analysis of the evaluation data. That is, data drawn from the four groups of participants was examined for patterns and shifts within each of these themes. A similar process occurred for the analysis of the evaluation data as for the baseline data: NCHSR and AIVL staff examined each of the themes in a workshop format and critically examined the interview material used to exemplify the meanings made of the data. Following the realist evaluation approach, the data was examined for findings within each drug treatment setting, with attention paid to the specific local demands and resources, and for general patterns across the settings. To facilitate this, a summary was prepared of the progress and extent of achievements of the demonstration project conducted at each site, including an analysis of the factors that facilitated or impeded the projects. This summary was drawn from both consumer and staff data.

Finally, a summary of the main findings was presented to the PAC, along with a series of questions for review and discussion in a workshop format, in order to inform the development of the key recommendations. A number of PAC members were unable to attend this final meeting, and therefore all members were offered an opportunity to provide written comments on the draft report and input into the key recommendations.