One of my interests is the way program evaluation methodologies reflect and create assumptions about the social change process, and how this in turn affects program planning and implementation. The evaluation methods most often employed in development communication interventions rely on outcome metrics that tend to be short term and focused on individual-level change, and therefore do not accurately capture the process of social change at collective and community levels. Clearly, for projects such as Mobile Voices that aim to affect community-level processes, alternative evaluation strategies are needed.
As part of the preparation for my quals and future research, I am compiling my thoughts on the limitations of conventional techniques (such as Knowledge-Attitude-Practice studies or outcome assessments based on Logical Framework Analysis). I also want to survey and critique existing alternative evaluation methodologies, and I would like to use Mobile Voices as a lens through which to assess these methods.
What I am working on now is a summary of the field of alternative evaluation and participatory methods that will describe each methodology, its pros and cons, the kinds of projects for which it is most appropriate, and its potential applications to Mobile Voices. I aim to have a very initial version of this completed before I leave for break, and I intend it to be a document I expand and build on throughout the beginning of next semester, adding more methodologies as well as more detail on the ones already included. I have two ideas for potential outputs from this:

1. A scholarly paper on alternative and participatory evaluation methods using Mobile Voices as a case study.
2. An evaluation methodology drawing from a mix of different alternative evaluation techniques that we feel best suits the Mobile Voices project and that can be included as part of the “kit” we create with the Mobile Voices tools.

I envision the second as a tool that can help the Mobile Voices teams conduct their own evaluations. Thus, as the project scales up, there will be an evaluation template common to the different project sites that gives each site the flexibility to customize its evaluation methods while also producing data that is comparable across sites. This should help us assess the effects of the project at a variety of levels: the individual level, the level of each site that adopts the tools, and the larger community level.
Interviews: Based on the interview questions that Cara, Carmen, Melissa, and I put together for the PCT, I created the interview protocol for the organizer interviews. I conducted the first interview with Amanda, which also served as a pilot test of the questions and gave Amanda a chance to offer feedback on them before we use them with others. The questions worked well, although the interviews have been semi-structured and the questions have served mostly as guides. So far I have interviewed Amanda and Frank from Rise. Cara joined the interview with Frank and took very detailed notes, which should be shared with the group soon. We have scheduled an interview with Raul that Cara will conduct, and we are still waiting to hear back from Pedro about his availability. The interviews have been useful and interesting. We need to line up transcription services for the interview with Amanda, and possibly for the two remaining interviews, since we will likely not have the benefit of a second interviewer taking notes for those.
I’m sorry, but I speak only a little Spanish and cannot translate or write in Spanish well. I am going to practice, and someday perhaps I will be able to write in Spanish – did that make sense?? 🙂