Theory-driven & process evaluation: The art of getting inside and beyond the “black box”
by Jill Scheibler
Before working in program evaluation, I received education and training as a clinician, specifically as an art therapist. My work as an art therapist was grounded in a personal belief in, and more importantly, empirical observations supporting, the mental health-promoting effects of making art. I became curious about how to demonstrate the arts’ impacts to the general public (including dubious funders and policymakers) and found a lack of relevant research to back up what I’d seen in practice. At the same time, I talked to numerous art therapists and community artist-practitioners who were doing good work with vulnerable populations in my city of Baltimore, all around the U.S., and throughout the world. They all voiced a need to “prove” the value of their work, but many were unsure that research could do justice to the creative process and were leery of evaluators who might try to force-fit what they do “into little boxes.” I’ve even heard it said (more than once!), “There’s no way to measure what we do!” On that sweeping point, I have to respectfully disagree with some of my art world colleagues.
After having supported program evaluation projects for some time now, I can understand the common wariness, held by practitioners of all sorts, about evaluation’s ability to capture the richness and complexity of their work. (And granted, depending on the nature of the evaluation project, achieving that lofty goal may not even be the point!) However, a program’s very survival may rest on its ability to demonstrate that its interventions are high-quality and cost-effective, and program stakeholders have increasingly sought these assurances. Most of us might agree that program quality can and should be measured, but the issue is complicated by the continued hold of “outcomes-oriented” thinking on even the most unconventional of community artists, which constrains ideas of what “measurement” means and offers and forces a focus on outputs before the intervention design itself is clear. More broadly, a tenacious focus on outcomes measurement continues to dominate stakeholders’ expectations of program evaluation, even though outcomes measurement alone neither assesses nor informs the development of programs’ theories of change, the very theories that allow one to look inside the “black box” of intervention effectiveness and help one assess quality.
Evaluations known as “black box,” or “input-output,” evaluations have the primary goal of assessing the relationship between intervention and outcome. They seek information about a program’s merits but do not systematically evaluate the change processes that turn interventions into outcomes. If evaluators and stakeholders, including practitioners, need to understand not only a program’s merits but also how its processes can be tailored to improve the intervention, then another strategy, such as theory-driven and process evaluation, is a better choice (Chen, 2005). Such efforts allow for a more in-depth examination of program components to show which areas have been more or less effective (Harachi, Abbott, Catalano, Haggerty, & Fleming, 1999; Linnan & Steckler, 2002). Looking inside the “black box,” in order to get beyond it, still recognizes the role of outcomes measurement but also examines implementation fidelity and other issues to determine whether it was the intervention or entire program, or merely aspects of it, that actually succeeded or failed.
Process evaluation serves an important role both when interventions produce significant outcomes and when they do not. When outcomes are significant, stakeholders need some way of knowing which intervention components actually contributed to them; when outcomes are not significant, process evaluation can help explain why they were modest or absent (Linnan & Steckler, 2002; Susser, 1995). Programs can also learn whether their theories of change clearly specify the intervening processes or mechanisms that link activities to intended outcomes. Such evaluative information can be readily applied by practitioners, and it is just this kind of information that programs, including community arts programs, most need right now. It can also contribute to a broader body of research about the social impacts of the arts that will make the adoption of more useful and appropriate outcomes-oriented measurement possible for this field in the future.
Chen, H.T. (2005). Practical program evaluation: Assessing and improving planning, implementation, and effectiveness. Thousand Oaks, CA: Sage Publications, Inc.
Harachi, T. W., Abbott, R.D., Catalano, R.F., Haggerty, K.P., & Fleming, C.B. (1999). Opening the black box: Using process evaluation measures to assess implementation and theory building. American Journal of Community Psychology, 27, 715–735.
Linnan, L., & Steckler, A. (2002). Process evaluation and public health interventions: An overview. In A. Steckler & L. Linnan (Eds.), Process evaluation in public health interventions and research (pp. 1–23). San Francisco: Jossey-Bass Publishers.
Susser, M. (1995). Editorial: The tribulations of trials—Interventions in communities. American Journal of Public Health, 85, 156–158.