Do We Really Need to Write (and Read) another Lengthy Evaluation Report?
Funders love an evaluation report. They especially love well-written reports supported by rigorous studies and accurate information based on the right indicators. This takes resources. And not always the funder’s. It takes up a huge chunk of most evaluation budgets, as well as significant time from program staff who labor over the drafts of each evaluation report to make sure it portrays their efforts accurately and compassionately.
And the typical result? A long academic report that no one has time to read. It mimics a journal article, with a literature review, problem statement, description of the evaluation methodology and results, and implications. Often the report restates information that is known to all audiences, such as the program model, the history of the organization, and the literature that supports the intervention.
Once the report is submitted, it’s unclear what all that hard work accomplished. Perhaps the funder skims the graphs or reads the executive summary. If they have the bandwidth, they might call a meeting to discuss the implications. We have even seen funders ask for an executive summary, and then a one-pager, and then a few bullets that can be part of a slide deck. But often, by the time the evaluation has been conducted and the report written, edited, edited, edited (again), and laid out, so much time has passed that the findings are old news anyway.
The Times Have Changed
Organizations are able to collect data and glean insights from it faster than ever before, with fewer resources, if they have the right tools. And yet nonprofits continue to be tasked with producing long-form reports for funders that offer little benefit to the programs themselves.
Organizations produce the long-form report because funders require it. Evaluators create them because organizations ask them to. And funders want to know that their resources were effectively spent. But is the long-form report the best way to do that?
Traditional research and evaluation approaches often unintentionally gate-keep the very data needed to improve services. They store data in systems that are difficult to access (like SPSS or R), delay sending results to end-users while lengthy reports are written for funders, and then deliver findings that are already stale. Precious resources are wasted.
No one wants to admit they don’t read evaluation reports; it’s the elephant in the room. But everyone wants to gain insights from their data. This is true whether it’s quantitative data or the results of interviews and focus groups.
What’s the Alternative?
What would happen if no one ever wrote another 30… 50… 90-page (!) evaluation report? What would we replace it with?
Stay tuned for our next post in the series, where we will show you a better alternative.