
Looking back and looking forward

Carson Research has had an exciting and productive 2012, and we're thankful for our clients, friends, families, and evaluation colleagues, who were an integral part of our successes this year!

[Photo: The CRC Team, 2012]

Of all the developments that have taken place, one of the most noticeable is the growth of our team, and the diversification of our skill set that has come along with our new team members.

Looking towards the New Year, we'd like to share how each CRC staffer chose to complete the following statement: Next year, I plan to enhance my evaluation skills by...

Read More

A Look Back at AEA 2012

It's already the end of November, and we're entering that time of year when people are inclined to look back and reflect on the events of the previous months. At CRC we are not quite ready to reminisce about all of 2012 just yet, but before November ends we do want to revisit an important event for us that occurred a few weeks ago. At the end of October, some of the CRC staff flew out to Minneapolis to attend the American Evaluation Association conference (and to see our first snowfall of the season!).

If you follow CRC on Twitter and/or Facebook*, you already know that CRC was busy at this year's AEA! Leslie and Taj (on behalf of Jenn, too pregnant to fly) presented Using an Adaptive Research Design in...

Read More

Nonprofit Organizations and Outcome Measurement

by Sheila Matano

An article by Lehn Benjamin in the September 2012 issue of the American Journal of Evaluation explored the extent to which existing outcome measurement frameworks are aligned with the actual activities nonprofit staff perform to ensure positive outcomes for their clients.

Benjamin's analysis of numerous measurement guides revealed that existing outcome measurement frameworks focus primarily on the program activities completed and the changes in users that result from those activities. This highlights, rather overwhelmingly, that outcome measurement often misses important aspects of staff work, namely the direct activities staff do with clients. It is this frontline work that helps staff build the community relationships that are paramount to positive program outcomes. Unfortunately, in many cases outcome measurement does not fully capture the work...

Read More

Evaluation Use

Use of an evaluation's findings (i.e., lessons learned) and process use (i.e., evaluation use that takes place before lessons learned are generated and feedback is initiated) are two of the clearest, simplest examples of how evaluations get used. (Fleischer and Christie (2009) offer other examples, but because those lack clear definitions, they won't be discussed here.) By now there is broad agreement that a great deal of useful information is generated during the evaluation process itself, information that could increase involvement and learning.

Instituting practices that foster involvement in the evaluation process will lead to increased evaluation use, right? This idea seems to be common sense, but why is it that common-sense concepts are often hard to implement or forgotten altogether? I'd offer that common-sense ideas often sound...

Read More

Social Media and Evaluation

I must admit I'm excited about today's post. Not because it gives us an excuse to indulge ourselves in a lot of unfocused social media fun (e.g., Facebook, LinkedIn, or Twitter), but because of the opportunities and uses these tools can provide program evaluators. Not only have these platforms provided us, as evaluators, with greater ease in gleaning resources (such as through the American Evaluation Association's Facebook page) and communicating with clients and colleagues (via Twitter and our local evaluators' LinkedIn group), but we've begun to see programs' use of these platforms as an important piece of their evaluation "stories".

Social media allows for connections that are rapid and have the potential for wide dissemination. It isn't easy to envision programs advertising their services through social media, but it is...

Read More

Understanding the “evidence” in “evidence-based” home visiting programs

A May 2012 New York Times Opinionator article reviewed the success of the Nurse-Family Partnership (NFP) home visiting program. NFP is a program in which registered nurses visit with first-time, high-risk pregnant women throughout their pregnancy and early motherhood. These nurses teach the women the importance of prenatal care, talk with them about childcare and child development, and work with the mothers on appropriate parenting behaviors until the child is 2 years old.

The Opinionator article listed many of the impressive evaluation findings from years of research on NFP, including follow-up studies showing that children whose mothers had gone through the NFP program were 58% less likely to be convicted of a crime at age 19 than those whose mothers had not been in the program. Reading about...
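To make a figure like "58% less likely" concrete, here is a minimal sketch of the relative-risk arithmetic; the 10% comparison-group conviction rate below is an invented assumption for illustration, not a number from the NFP research.

```python
# Hypothetical illustration of "58% less likely" as a relative risk.
control_rate = 0.10              # assumed conviction rate without NFP (invented)
relative_risk = 1 - 0.58         # "58% less likely" implies a relative risk of 0.42
nfp_rate = control_rate * relative_risk

print(f"Without NFP: {control_rate:.1%}, with NFP: {nfp_rate:.1%}")
# -> Without NFP: 10.0%, with NFP: 4.2% (under the assumed baseline)
```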

Read More

Experimental Design versus Applied Research Design in Evaluations

Experimental design, a major component of pure (i.e., basic) research, is considered the gold standard for research. The premise of experimental design is that participants are randomly assigned to treatment or control groups. This random assignment is intended to limit pre-existing differences between groups. Additionally, the participant and/or experimenter are often blind to which group the participant belongs to. With this type of design, you can effectively compare outcomes across groups at the end of a program. Presumably, the group that received your intervention will show the expected outcomes, while the group that didn't receive the intervention will not. Any conclusion drawn by the experimenter that the intervention worked or didn't work can be taken as sound because all other factors that could influence the outcomes were controlled...
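As a rough, hypothetical sketch of that logic (none of this code comes from the post; the outcome model and the effect size of 5 points are invented for illustration), here is how random assignment and an end-of-program comparison might look:

```python
import random
import statistics

def randomly_assign(n_participants, seed=42):
    """Shuffle participant ids and split them evenly into two groups."""
    rng = random.Random(seed)
    ids = list(range(n_participants))
    rng.shuffle(ids)
    half = n_participants // 2
    return ids[:half], ids[half:]

# Invented outcome model: a noisy baseline score, plus a modest
# bump for anyone who received the intervention.
outcome_rng = random.Random(0)

def measure_outcome(received_intervention):
    baseline = outcome_rng.gauss(50, 10)   # pre-existing variation
    effect = 5 if received_intervention else 0
    return baseline + effect

treatment, control = randomly_assign(200)
treatment_scores = [measure_outcome(True) for _ in treatment]
control_scores = [measure_outcome(False) for _ in control]

# Because assignment was random, a difference in group means is
# attributable to the intervention rather than to who signed up.
print(statistics.mean(treatment_scores) - statistics.mean(control_scores))
```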

Read More

Using Appreciative Inquiry for Evaluating Organizations

Typically our blogs focus on evaluation techniques that are specific to program evaluations. But what about the organizations executing the programs? Is there a way to evaluate an organization with the goal of improving how it functions?

Coghlan and colleagues (2003) suggest that the appreciative inquiry method can be a constructive approach to evaluating the function of an organization. Appreciative inquiry is used more often in the private sector, but is being seen more and more as an evaluation approach with applications in the public sector as well. In a nutshell, appreciative inquiry involves working with members of an organization to determine the aspects of their work that are going well, why they are going well, and what they would like to see more of.

Notice I didn't say "see less of."...

Read More

The Importance of Interpretation

The concept of evidence-based policy was examined in a recent article by Pawson and colleagues (2011). The authors discussed the current trend of "evidence-based everything" and the impact this approach can have on policy making. They examined the example of a proposed policy banning smoking in cars when children are present, and the difficulty of providing conclusive evidence to support it.

Pawson and colleagues highlight the ongoing theme of their article with the following Donald Rumsfeld quote: "There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we now know we don't know. But there are also unknown unknowns. These are things we do not know we don't know."

Many times the "known knowns" in research are...

Read More

The Success Case Method

If you want to know whether your program's participants mastered the objectives of the program, the Success Case Method might be for you. (See this report for a summary of the method: http://tinyurl.com/successcasemethod.) This approach involves focusing on those individuals who were either particularly successful or particularly unsuccessful at learning your program's objectives.

The approach is very purposeful, in that you don't select a random sample of participants; you go to participants at both ends of the learner spectrum to gather information. It might seem odd not to focus on the average learner, but focusing on the extremes can offer you much more specific information that is likely to help the average learner, as the goal is to seek out successes and failures and rigorously describe the story of...
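Here is a minimal sketch of that extreme-case sampling idea, under invented assumptions (the scores, participant ids, and the cutoff of two per tail are illustrative, not part of the Success Case Method report itself):

```python
def select_success_cases(scores, n_extremes=5):
    """Return the ids of the highest and lowest scorers for follow-up interviews.

    `scores` maps participant id -> mastery score. Rather than drawing a
    random sample, this deliberately picks both ends of the learner spectrum.
    """
    ranked = sorted(scores, key=scores.get)   # ids ordered by score, ascending
    return {
        "least_successful": ranked[:n_extremes],   # interview for barriers
        "most_successful": ranked[-n_extremes:],   # interview for what worked
    }

# Hypothetical post-program mastery scores.
scores = {f"participant_{i}": score
          for i, score in enumerate([72, 95, 41, 88, 63, 30, 99, 55, 77, 84])}
print(select_success_cases(scores, n_extremes=2))
```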

Read More