Inciter | Blog

Reducing the Price of Hospital Readmission

by Tracy Dusablon

The Hospital Readmission Reduction Program, part of the Affordable Care Act, has ignited heated debate both for and against it. The program aims to improve quality of care and lower costs by reducing hospital readmissions among Medicare patients. To accomplish this, hospitals are essentially ‘dinged’ when patients are readmitted within 30 days of discharge, and those ‘dings’ turn into financial penalties for the hospitals. As it currently stands, the penalty is one percent of hospital payments, and it is set to increase to three percent by 2015.

So, how are hospitals dealing with this new policy, which took effect in October 2012? To some extent, they may simply be accepting the penalties and chalking them up to the cost of doing business. On a more constructive end, some...
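The scale of those penalties is easy to estimate. Here is a minimal back-of-the-envelope sketch; the dollar figure is hypothetical, and the flat-rate multiplication below simplifies CMS's actual formula, which is based on each hospital's excess readmission ratios:

```python
# Illustrative only: a flat percentage of Medicare payments, which is the
# penalty *cap*; CMS computes the actual reduction from readmission data.

def readmission_penalty(annual_medicare_payments, penalty_rate):
    """Payment reduction for one fiscal year at a given penalty rate."""
    return annual_medicare_payments * penalty_rate

payments = 50_000_000  # hypothetical hospital's annual Medicare payments, in dollars

# The cap starts at 1% and rises to 3% by 2015.
for rate in (0.01, 0.02, 0.03):
    print(f"At a {rate:.0%} penalty: ${readmission_penalty(payments, rate):,.0f}")
```

Even at the initial one percent cap, a mid-sized hospital could stand to lose hundreds of thousands of dollars a year, which is why "the cost of doing business" is an expensive posture.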

Read More

How to Recommend Ending a Program

There is no need to skirt the issue: programs being evaluated have justifiable concerns about what an evaluation report may do to the future of their program. If all stakeholders have been honest with themselves from the beginning, it won’t be entirely unexpected to learn whether their program is performing well. However, the evaluation report may be the very first time that the outcomes are displayed in such a concrete way. (We will assume that the evaluator meets all evaluation standards and provides a quality report; issues of ethics are for another post!)

Eddy and Berry (2009) recommend following a simple heuristic. Essentially, the evaluator determines whether the factors leading to program closure are flexible or immutable. If the factors are flexible, then the evaluator can recommend changes...

Read More

Guest Blog: Moneyball & adapting to a data-driven world

by Meridith Polin

The role of the evaluator is much like that of Peter Brand in the movie Moneyball (based on the book by Michael Lewis, a favorite author of mine). Peter Brand, an economics whiz kid hired by the Oakland A’s, was brought in to help the team figure out how to win. Using meaningful statistics, Peter and General Manager Billy Beane helped turn the game of baseball on its head by looking at data in a new way. Similarly, evaluators are charged with identifying and measuring the ‘bottom line’ of nonprofit services from a social impact perspective. We have seen an explosion of businesses and nonprofits talking about the use of data (like they do in Moneyball).

But before anyone thinks about the analysis of data, we (as...

Read More

Looking back and looking forward

Carson Research has had an exciting and productive 2012, and we're thankful for our clients, friends, families, and evaluation colleagues, who were an integral part of our successes this year!

[Photo: The CRC Team, 2012]

Of all the developments that have taken place, one of the most noticeable is the growth of our team and the diversification of our skill set that has come with our new team members.

Looking toward the New Year, we'd like to share how each CRC staffer chose to complete the following statement: Next year, I plan to enhance my evaluation skills by...

Read More

A Look Back at AEA 2012

It's already the end of November, and we’re entering that time of year when people are inclined to look back and reflect on the events of the previous months. At CRC we are not quite ready to reminisce about all of 2012 just yet, but before November ends we do want to revisit an important event from a few weeks ago. At the end of October, some of the CRC staff flew out to Minneapolis to attend the American Evaluation Association conference (and to see our first snowfall of the season!).

If you follow CRC on Twitter and/or Facebook*, you already know that CRC was busy at this year’s AEA! Leslie and Taj (on behalf of Jenn, too pregnant to fly) presented Using an Adaptive Research Design in...

Read More

Nonprofit Organizations and Outcome Measurement

by Sheila Matano

An article by Lehn Benjamin in the September 2012 issue of the American Journal of Evaluation explored the extent to which existing outcome measurement frameworks align with the actual activities nonprofit staff perform to ensure positive outcomes for their clients.

Benjamin’s analysis of numerous measurement guides revealed that existing outcome measurement frameworks focus primarily on the program activities completed and the changes in users that result from those activities. This highlights, rather overwhelmingly, that outcome measurement often misses important aspects of staff work, namely the direct work staff do with clients. This frontline work is essential for helping staff build the community relationships that are paramount to positive program outcomes. Unfortunately, in many cases outcome measurement does not fully capture the work...

Read More

Evaluation Use

Use of an evaluation’s findings (i.e., lessons learned) and process use (i.e., evaluation use that takes place before lessons learned are generated and feedback is initiated) are two of the clearest, simplest examples of how evaluations get used. (Fleischer and Christie (2009) offer other examples, but because those lack clear definitions, they won’t be discussed here.) By now there is broad agreement that a great deal of useful information is generated during the evaluation process itself, information that could increase involvement and learning.

Instituting practices that foster involvement in the evaluation process will lead to increased evaluation use, right? This idea seems to be common sense, but why are common sense concepts so often hard to implement, or forgotten altogether?

I’d offer that common sense ideas often sound...

Read More

Social Media and Evaluation

I must admit I’m excited about today’s post. Not because it gives us an excuse to indulge in a lot of unfocused social media fun (e.g., Facebook, LinkedIn, or Twitter), but because of the opportunities and uses these tools can provide program evaluators. Not only have these platforms given us, as evaluators, an easier way to glean resources (such as through the American Evaluation Association’s Facebook page) and communicate with clients and colleagues (via Twitter and our local evaluators’ LinkedIn group), but we’ve also begun to see programs use these platforms as an important piece of their evaluation “stories”.

Social media allows for connections that are rapid and have the potential for wide dissemination. It isn’t easy to envision programs advertising their services through social media, but it is...

Read More

Understanding the “evidence” in “evidence-based” home visiting programs

A May 2012 New York Times Opinionator article reviewed the success of the Nurse-Family Partnership (NFP) home visiting program. NFP is a program in which registered nurses visit first-time, high-risk pregnant women throughout their pregnancy and early motherhood. These nurses teach the women the importance of prenatal care, talk with them about childcare and child development, and work with the mothers on appropriate parenting behaviors until the child is 2 years old.

The Opinionator article listed many of the impressive evaluation findings from years of research on NFP, including follow-up studies showing that children whose mothers had gone through the NFP program were 58% less likely to be convicted of a crime at age 19 than children whose mothers had not been in the program. Reading about...
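It is worth pausing on what a figure like "58% less likely" means: it is a relative reduction, not an absolute one. A minimal sketch of the arithmetic, using an invented baseline rate purely for illustration (the 58% figure is from the follow-up studies; the baseline below is not):

```python
# Hypothetical baseline conviction rate, chosen only to show the arithmetic.
baseline_rate = 0.20          # assumed rate of conviction without the program
relative_reduction = 0.58     # "58% less likely", per the follow-up studies

program_rate = baseline_rate * (1 - relative_reduction)
print(f"Without NFP: {baseline_rate:.1%} convicted (assumed baseline)")
print(f"With NFP:    {program_rate:.1%} convicted")
# 8.4% vs. 20.0%: an absolute reduction of 11.6 percentage points.
```

The relative figure is the headline, but the absolute difference depends entirely on the baseline rate, which is one reason the underlying studies matter.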

Read More

Experimental Design versus Applied Research Design in Evaluations

Experimental design, a major component of pure (i.e., basic) research, is considered the gold standard for research. The premise of experimental design is that participants are randomly assigned to treatment or control groups. This random assignment is intended to limit pre-existing differences between groups. Additionally, the participant and/or experimenter is often blind to which group the participant belongs to. With this type of design, you can effectively compare outcomes across groups at the end of a program. Presumably, the group that received your intervention will show the expected outcomes, while the group that didn't receive it will not. Any conclusion the experimenter draws that the intervention worked or didn't work can be treated as conclusive because all other factors that could influence the outcomes were controlled...
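To make the logic concrete, here is a minimal simulation sketch of that design; the group sizes, outcome model, and +1.0 effect size are invented for illustration, since a real evaluation measures outcomes rather than simulating them:

```python
import random
import statistics

random.seed(42)  # reproducible illustration

# 100 hypothetical participants, randomly split into two groups of 50.
participants = list(range(100))
random.shuffle(participants)  # random assignment limits pre-existing group differences
treatment, control = participants[:50], participants[50:]

# Hypothetical outcome: random noise, plus a +1.0 average effect if treated.
def outcome(treated):
    return random.gauss(0, 1) + (1.0 if treated else 0.0)

treatment_scores = [outcome(True) for _ in treatment]
control_scores = [outcome(False) for _ in control]

# Because assignment was random, a difference in group means can be
# attributed to the intervention rather than to who ended up in which group.
print(f"Treatment group mean: {statistics.mean(treatment_scores):.2f}")
print(f"Control group mean:   {statistics.mean(control_scores):.2f}")
```

The comparison at the end is the whole point: randomization is what licenses reading the gap between the two means as the effect of the intervention.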

Read More