Inciter | methods

“Can I Have a Moment of Your Time?” Overcoming Survey Burn Out by Showing Value

“Please be sure to go to the website at the bottom of the receipt to fill out our survey!” If you’ve ever gone grocery shopping, eaten fast food, or shopped at a major retailer, you’ve heard these words from a cashier at some point. In the age of big data, seemingly no venue is immune from solicitations to take a survey of some sort, be it online or in person. With this oversaturation of survey requests, the question for the consumer then becomes: what’s the value of actually completing this survey – is it really worth my time? And as evaluators faced with this situation, in which our potential survey respondents are already feeling burnt out (even more so if they’re part of an over-researched community), the question is...

Read More

Tips & Tricks for Child Focus Groups, Part 2

by Mandi Singleton  (Note: this post is the second part of a two-part series.) As I mentioned in my last blog post, one of my favorite things about my job at CRC is conducting focus groups. Focus groups with elementary school students can be the most challenging and the most fun for me as a focus group facilitator. Here in part two of my discussion of tips & tricks for doing focus groups with kids, I get into strategies that make for effective and enjoyable groups. 5. Make it fun with hands-on activities! Studies show that incorporating hands-on activities in focus groups with school-aged children increases participation and stimulates discussion. In focus groups I've conducted, I led children in several hands-on activities as part of data collection. During one activity, children were given...

Read More

AEA 2014 Recap

by Mandolin Singleton  Last month I attended the 2014 American Evaluation Association conference in Denver, CO. The 2014 conference theme was “Visionary Evaluation for a Sustainable, Equitable Future.” The event brought together research and evaluation professionals from all over the globe and from a variety of disciplines (e.g., community psychology, health and human services, PreK-12 educational evaluation). Attendees were encouraged to explore ways in which evaluation could be used to support sustainability and equality across disciplines and sectors. This year’s conference was especially exciting (as well as nerve-wracking) for me because I was attending as a first-time conference presenter. I went to numerous sessions, learned a lot, and had a great time connecting with other evaluators. (I even found a little bit of time to explore Denver’s spectacular shopping scene.) Below are...

Read More

Looking back and looking forward

Carson Research has had an exciting and productive 2012, and we're thankful for our clients, friends, families, and evaluation colleagues, who were an integral part of our successes this year! [Photo: The CRC Team, 2012] Of all the developments that have taken place, one of the most noticeable is the growth of our team, and the diversification of our skill set that has come along with our new team members. Looking towards the New Year, we'd like to share how each CRC staffer chose to complete the following statement: Next year, I plan to enhance my evaluation skills by...

Read More

Social Media and Evaluation

I must admit I’m excited about today’s post. Not because it gives us an excuse to indulge in a lot of unfocused social media (e.g., Facebook, LinkedIn, or Twitter) fun, but because of the opportunities and uses these tools can provide program evaluators. Not only have these platforms provided us, as evaluators, with greater ease in gleaning resources (such as through the American Evaluation Association’s Facebook page) and communicating with clients and colleagues (via Twitter and our local evaluators’ LinkedIn group), but we’ve begun to see programs’ use of these platforms as an important piece of their evaluation “stories.” Social media allows for connections that are rapid and have the potential for wide dissemination. It isn’t easy to envision programs advertising their services through social media, but it is...

Read More

Experimental Design versus Applied Research Design in Evaluations

Experimental design, a major component of pure (i.e., basic) research, is considered the gold standard for research. The premise of experimental design is that participants are randomly assigned to treatment (intervention) or control groups. This random assignment is intended to limit differences between groups. Additionally, the participant and/or experimenter are often blind to which group the participant belongs to. With this type of design, you can effectively compare the outcomes across groups at the end of a program. Presumably, the group that received your intervention will show the expected outcomes, while the group that didn’t receive the intervention will not. Any conclusions the experimenter draws about whether the intervention worked or didn’t work can be taken as concrete, because all other factors that could influence the outcomes were controlled...
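To make the logic concrete, here is a minimal Python sketch of that design, using made-up participant IDs and simulated outcome scores (everything in it is illustrative and not drawn from the original post):

```python
import random
import statistics

random.seed(42)  # reproducible illustration

# Hypothetical participant pool; in practice these would be real enrollees.
participants = list(range(100))

# Random assignment: shuffle, then split in half. In expectation this
# balances observed and unobserved characteristics across the groups.
random.shuffle(participants)
treatment, control = participants[:50], participants[50:]

# Simulated post-program outcomes (purely invented numbers): the
# treatment group gets a modest boost over a shared baseline of 50.
def outcome(boost):
    return random.gauss(50 + boost, 10)

treatment_scores = [outcome(5) for _ in treatment]
control_scores = [outcome(0) for _ in control]

# Because randomization controlled for everything else, the difference
# in group means is a credible estimate of the program's effect.
effect = statistics.mean(treatment_scores) - statistics.mean(control_scores)
print(f"Estimated program effect: {effect:.1f} points")
```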

Read More

The Success Case Method

If you want to know whether your program's participants mastered the objectives of the program, the Success Case Method might be for you. (See this report for a summary of the method: http://tinyurl.com/successcasemethod). This approach involves focusing on those individuals who were either particularly successful or particularly unsuccessful at learning your program's objectives. The approach is very purposeful, in that you don't select a random sample of participants; you go to participants at both ends of the learner spectrum to gather information. It might seem odd not to focus on the average learner, but focusing on the extremes can offer you much more specific information that is likely to help the average learner, as the goal is to seek out successes and failures and rigorously describe the story of...
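Here is a minimal Python sketch of the method's purposeful sampling step, assuming hypothetical participant names and mastery scores (none of which come from the original post):

```python
# Hypothetical (participant, mastery_score) records from a post-program
# assessment; in practice these would come from your own instruments.
scores = {
    "Ana": 95, "Ben": 42, "Chris": 71, "Dana": 88,
    "Eli": 30, "Fran": 67, "Gia": 99, "Hal": 55,
}

# Rank participants by how well they mastered the program objectives.
ranked = sorted(scores, key=scores.get, reverse=True)

# Purposefully sample both ends of the learner spectrum rather than
# drawing a random sample: these are the cases to interview in depth.
k = 2  # number of cases to study at each extreme
success_cases = ranked[:k]
nonsuccess_cases = ranked[-k:]

print("Interview as success cases:", success_cases)
print("Interview as non-success cases:", nonsuccess_cases)
```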

Read More

Participatory Analysis

A report released by Public/Private Ventures in March 2011, titled "Priorities for a New Decade: Making (More) Social Programs Work (Better)," discussed a critical problem with the evaluation process for non-profit programs: oftentimes, evaluators do not collaborate with a program, and so programs are passively evaluated. In addition, funders may not ask for the right evidence, and an often-impractical report is usually produced months later. This gives the non-profit no voice in the evaluation process and no time to make adjustments or improvements to their program. The authors recommended that evaluators collaborate with the program staff (i.e., the stakeholders) on the evaluation design, and that the stakeholders be provided real-time, actionable feedback that allows the program to improve its effectiveness. They believe that evaluators need to help non-profits better...

Read More

Successful Use of Mixed-Method Design for Project Evaluation

A number of our evaluation projects are community-based, and at times grants are funded to unite community agencies so they can work more closely together to achieve their goals. How do you determine how well organizations are collaborating? How do you improve their collaboration? As a result, we're always looking for evaluation tools that are straightforward and provide complete, easily interpretable results. In their 2009 study, Cross and colleagues evaluated interagency collaboration using a mixed-method design, which is not an easy task. They approached it from a variety of perspectives and incorporated qualitative and quantitative data that included network analysis. To fully evaluate collaboration with this mixed-method approach, they:

- Held focus groups (qualitative) to determine agency classifications and linkages
- Collected ratings of linkage (quantitative): networking, alliance, partnership, coalition, collaboration, and no contact
- Combined the information...
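As a rough illustration of the quantitative linkage-rating piece, here is a small Python sketch that scores each pair of agencies on an ordinal collaboration scale; the agency names and ratings are invented, and the scale labels are the ones the study's excerpt lists:

```python
# Ordinal collaboration scale, from weakest to strongest linkage.
LINKAGE_SCALE = {
    "no contact": 0, "networking": 1, "alliance": 2,
    "partnership": 3, "coalition": 4, "collaboration": 5,
}

# Illustrative ratings between pairs of (made-up) agencies, as might be
# gathered by survey after focus groups identify the relevant players.
ratings = {
    ("Food Bank", "Health Clinic"): "partnership",
    ("Food Bank", "School District"): "networking",
    ("Health Clinic", "School District"): "no contact",
}

# Convert labels to scores so linkages can be compared and aggregated.
scores = {pair: LINKAGE_SCALE[label] for pair, label in ratings.items()}

# One simple network-style summary: average linkage strength across all
# pairs, a rough indicator of how tightly the agencies work together.
avg_strength = sum(scores.values()) / len(scores)
print(f"Average linkage strength: {avg_strength:.2f} / 5")
```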

Read More

Focus groups

Ahh... the focus group! This is perhaps one of the most well-known evaluation methods. What makes focus groups so popular? First, a focus group is typically a small group of people (<10) who are guided through a structured conversation by a facilitator (likely the evaluator, in this case). The evaluator will work with stakeholders to identify who should be a part of the focus group, the purpose of the focus group, and what questions should be asked. A strength of focus groups is that they are often a low-cost approach that allows group members to provide information about a topic in a way that will likely be richer than if only a single person were interviewed. However, focus groups can't be used for pre/post comparisons or when confidentiality...

Read More