
AEA 2013 Re-Cap, Part 1

CRC was well represented at the 2013 American Evaluation Association conference, held close to home for us in Washington, DC! We learned a lot from this year's sessions and had a great time connecting with old and new friends. But we didn't just party there! (Although we did do _just a bit_ of that with our fellow East Coast evaluators; see the evidence at the end of this post.)


Most of our staff attended AEA 2013, and each has something to share about what she learned there. Together we have so much to share, actually, that we'll be splitting our conference post-mortem into two parts. _**Stay tuned for part 2, coming next week!**_

__Dana Ansari, Research Assistant,__ attended a plenary presented by John Easton entitled "The Practice of Educational Evaluation Today: A Federal Perspective." Her key take-aways from the presentation were:

1. Working partnerships between evaluators and stakeholders are important for making research both pertinent and functional.

2. Formative evaluations are useful in gathering feedback and identifying a program's strengths and weaknesses, which can then be used to improve future implementation efforts. Randomized controlled trials are useful in establishing causal linkages; however, they may lack the ability to capture the _why_ and the _what_ that make a given intervention so effective.

3. Drawing from various research and evaluation approaches can help evaluators choose the most effective method for program improvement and success.

__Leslie Gabay-Swanston, Research Analyst,__ distilled a couple of take-aways from different sessions that she attended:

_Number one_ was that a distinction should be drawn between assessment and evaluation:

Assessment = _What_ do we know?

Evaluation = _How_ do we know?

Within a circular process, assessment and evaluation use the same elements, just in a different way.

_Number two_ was a set of useful distinctions between evaluation types:

Collaborative Evaluation = Evaluators are in charge; there is ongoing engagement between evaluators and stakeholders.

Participatory Evaluation = Control is jointly shared; participants are very involved in the evaluation process.

Empowerment Evaluation = Participants are in control of the evaluation; the evaluator is a critical friend. Related to empowerment evaluation, an ongoing challenge for evaluators is how to help participants to be comfortable and confident enough to carry the evaluation forward.

Michael Quinn Patton (a.k.a. Sheila’s “best friend, MQP”) presented this year, as he often does.


**Mandi Singleton, Research Assistant**, attended a workshop entitled "21st Century Strategies for Conducting Excellent Interviews." It provided pointers for conducting long interviews and presented the concepts of companioning and motivational interviewing (MI). Mandi learned that:

1. Companioning involves practicing effective listening skills that aim to increase the quality of responses. This process focuses on the virtues you, as the interviewer, bring to the table (e.g., being aware of your own biases, respecting the interviewee, maintaining focus, practicing open-mindedness, non-verbal communication/body language, interest and engagement). The interviewer should exude: 1) compassion, to actively engage the interviewee, and 2) detachment, understanding the interviewee without taking on their emotions.

2. Motivational interviewing is powerful in combating resistance in participants (i.e., a lack of agreement on goals between interviewer and interviewee). In MI, interviewers should focus on the dimensions of:

  * Collaboration rather than confrontation (e.g., engage participants as partners; don't confront them about how they should change)

  * Evocation rather than education (e.g., evoke responses from the participant; don't push them toward what you want them to say)

  * Recognizing participants' autonomy rather than asserting your authority, making them the agents of change and the experts on their own situations

Mandi also picked up a few pointers on how to increase participant engagement and reduce dropout when conducting long interviews:

1. Consolidate

2. Focus on relationships and building trust with the interviewee

3. Be clear (on time and content) and transparent

4. Clarify your own goals to get richer data (focus on quality over quantity)

5. Avoid leading (steering the interview toward the answers you want)

6. Leave space for open-ended questions

7. Break up sessions to reduce interviewee fatigue

8. Provide incentives

9. Be sensitive to timing; make it convenient for the participant

10. Create buy-in; explain to the participant how it will benefit them

**We hope the first part of our AEA 2013 re-cap was informative for you! And now, for our "happy snaps":**

Our good buddy and collaborator, Nichole Stewart!


CRC’s own Leslie, Jill, & Taj

A happy group of NYC evaluators!

East Coast evaluators were willing victims of Sheila’s camera.

Chris Lysy and Stephanie Evergreen, always keeping evaluation visually interesting!

Stephen Axelrad and Taj

More willing victims for Sheila’s camera and delicious wine.
