
AEA 2013 Re-Cap, Part 1

compiled by Jill Scheibler

CRC was well represented at the 2013 American Evaluation Association conference, held close to home for us in Washington, DC! We learned a lot from this year’s sessions and had a great time connecting with old and new friends… But we didn’t just party there! (Although we did do just a bit of that with our fellow East Coast evaluators… see the evidence at the end of this post.)


Most of our staff attended AEA 2013, and each has something to share about what she learned there. Together we have so much to share, actually, that we’ll be splitting our conference post-mortem into two parts. Stay tuned for part 2, coming next week!

 

Dana Ansari, Research Assistant, attended a plenary presented by John Easton, entitled, “The Practice of Educational Evaluation Today: A Federal Perspective.” Her key take-aways from the presentation were:

1. Working partnerships between evaluators and stakeholders are important for making research both pertinent and functional.

2. Formative evaluations are useful in gathering feedback and identifying a program’s strengths and weaknesses, which can then be used to improve future implementation efforts. Randomized controlled trials are useful in making causal linkages; however, they may sometimes lack the ability to capture why and how a given intervention is effective.

3. Drawing from various research and evaluation approaches can help evaluators choose the most effective method for program improvement and success.

Leslie Gabay-Swanston, Research Analyst, distilled a couple of take-aways from different sessions that she attended:

Number one was that a distinction should be drawn between assessment and evaluation:

Assessment = What do we know?

Evaluation = How do we know?

Within a circular process, assessment and evaluation use the same elements, just in a different way.

Number two was a set of useful distinctions between evaluation types:

Collaborative Evaluation = Evaluators are in charge; there is ongoing engagement between evaluators and stakeholders.

Participatory Evaluation = Control is jointly shared; participants are very involved in the evaluation process.

Empowerment Evaluation = Participants are in control of the evaluation; the evaluator is a “critical friend.”

Related to empowerment evaluation, an ongoing challenge for evaluators is helping participants become comfortable and confident enough to carry the evaluation forward.

Michael Quinn Patton (aka, Sheila’s “best friend, MQP”) presented this year, as he often does.

 

Mandi Singleton, Research Assistant, attended a workshop entitled “21st Century Strategies for Conducting Excellent Interviews.” It provided pointers for conducting long interviews and presented the concepts of “companioning” and motivational interviewing (MI). Mandi learned that:

  1. “Companioning” involves practicing effective listening skills that aim to increase the quality of responses. This process focuses on what virtues you, as the interviewer, bring to the table (e.g., being aware of your own biases, respecting the interviewee, maintaining focus, practicing open-mindedness, non-verbal communication/body language, interest and engagement). The interviewer should exude: 1) compassion, to actively engage the interviewee, and 2) detachment, understanding the interviewee while not taking on their emotions.
  2. Motivational interviewing is powerful in combating cases of resistance (i.e., a lack of agreement on goals between interviewer and interviewee) in participants. In MI, interviewers should focus on the dimensions of:
  • Collaboration rather than confrontation (e.g., engage participants as partners; don’t confront them about how they should change)
  • Evocation rather than education (e.g., draw responses out of participants rather than telling them what to say)
  • Recognizing participants’ autonomy rather than expressing your authority, making them the agents of change and experts of their own situations

Mandi also picked up a few pointers on how to increase participant engagement and reduce dropout when conducting long interviews:

  1. Consolidate
  2. Focus on relationships and building trust with the interviewee
  3. Be clear (on time and content) and transparent
  4. Clarify your own goals to get richer data (focus on quality vs. quantity)
  5. Avoid leading (steering the interview toward the answers you want)
  6. Leave space for open-ended questions
  7. Break up sessions to reduce interviewee fatigue
  8. Provide incentives
  9. Be sensitive to timing; make it convenient for the participant
  10. Create buy-in; explain to the participant how it will benefit them

 

We hope the first part of our AEA 2013 re-cap was informative for you! And now, for our “happy snaps”:

Our good buddy and collaborator, Nichole Stewart!

CRC’s own Leslie, Jill, & Taj

A happy group of NYC evaluators!

East Coast evaluators were willing victims of Sheila’s camera.

Chris Lysy and Stephanie Evergreen, always keeping evaluation visually interesting!

Stephen Axelrad and Taj

More willing victims for Sheila’s camera and delicious wine.

 

CRC
jill@carsonresearch.com