Evaluating Accessibility and Inclusion
Over the past several months, Inciter has been pleased to work on a project with a summer camp network. The goal of the network is to enhance accessibility and inclusion for campers and staff with disabilities by providing capital improvements, professional development, staff training, research, and evaluation to both participating camps and the field at-large.
In many ways the project is similar to the other program evaluations that we work on with organizations, foundations, and agencies – large and small. We began this project by discussing the goals for the initiative and its theory of change, defining key camp audiences and stakeholder groups, and developing measurable indicators to guide the instrumentation of data collection.
While we’re no strangers to evaluations involving complex constructs (check out our posts about evaluating advocacy efforts, for example), inclusion can be complicated. It’s multidimensional, experienced differently by different people, and has no single universally adopted definition. When thinking about the inclusion of people with disabilities in particular, accessibility – usually physical accessibility – often comes to mind first. But accessible does not equal inclusive, and physically accessible does not equal fully accessible.
Being (fully) accessible is foundational to being inclusive, so what do we need to know and where do we need to go definitionally in order to evaluate?
First, we need to account for all of the multiple aspects of accessibility, as they are currently understood (bearing in mind that other aspects may come to light), that could be of relevance to a program. A helpful framework here can be derived from the U.S. Workforce Investment Act (WIA) / Workforce Innovation and Opportunity Act (WIOA), in which degree of accessibility is defined by “whether or not a person with a disability can meaningfully receive, participate in, and benefit from services” and by an overt readiness to go beyond the minimum requirements for accessibility. A key question to ask when assessing accessibility, then, is: Does the program offer information, explanation, and support, along with necessary accommodations, to enable a person to use the full range of services offered?
Measuring accessibility in this framework focuses on three domains:
- Physical accessibility – The extent to which facilities are designed, constructed, or altered so that they are accessible and usable by people with disabilities.
- Communications accessibility – The extent to which program staff are able to communicate with people with disabilities as effectively as with others.
- Programmatic accessibility – The extent to which people with disabilities have access to the full range of services available to all service recipients regardless of disability.
Depending on the setting, the service population(s), and other program audiences (e.g., staff and stakeholders) who may have disabilities, there can be many considerations to address. The three domains, however, provide a starting point for ensuring that accessibility considerations are built in. It’s understandable that accommodations – changes made so that a person with a disability is able to fully participate – are an initial focus when thinking about accessibility. Moving the programmatic ball forward to inclusion, though, requires an understanding of accessibility in which a program is set up from the start to be accessible to all individuals.
Once accessibility is defined for a given program, we next need to move to measurement. As real-world evaluators, we know that to determine whether any initiative is effective we must have good measurement processes and tools in place – and “good” really means “right for the job.” Program targets may not always be numerical in nature, which can be vexing to funders. But if we accept the premise that measuring accessibility is fundamental to evaluating inclusion, we are presented with the opportunity to measure a variety of important tangible and financial indicators. Together, these can provide a real-life snapshot of the situation. (This will assist us in grappling with the more intangible, non-financial indicators of inclusion down the line.)
We can put these ideas into practice in how we tailor evaluative observations and measurements of tangible indicators. Examples of observable elements that can translate into measurable indicators include:
- Spaces: Is each room that may be needed accessible by wheelchair and mobility aid users? (Consider other mobility issues as well, e.g., steepness of slopes, height of buzzers, access to seating, distance of parking from destination, heavy doors.)
- Formats and Interpretation: Are there both visual and non-visual communications items available for use? Are these in accessible formats? Especially for events, is American Sign Language (ASL) interpretation and CART captioning available?
- Language: Is language that operates on ability assumptions (e.g., “I need everyone to stand now.”) being used, or is broader language used (e.g., “If you are able, please stand with me.”)?
- Lighting: Are types of potentially triggering lights (e.g., fluorescent lighting, strobe lights, and flash photos) avoided and/or are people warned they might be present?
- Restrooms: Are there sufficient restrooms that are both physically accessible and designated as gender neutral, for purposes of gender inclusion and to enable caregivers to help when needed?
- Sensory-Friendly Spaces: Is there access to quiet and/or less stimulating spaces for those with sensory needs?
- Transportation & Remote Options: Are paratransit or other services arranged/offered? If transportation cannot be provided, are there video options?
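For evaluation teams that track observations like those above in scripts rather than spreadsheets, the translation from yes/no checklist items to domain-level indicator scores can be sketched as follows. This is purely illustrative: the item names, domain labels, and simple proportion scoring are assumptions for the sake of example, not a validated instrument.

```python
# Illustrative sketch: tally yes/no accessibility observations into
# per-domain indicator scores (share of items observed as accessible).
# Items and domains are hypothetical, not a validated measure.
from collections import defaultdict

# Each observation: (domain, item, observed_as_accessible)
observations = [
    ("physical", "wheelchair-accessible rooms", True),
    ("physical", "accessible parking distance", False),
    ("communications", "ASL interpretation at events", True),
    ("communications", "materials in accessible formats", True),
    ("programmatic", "sensory-friendly quiet space", True),
    ("programmatic", "paratransit or video option", False),
]

def domain_scores(obs):
    """Return, for each domain, the share of items observed as accessible."""
    met = defaultdict(int)
    total = defaultdict(int)
    for domain, _item, ok in obs:
        total[domain] += 1
        if ok:
            met[domain] += 1
    return {d: met[d] / total[d] for d in total}

print(domain_scores(observations))
# e.g., {'physical': 0.5, 'communications': 1.0, 'programmatic': 0.5}
```

A tally like this only summarizes what was observed; interpreting the scores still depends on the program context and on which items matter most to participants.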
As you can probably tell, evaluating movement toward and achievement of accessibility is doable, even though it may require a bit more time, resources, and forethought to make sure that an evaluation is comprehensive. Greater challenge arises, however, in successfully evaluating inclusion.
A prevailing definition of inclusion for disability is “individuals with disabilities have the opportunity to participate in every aspect of life to the fullest extent possible. These opportunities include participation in education, employment, public health programming, community living, and service learning.” (CDC) Inclusion extends beyond any one situation or setting. In the context of a youth-serving program like camp, this might look like fully including youth (and staff) with disabilities in everyday activities, in large part through mindful accessibility measures, and encouraging them to take on roles similar to those of their peers without disabilities to further build their capacity to participate. Together, this has a radiating impact on program culture (and society), making it more inclusive for all people.
At Inciter, altogether this leaves us thinking about how we can apply the best practices of measurement, end-user feedback gathering, and “tried and true” methods of program evaluation to ask: “Are knowledge, attitudes, and beliefs really being changed?”
Depending on the evaluation context, this might look like bringing together, in various amounts, validated observation tools and survey questions that look at engagement and safety in recreation and learning opportunities (e.g., the YPQA, SOPLAY, etc.), customized tools informed by Universal Design principles, and mixed methods evaluation for examining affective outcomes and gathering feedback (e.g., surveys, interviews, focus groups).
Ultimately, to evaluate inclusion one must look holistically at whether a program is accessible AND whether the organization is directly and indirectly communicating that it values differences in ability, including vis-à-vis the accessibility steps it has taken.
To quickly wrap up, key steps in the inclusion effort include:
- Actually paying attention to diversity and inclusion (easier said than done!)
- Organizational decision-makers determining they believe in inclusion initiatives and actively showing their support for them
- Engaging a planning group that is as diverse as possible
- Carefully considering the context(s) of data collection so benchmarks are relevant
- Using actual data to set realistic targets that can show real, rather than idealized change
- Making sure that the people leading an initiative learn from their target recipients and audiences directly
To dig deeper into measurement approaches, examples of measuring accessibility and disability inclusion can be found across domains such as the built environment, education, employment, and public transportation and transportation infrastructure.
Have you been involved in an evaluation of disability accessibility and/or inclusion and have lessons learned to share? Drop us a line! firstname.lastname@example.org.