Ways To Evaluate UDL Implementation

The UDA tool provides examples of how UDL guidelines can be implemented in large-scale assessment. How can you evaluate whether the UDL guidelines were implemented as intended? Here are six suggestions for evaluating how UDL guidelines are implemented during assessment design and development, along with specific examples of what to include with each suggestion.

1. Gather data to understand the range of needs and preferences of diverse test takers interacting with the online assessment system.

  • Test out online assessment system options in cognitive labs, focus groups, and user groups.
  • Seek stakeholder feedback on the usability of technology-delivered options and alternative options (such as recommended manipulatives), given the content of the tested standard and its match with instruction on that standard.

2. Conduct content reviews of the assessment with general education teachers and other content experts.

  • Include an alignment check against intended item criteria. If an item is twinned or has multiple pathways after branching, make sure that both pathways or sets of items are evaluated against those criteria.
  • Confirm through content review procedures that assessment content is consistent with familiar, quality instruction in classrooms.
  • Include criteria for evaluating multiple presentation modalities (e.g., video-based and text-based) to ensure that no student has an unfair advantage because one modality conveys more or less information than another.
  • Include a check to determine whether there is a different way to assess the construct (other than traditional multiple choice) without adding cognitive burden.
  • Include criteria to evaluate whether technology-enhanced items are likely to engage students without introducing confusion or unnecessary cognitive burden.
  • Include criteria for evaluating the complexity of vocabulary to determine whether the technical vocabulary included is necessary to access the tested construct.
  • Include criteria to evaluate whether the level of difficulty and complexity of the assessment is consistent with that of other content areas at the same grade level.

3. Conduct accessibility and bias reviews of assessment content with special educators and other experts on learners from culturally and linguistically diverse backgrounds and learners with varying support needs.

  • Include an accessibility criterion covering all text, graphics, graphs, organizing tools, and models, ensuring that only information necessary to answer the questions is included.
  • Include an accessibility criterion covering all text to confirm that items are clear, not confusing or distracting, and at an appropriate level of complexity.
  • Conduct accessibility item reviews to evaluate whether students with visual and/or hearing impairments would be able to understand what is asked and answer the item.
  • Confirm through accessibility reviews that alternative options (such as recommended manipulatives) provide access to a wide range of students given the tested construct.  
  • Conduct bias and sensitivity item reviews to evaluate whether students from culturally and linguistically diverse backgrounds, including those whose dominant language is not English, would be able to answer the items.

4. Use think-aloud or cognitive lab methods with learners who will take the assessment to evaluate how they engage and interact with assessment content.

  • Intentionally include a wide range of students in cognitive labs, reflecting the diversity of potential test takers in terms of preferences, support needs, and cultural and linguistic backgrounds.
  • Define expected response processes before administering cognitive labs, ensuring multiple acceptable/reasonable response processes are documented. (Include all the necessary stakeholder perspectives to establish these.)
  • Use think-aloud or cognitive lab methods to confirm that unscored items serve their intended purpose and do not introduce confusion, for example:
    • self-reflection items (e.g., “think-aloud” items, wonder questions)
    • organizing tools

5. Collect data on the effectiveness of assessment development, administration, and delivery procedures through surveys, interviews, and/or focus groups. (A sketch of summarizing such survey data follows this list.)

  • Assessment Development
    • Collect data at the end of an item-writing event to measure item writers' satisfaction with the item-writing resources provided and the content of training, including their understanding of UDL implementation in assessment design and development.
    • Collect data at the end of an item-writing event to measure item writers' satisfaction with the item-writing procedures and the usefulness of those procedures for applying UDL principles when writing items.
  • Assessment Administration
    • Collect teacher input on quality and usefulness of score reports and/or system-provided feedback in informing instructional decisions for the full range of learners.
    • Elicit student and/or teacher input on the clarity and usefulness of assessment system feedback on student performance and system recommendations.
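
For illustration, here is a minimal sketch of summarizing post-event item-writer survey data with pandas, assuming a hypothetical CSV export with 1-5 Likert ratings; the file name, column names, and the 4-or-higher satisfaction cutoff are assumptions for illustration, not part of the UDA tool.

```python
# Minimal sketch of summarizing item-writer survey data (suggestion 5).
# The CSV schema (file name, columns, 1-5 Likert scale) is hypothetical.
import pandas as pd

# Hypothetical export from a post-event item-writer survey.
responses = pd.read_csv("item_writer_survey.csv")
# Assumed columns: writer_id, resources_rating, training_rating,
# udl_understanding_rating, procedures_rating

likert_cols = [
    "resources_rating",
    "training_rating",
    "udl_understanding_rating",
    "procedures_rating",
]

# Mean, spread, and response count per question.
summary = responses[likert_cols].agg(["mean", "std", "count"]).round(2)
print(summary)

# Share of writers rating each area 4 or higher on the 1-5 scale.
satisfied = (responses[likert_cols] >= 4).mean().round(2)
print(satisfied)
```

A low mean or a low share of 4+ ratings points to resources or training segments worth revising before the next item-writing event.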

6. Analyze system data to evaluate the effectiveness of the online assessment system. (A sketch of this kind of analysis follows this list.)

  • Analyze system data on test completion, unanswered items, use of available supports, time in system, and possible interactions with diverse learner characteristics.
  • Collect input on adequacy of time to complete tasks for diverse test takers.
  • Evaluate the ability of the online assessment system to integrate common assistive technology (AT) devices.
  • Evaluate whether tools and devices meet the Web Content Accessibility Guidelines (WCAG).
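
For illustration, here is a minimal sketch of the subgroup analysis described above, again assuming a hypothetical log export; the file name, column names, and the 10-percentage-point flagging threshold are assumptions rather than prescribed values.

```python
# Minimal sketch of analyzing assessment-system log data (suggestion 6).
# The log schema below (file name, columns, subgroup labels) is hypothetical.
import pandas as pd

logs = pd.read_csv("session_logs.csv")
# Assumed columns: student_id, subgroup, completed (bool),
# items_unanswered, supports_used, minutes_in_system

# Completion and unanswered-item patterns by learner subgroup:
# large gaps may signal access barriers for particular groups.
by_group = logs.groupby("subgroup").agg(
    completion_rate=("completed", "mean"),
    mean_unanswered=("items_unanswered", "mean"),
    mean_supports=("supports_used", "mean"),
    median_minutes=("minutes_in_system", "median"),
).round(2)
print(by_group)

# Flag subgroups whose completion rate trails the overall rate
# by more than 10 percentage points for closer review.
overall = logs["completed"].mean()
flagged = by_group[by_group["completion_rate"] < overall - 0.10]
print(flagged)
```

Disaggregating by subgroup keeps the focus on whether the system works for the full range of learners rather than on average performance alone.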