My Process for Accessibility Testing
This guide provides insight into the steps that I take when evaluating the accessibility of a page. It can be used to estimate the work needed for an accessibility evaluation, or to help inform your own process.
My own process is focused on enterprise applications, that is, applications that are not specifically designed or intended for mobile device use. In these cases, I will test things like browser window resizing, but won’t test specifically on mobile devices; I focus on the viewport size, which typically translates reasonably well if needed.
I think this is important for developers to learn, and I’ve covered it more in talks like Shift Left and Continuous Accessibility. I definitely recommend going through those talks if you want to learn more! This post is more about the step-by-step process that I use when checking a site, page, or flow for WCAG conformance or general accessibility.
Getting Started
- Open up the page in Chrome
- Do a general check:
- Is everything there that I expect to be rendered?
- Does the font size seem large enough?
- Are there any colors that seem like they might not pass color-contrast?
- Is there anything that sets off any red flags? (this intuition grows with experience)
- Using only the keyboard, try to navigate through the page (a scripted first pass of this and the reflow checks below is sketched after this list)
- Am I able to reach all of the elements?
- Do interactive controls respond to the correct keys?
- Test adaptability
- Use browser zoom to 200% and check for reflow
- Quickly zoom up to 400% (the WCAG 2.1 reflow requirement) and see where the challenges might be.
- Use the browser settings to change the default text size, and observe how the design adapts to text-only resizing. Does the design remain proportional, or does everything suddenly seem squished into boxes? Worse, are items cut off or hidden entirely?
- Use the OS settings to change text and display sizes, and observe the results.
- Take notes/screenshots where things break
- Test focus states
- TAB/Arrows/ESC: do they work as I expect?
- Where does focus go when something that I’ve opened is then closed?
- When a page transition occurs, what happens to focus?
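Parts of this first pass can be scripted as a quick smoke test. Below is a minimal sketch, assuming Playwright (my choice of tool here, not something this process requires); the URL, the TAB count, and the expected control are placeholders for your own page.

```ts
// Quick smoke tests for keyboard reachability and reflow.
// Assumes Playwright (npm i -D @playwright/test); the URL and the
// expected control are hypothetical placeholders.
import { test, expect } from '@playwright/test';

test('interactive controls are reachable by TAB', async ({ page }) => {
  await page.goto('https://example.com/some-page'); // placeholder URL

  const reached = new Set<string>();
  // Press TAB a bounded number of times and record what receives focus.
  for (let i = 0; i < 50; i++) {
    await page.keyboard.press('Tab');
    const tag = await page.evaluate(() => document.activeElement?.tagName ?? '');
    if (tag) reached.add(tag);
  }
  // Spot-check: at least one button should have received focus.
  expect(reached.has('BUTTON')).toBe(true);
});

test('content reflows at 320 CSS pixels (about 400% zoom)', async ({ page }) => {
  // WCAG 2.1 SC 1.4.10 (Reflow): no horizontal scrolling at 320px width.
  await page.setViewportSize({ width: 320, height: 800 });
  await page.goto('https://example.com/some-page'); // placeholder URL

  const hasHorizontalOverflow = await page.evaluate(
    () => document.documentElement.scrollWidth > document.documentElement.clientWidth
  );
  expect(hasHorizontalOverflow).toBe(false);
});
```

Checks like these only catch the obvious failures; focus order, visible focus styles, and what happens on ESC still need a human.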
Digging Deeper
- Open up DevTools:
- Check the console log for errors
- Inspect the page’s markup. I can get a feel for how an evaluation is generally going to go just from looking at the markup. If there are a ton of nested divs and spans, experience tells me that I am probably not going to have a good time. If the page’s content is mostly contained within landmark elements, that is a good sign: an indication of the care taken to implement an accessible website.
- Run the aXe or Accessibility Insights browser extension(s) and see how many errors show up. Make a note of which issue has the highest number of errors (it’s typically color contrast). Take some screenshots and look for patterns in the errors; fixing one component that is used in multiple places can sometimes improve the whole page. A scripted version of this scan is sketched after this list.
- Test with VoiceOver. Open the page in Safari and then open up VoiceOver and browse the page.
- Open up the elements rotor and see if the lists and groupings make sense.
- Make notes of things that are not conformant or even just difficult to use and could be improved for usability.
- Navigate through the page from top to bottom while VoiceOver is open.
- Test with NVDA. Switch to my Windows machine. Start NVDA and then open up Firefox and browse the page. Note that this is the opposite of the macOS process: the screen reader should be turned on first, and then the browser opened, for the most consistent results.
- Take notes of things that are not conformant or usable.
- Navigate through the page using only the keyboard while NVDA is open.
- Test with JAWS. If possible, also test with Microsoft Edge and JAWS as this is the most common combination used at enterprise companies and government offices.
- Test high-contrast mode on Windows.
- Test color contrast (I like the Colorblindness Emulator browser extension).
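The scan that the extensions run can also be scripted, which makes it easier to spot repeating patterns across pages. A minimal sketch, assuming Playwright plus the @axe-core/playwright package; the URL is a placeholder:

```ts
// Scripted axe-core scan, grouped by rule to surface repeating patterns.
// Assumes npm i -D @playwright/test @axe-core/playwright; the URL is a
// hypothetical placeholder.
import { test } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('axe scan, grouped by rule', async ({ page }) => {
  await page.goto('https://example.com/some-page'); // placeholder URL

  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa']) // restrict to WCAG A/AA rules
    .analyze();

  // One rule with many nodes often points at a single shared component;
  // fixing that component can clear a large chunk of the report at once.
  for (const violation of results.violations) {
    console.log(
      `${violation.id} (impact: ${violation.impact}): ${violation.nodes.length} nodes`
    );
  }
});
```

I still run the extension first, since its UI highlights the offending nodes in place.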
At the very least, test with Safari/VoiceOver on macOS and Firefox/NVDA on Windows.
👉 Note: At this time, I avoid ChromeVox and Microsoft’s Narrator; they are too buggy and produce false positives. If the app supports mobile device use, then I will test on an Android phone and an iPhone.
Browsing Code
If I have access to the codebase:
- Run a static code analysis (like ember-template-lint)
- Run a dynamic analysis (like ember-a11y-testing). This shows me what code needs to be fixed according to the automated rules we already have (see the sketch after this list).
- Look through the CSS:
- Use of the content property on pseudo-elements (generated content may be announced by screen readers)
- “Hidden” (is it visually hidden, hidden from screen readers, or both? What was the intent?)
- Border/outline set to “none” or transparent? (removing outlines can hide the keyboard focus indicator)
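For the dynamic analysis, ember-a11y-testing wraps axe-core so the audit can run inside the app’s own test suite. A minimal sketch of an acceptance test; the module name and route are placeholders:

```ts
// Runs axe-core against a rendered route via ember-a11y-testing.
// The module name and the /dashboard route are hypothetical placeholders.
import { module, test } from 'qunit';
import { visit } from '@ember/test-helpers';
import { setupApplicationTest } from 'ember-qunit';
import a11yAudit from 'ember-a11y-testing/test-support/audit';

module('Acceptance | dashboard accessibility', function (hooks) {
  setupApplicationTest(hooks);

  test('the dashboard route passes the automated audit', async function (assert) {
    await visit('/dashboard'); // placeholder route
    await a11yAudit(); // rejects if axe-core reports violations
    assert.ok(true, 'no automated a11y violations detected');
  });
});
```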
Conversations
Generally, my goal is to have conversations with designers as soon as I can! The earlier in the process we can catch potential defects, the better (the same as with any other part of software design).
I will try to file any issues I find, along with screenshots where possible. I use CleanShot (requires a subscription…but totally worth it) and add annotations where I can. I prioritize based not only on WCAG conformance but also on impact to the user:
- Blocker (e.g., nested interactive elements; should be addressed with near immediacy)
- Critical (e.g., prevents the user from completing a primary function/flow of the app)
- Major (e.g., the user has to find another way to perform an action but still can, or cannot use a minor feature that is not the primary function of the app)
- Minor (e.g., a main element with the role of main; redundant, but does not harm the user)
If I think that something needs an explainer, I’ll try to create a CodePen (and if it doesn’t contain sensitive info, I’ll make it public and add it to my A11y CodePen Collection so it can be browsed and reused later). I’ve found that both designers and developers appreciate a good CodePen, as it helps to have the code and the browser rendering side by side. It is also effective for framework-agnostic examples, as it focuses solely on the outcome of what is rendered to the browser.
With developer teams, I’ll share useful sites such as:
- https://a11y-automation.dev
- https://www.a11yproject.com/checklist/
- https://dequeuniversity.com/screenreaders/
When possible, I will also try to include some additional information:
- Explain the desired outcome (e.g., “a user with high-contrast mode turned on needs to be able to see the borders of the buttons and other interactive controls”)
- Provide one or two options for designers/developers to consider, but trust that they can also be creative and come up with improved solutions themselves
- Explain the impact to the user (blocker, critical, major, minor) and add any WCAG references available.
Closing
I hope this has given you some ideas for how you can integrate accessibility testing into your own testing process!
Until next time. -M