Why this matters
Accessibility is not solely the responsibility of developers. It’s a team sport, requiring tight collaboration among product managers, designers, and testers.
Can explain why accessibility is a requirement
Team members possess varying levels of understanding.
A minimum level of understanding yields compliance, but when the reasons are taken to heart, your team’s performance can improve dramatically.
Accessibility isn’t just the right thing to do, it’s the smartest thing to do.
Living your organization’s values
Every organization has a set of values, often including core ethical tenets like treating people with respect and doing the right thing.
How does accessibility fit those values? How does ignoring accessibility breach them?
A tool for innovation
Accessible design and development builds better products for everyone. When teams put accessibility at the beginning of their process, they create more valuable products for the enterprise.
26% of the US population has a disability that requires accommodation, making people with disabilities the largest minority in the United States. This adds up to billions of dollars in combined purchasing power.
Avoiding legal risk
Accessibility is the law. Designing and building accessibility into products also helps the enterprise avoid legal risk and liability from customer complaints.
Can characterize automated and manual testing
Manual testing is precisely that: a human actually testing the experience using the screen reader and browser combinations you need to support.
Experts can deliver an organized report of defects by severity. This is a necessary tool for improving the customer experience.
Limits of manual testing
A manual test isn’t the same as a usability study, but it is effective in uncovering the issues your customers experience.
Manual testing is performed by people, and perception of what constitutes a defect can vary slightly from one tester to the next. It will be helpful for your testers to reference your severity definitions, and use uniform testing acceptance criteria like MagentaA11y.com.
Automated tests find programmatic errors, but can’t describe actual customer experiences.
How to use automated scans effectively
Scanning tools quickly pinpoint syntax defects in code. Some flagged issues won’t affect the customer experience, but apply extra scrutiny and manual testing if a web page is riddled with invalid code and errors.
Limitations of automated scans
Testing tools have value, but it’s important to understand their drawbacks. Even the most robust tools can identify fewer than half of the potential defects on a page.
Code can be inaccessible to a person using a keyboard or screen reader without being flagged as invalid markup by an automated tool.
- Automated scans can instantaneously test checkboxes for properly associated labels and other code attributes, but can’t tell you if the labels make sense.
- Automation tools can flag an image for missing alt text, but can’t tell you if it would be better for the screen reader to ignore that particular icon.
- Custom components, like an accordion expander, could be inaccessible with the keyboard and yet be formed of valid code that won’t raise an error.
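To illustrate that last point, here is a hypothetical accordion sketch (class names and ids are illustrative). The first version is perfectly valid markup, so an automated scan raises no error, yet a keyboard user cannot reach or activate it and a screen reader announces no state:

```html
<!-- Valid markup that passes automated scans, but the trigger
     cannot receive keyboard focus, cannot be activated with
     Enter or Space, and exposes no expanded/collapsed state. -->
<div class="accordion" onclick="togglePanel(this)">
  <div class="accordion-title">Shipping details</div>
  <div class="accordion-panel" hidden>Orders ship within 2 business days.</div>
</div>

<!-- A more accessible version: a native button is focusable and
     operable by default, and aria-expanded conveys the state. -->
<h3>
  <button aria-expanded="false" aria-controls="shipping-panel">
    Shipping details
  </button>
</h3>
<div id="shipping-panel" hidden>Orders ship within 2 business days.</div>
```

Both versions would validate; only manual keyboard and screen reader testing reveals the difference.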
Can test with assistive technology
Developers cannot leave testing to the QA team, simply throwing code over the wall and waiting for feedback. They have to test as they go. Developers avoid inefficiencies and bottlenecks when they learn how to use the keyboard and screen reader.
Semantic HTML and WAI-ARIA
Front-end developers may be able to piece elements together to match a UX design that was handed to them, but they may not know the meaning and purpose of different HTML elements beyond what they look like.
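A sketch of why this matters: two controls that look identical in a design can carry very different meaning in code (the `save()` handler here is a hypothetical placeholder). The generic element needs ARIA attributes and extra scripting just to approximate what the native element provides for free:

```html
<!-- Looks like a button, but a div has no role, no keyboard
     focus, and no Enter/Space activation; each must be bolted
     on by hand, and key handling still needs custom script. -->
<div class="btn" role="button" tabindex="0" onclick="save()">Save</div>

<!-- The native element provides the role, focusability, and
     keyboard activation automatically. -->
<button type="button" class="btn" onclick="save()">Save</button>
```

This is why knowing what an element means, not just what it looks like, is part of the developer’s craft.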
Can understand accessibility acceptance criteria
Acceptance criteria have to be specific enough to cover core functionality, including every quirk or difference among the five screen reader and browser combinations, but broad enough not to become overly verbose. MagentaA11y.com is one way to generate acceptance criteria.
What are the components of acceptance criteria
Understand each HTML element and how it fits into a group.
Name: How the element will be read by the screen reader. For example, the name of a link is typically the inner text of the link; the name of an input is typically its label.
Role: Every element has a role. An input’s role could be “radio,” or a submit button’s role could be “button.”
State: Many controls have a state. For instance, a checkbox input can be “checked” or “unchecked.”
Group: Nearly all components work as part of a bigger context or group of elements. For instance, radio inputs need a shared name to behave as a group. Headings should exist in a logical order, starting with an H1 as the page title and an H2 at the beginning of each major section.
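As a sketch, a radio group demonstrates all four components at once (the field and values here are illustrative):

```html
<fieldset>
  <!-- Group: the legend gives the radios their shared context -->
  <legend>Shipping speed</legend>

  <!-- Name: the label text. Role: "radio". State: checked. -->
  <label>
    <input type="radio" name="shipping" value="standard" checked>
    Standard (5 days)
  </label>
  <label>
    <input type="radio" name="shipping" value="express">
    Express (2 days)
  </label>
</fieldset>
```

A screen reader would typically announce something like “Standard (5 days), radio button, checked, 1 of 2, Shipping speed” — name, role, state, and group in a single utterance.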
Can interpret accessibility assessments
When full assessments are performed, teams will need help and training on how to interpret, prioritize, and act on that information. The content will often be based on WCAG criteria and may or may not offer remediation techniques.
Assistive technology test suite
Access to testing tools can save time and money. It allows developers to be proactive instead of throwing code over the wall to QA, potentially creating a laborious feedback loop.
Prioritize your test suite by what devices and browser combinations your customers are using.
Setup of these tools can vary. For instance, if your team is already using Macs, they already have VoiceOver and can run NVDA in a Windows virtual machine without having to set up a separate physical PC.
Successful keyboard interaction is a prerequisite for testing with a screen reader.
PC + NVDA + Chrome
If you’re only going to test with one screen reader, it should be NVDA. It is free and it is very demanding of compliant code.
Mac + VoiceOver + Safari
If you’re testing with two screen readers, VoiceOver should be the second. VoiceOver is built into MacOS and pairs with Safari.
PC + JAWS + Chrome
Most of your customers with vision disabilities in the U.S. will be using JAWS because it’s subsidized by the federal government. JAWS is more forgiving of non-compliant code than NVDA or VoiceOver, so despite its market share, it is not always ideal as your sole testing platform.
iOS device + VoiceOver + Bluetooth keyboard
VoiceOver is built into iOS and pairs with Safari.
Android device + TalkBack + Bluetooth keyboard
TalkBack is a free screen reader for Android and pairs with Chrome.
Uniform automated testing tools
Many free automated testing tools are available. Simplify compliance by using one uniform tool for development, QA testing, and pipeline gating.
Definition of ready
UX includes accessibility annotation
Code is the UX for people using assistive technology. That experience needs to be defined by the UX team, not left to whatever uninformed code happens to render in the browser.
UX annotation should include notes for:
- Heading structure
- aria-labels for ambiguous controls
- alt text for images
- Correct semantic component (e.g., is it a button, or a link that just looks like a button?)
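A hypothetical snippet showing how those annotations translate into markup (the page content and file names are illustrative):

```html
<!-- Annotation: page title is the H1; major sections are H2s -->
<h1>Your cart</h1>
<h2>Payment method</h2>

<!-- Annotation: aria-label names an ambiguous icon-only control -->
<button type="button" aria-label="Remove item">
  <!-- Annotation: decorative icon; empty alt hides it from the
       screen reader -->
  <img src="icon-trash.svg" alt="">
</button>

<!-- Annotation: this navigates to a new page, so it is a link
     styled as a button, not a <button> -->
<a class="btn" href="/checkout">Continue to checkout</a>
```

When the UX team annotates these decisions up front, the developer doesn’t have to guess them in the browser.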
Accessibility acceptance criteria are clearly defined
It is the responsibility of the product owner or product manager to define accessibility acceptance criteria for the team.
Without strong acceptance criteria, it’s easy for a developer to misunderstand or even dismiss the function of a UI for assistive technology.
Non-standard components reviewed with accessibility SME
When a non-standard, or unusually complex, component needs to be created, it’s important that developers review the work with the accessibility subject matter expert. Do this at the story refinement stage to define how to fulfill acceptance criteria.
Definition of done
Product demos use assistive technology
The product owner should ask for demos to be performed with only the keyboard (no mouse) and, when time permits, with the screen reader.
By setting this as the expectation for product demos, the developers are far less likely to ignore or fake this functionality.
Manual accessibility QA testing is complete
Developers should be testing code as they develop it, not making a last-minute dash.
By agreeing that the work isn’t complete until it has passed manual testing, it sets the expectation that more development may be necessary to ship a great experience.
Uniform automated testing tools meet requirements
Simplify compliance by using one uniform accessibility analysis tool for development, QA testing and pipeline gating.