Remediate accessibility issues faster and write smarter code with fewer new defects

Why this matters

Accessibility is not solely the responsibility of developers, but it is ultimately their code that your customers will interact with.

Accessibility is a team sport, requiring tight collaboration with product management, designers and testers.

Team commitment KPIs

What commitments must the team agree to meet if your organization is going to be successful?

What obstacles must be acknowledged and removed for your developers to create inclusive experiences?

What’s covered

Core responsibilities

Can explain why accessibility is a requirement

Team members may possess varying levels of technical understanding, but all should know the basic reasons accessibility is a requirement.

Accessibility target policies

At a minimum, any team needs to understand that accessibility targets exist, are defined by enterprise policy, and are enforced by leadership.

This bare minimum adoption achieves some level of compliance, even when it’s begrudging.

However, if the reasons for accessibility policies are taken to heart in service of the customer, products can dramatically improve.

Living your organization’s values

Accessibility isn’t just the right thing to do, it’s the smartest thing to do.

Every organization has a set of values, often including core ethical tenets like treating people with respect and doing the right thing.

How does accessibility fit those values? How does ignoring accessibility breach them?

A tool for innovation

Accessible design and development builds better products for everyone. When teams put accessibility at the beginning of their processes, they create more valuable products for your enterprise.

Competitive advantage

26% of the US population has a disability that requires accommodation, making people with disabilities the largest minority in the United States. This adds up to billions of dollars in combined purchasing power.

Accessibility is the law. Designing and building accessibility into products also helps the enterprise avoid legal risk and liability from customer complaints.

Can characterize automated and manual testing

Manual testing

Manual testing is precisely that: a human actually testing the experience using the screen reader and browser combinations you need to support.

Experts can deliver an organized report of defects by severity. This is a necessary tool for improving the customer experience.

Limits of manual testing

A manual test isn’t the same as a usability study, but it is effective in uncovering the issues your customers experience.

Manual testing is performed by people, and perception of what constitutes a defect can vary slightly from one tester to the next. It helps for your testers to reference your severity definitions and use uniform testing acceptance criteria.

Automated testing

Automated tests find programmatic errors, but can’t describe actual customer experiences. Just like a spell checker, automation can flag non-issues while missing legitimate problems because it can’t understand context and intention.

How to use automated scans effectively

Scanning tools quickly pinpoint syntax defects in code. Some flagged issues won’t affect the customer experience, but a page riddled with invalid code and errors warrants extra scrutiny and manual testing.

Limitations of automated scans

Testing tools have value. But it’s important to understand their drawbacks. Even the most robust tools can identify less than half of the potential defects on a page.

Code can be inaccessible for a person using their keyboard or screen reader without being flagged as invalid markup by an automated tool.

Practical examples
  • Automated scans can instantaneously test checkboxes for properly associated labels and other code attributes, but can’t tell you if the labels make sense.
  • Automation tools can flag an image for missing alt text, but can’t tell you if it would be better for the screen reader to ignore a particular decorative icon.
  • Custom components, like an accordion expander, could be inaccessible with the keyboard and yet be formed of valid code that won’t raise an error.
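The checkbox example above can be sketched in code. The snippet below is an illustrative, simplified model of what an automated label check sees (it is not a real scanner API): the check can verify that a label is associated, but has no way to judge whether the label text makes sense to a human.

```typescript
// Simplified model of a form control as a scanner might see it
// (hypothetical shape, for illustration only).
interface FormControl {
  id: string;
  labelText: string | null; // null = no associated label
}

// The automated pass: flags only a missing or empty label association.
function hasLabel(control: FormControl): boolean {
  return control.labelText !== null && control.labelText.trim() !== "";
}

// Both controls pass the scan, but only one label makes sense to a person:
const clear: FormControl = { id: "email", labelText: "Email address" };
const meaningless: FormControl = { id: "cb1", labelText: "Option 1" };

console.log(hasLabel(clear));       // true
console.log(hasLabel(meaningless)); // true — the scanner can’t judge meaning
```

Judging whether “Option 1” actually describes the checkbox’s purpose is exactly the part that still requires a human tester.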

Can test with assistive technology

Developers cannot leave testing to the QA team, simply throwing code over the wall and waiting for feedback. They have to test as they go. Developers avoid inefficiencies and bottlenecks when they learn how to use the keyboard and screen reader.

Semantic HTML and WAI-ARIA

Front-end developers may be able to piece elements together to look like a UX design that was handed to them, but they may not know the meaning and purpose of different HTML elements beyond what they look like.
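As a sketch of why element choice matters beyond appearance, the snippet below contrasts a native button with a styled lookalike. The markup strings and the gap list are illustrative, not from any real component library.

```typescript
// A native <button> and a <div> styled to look like one can be
// visually identical, but only the former is semantic HTML.
const semantic = `<button type="button">Save</button>`;
const lookalike = `<div class="btn" onclick="save()">Save</div>`;

// Everything the native button provides for free must be
// re-implemented by hand on the div to reach parity:
const divNeedsManually = [
  'role="button"',                        // announce as a button to screen readers
  'tabindex="0"',                         // reachable with the keyboard
  "keydown handler for Enter and Space",  // activate with the keyboard
];

console.log(divNeedsManually.length); // 3 gaps a scanner may never flag
```

The div is valid markup and may pass an automated scan, which is why semantic HTML knowledge cannot be replaced by tooling.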

Can understand atomic accessibility acceptance criteria

Acceptance criteria have to be specific enough to cover core functionality, including any quirks or differences between the five screen reader and browser combinations, yet broad enough not to become overly verbose. Defining each element’s name, role, state, and group is one way to generate acceptance criteria.

What are the components of acceptance criteria?


Name

This is how the element’s purpose will be announced by the screen reader. For example, the name of a link is typically the inner text of the link. The name of an input is typically its label, like “First name”.


Role

Every element has a role. A radio button’s role is “radio”. A submit button’s role is “button”.


State

Many controls have a state. For instance, a checkbox input can be “checked” or “unchecked”. A toggle switch can be “on” or “off”.


Group

Nearly all components work as part of a bigger context or group of elements.

For instance, a collection of radio inputs needs a group name. Headings should exist in a structured and logical pattern, starting with an H1 (typically the page content title) and with major sections beginning with an H2.

Every interactive element can have all of these criteria. Non-interactive elements will vary. For instance, headings have no state property.
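The four components above can be captured as data when writing atomic acceptance criteria. The shape below is an illustrative sketch, not a standard schema; the field names and sample values are hypothetical.

```typescript
// One atomic acceptance criterion per element: name, role, state, group.
// (Illustrative structure; field names are not a formal standard.)
interface AcceptanceCriteria {
  name: string;          // what the screen reader announces
  role: string;          // e.g. "checkbox", "button", "radio"
  state: string | null;  // null for non-interactive elements like headings
  group: string | null;  // surrounding context, e.g. a fieldset legend
}

// An interactive element has all four components:
const subscribeCheckbox: AcceptanceCriteria = {
  name: "Subscribe to newsletter",
  role: "checkbox",
  state: "not checked",
  group: "Email preferences",
};

// A non-interactive element varies — a heading has no state:
const pageHeading: AcceptanceCriteria = {
  name: "Your account",
  role: "heading",
  state: null,
  group: null,
};

console.log(subscribeCheckbox.role, pageHeading.state);
```

Writing criteria in this shape keeps each one testable on its own while staying brief enough to avoid the verbosity problem described above.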

Can interpret accessibility assessments

When full assessments are performed, teams will need help and training on how to interpret, prioritize and act on that information. The content will often be based around WCAG criteria and may or may not offer techniques for remediation.

Assistive technology test suite

Access to testing tools saves time and money.

It allows developers to be proactive instead of throwing code over the wall to QA testers, which immediately creates a laborious feedback loop.

Prioritize your test suite by the device and browser combinations your customers are using.

Setup of these tools can vary. For instance, if your team is already using Macs, they already have VoiceOver and can install NVDA in a virtual machine environment without having to set up a separate physical PC.



Successful keyboard interaction is a prerequisite for testing with a screen reader.

PC + NVDA + Chrome

If you’re only going to test with one screen reader, it should be NVDA. It is free and it is very demanding of compliant code.

Mac + VoiceOver + Safari

If you’re testing with two screen readers, VoiceOver should be the second. VoiceOver is built into macOS and pairs with Safari.

PC + JAWS + Chrome

Most of your customers with vision disabilities in the U.S. will be using JAWS because it’s subsidized by the federal government. JAWS is more forgiving of non-compliant code than NVDA or VoiceOver, so despite its market share, it is not always ideal as your sole testing platform.


iOS device + VoiceOver + Bluetooth keyboard

VoiceOver is built into iOS and pairs with Safari.

Android device + TalkBack + Bluetooth keyboard

TalkBack is a free screen reader for Android and pairs with Chrome.

Uniform automated testing tools

There are a multitude of free automated testing tools available. Simplify compliance by using one uniform tool for development, QA testing, and pipeline gating.
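As a sketch of what pipeline gating with a uniform tool might look like, the function below applies a severity threshold to a scan report. The `Violation` shape loosely resembles the JSON many scanners emit (for example, axe-core’s impact levels), but the gate logic and the policy of blocking only serious and critical issues are assumptions for illustration.

```typescript
// Hypothetical pipeline gate over an automated scan report.
// The impact levels mirror common scanner output; the gating
// policy here (block serious/critical only) is an assumed example.
interface Violation {
  id: string;
  impact: "minor" | "moderate" | "serious" | "critical";
}

// Returns true when the build may proceed.
function gate(violations: Violation[]): boolean {
  return !violations.some(
    (v) => v.impact === "serious" || v.impact === "critical"
  );
}

console.log(gate([{ id: "image-alt", impact: "critical" }])); // false: build blocked
console.log(gate([{ id: "region", impact: "moderate" }]));    // true: build passes
```

Because the same tool and the same threshold run on developer machines, in QA, and in the pipeline, a developer never discovers a gating failure for the first time at merge.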

Definition of ready

UX includes accessibility annotation

Code is the UX for people using assistive technology. That experience needs to be defined by the UX team, not left to whatever uninformed code happens to render in the browser.

UX annotation should include notes for:

  • Heading structure
  • aria-labels for ambiguous controls
  • alt text for images
  • Correct semantic component (ex: is it a button or a link that just looks like a button)
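The annotation notes above can be realized directly in markup. The fragment below is a hypothetical example (the page content, file names, and labels are illustrative) showing each annotation type from the list put into practice.

```typescript
// Hypothetical page fragment implementing typical UX accessibility
// annotations; all content and attribute values are illustrative.
const annotatedFragment = `
  <h1>Order history</h1>                               <!-- heading structure -->
  <h2>Recent orders</h2>
  <button type="button" aria-label="Close order details">
    ×                                                  <!-- ambiguous control: aria-label -->
  </button>
  <img src="chart.png" alt="Orders per month, January to June">
  <img src="divider.png" alt="">                       <!-- decorative image: empty alt -->
  <button type="button">Reorder</button>               <!-- real button, not a styled link -->
`;

console.log(annotatedFragment.includes("aria-label"));
```

Annotating these decisions in the design hand-off means the developer implements them deliberately rather than guessing in the browser.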

Atomic accessibility acceptance criteria are clearly defined

It is the responsibility of the product owner or product manager to define atomic accessibility acceptance criteria for the team.

Without strong acceptance criteria, it’s easy for a developer to misunderstand or even dismiss the function of a UI for assistive technology.

Non-standard components reviewed with accessibility SME

When a non-standard, or unusually complex, component needs to be created, it’s important that developers review the work with the accessibility subject matter expert. Do this at the story refinement stage to define how to fulfill acceptance criteria.

Definition of done

Product demos use assistive technology

The product owner should ask for demos to be performed with only the keyboard (no mouse) and, when time permits, with the screen reader.

By setting this as the expectation for product demos, the developers are far less likely to ignore or fake this functionality.

Manual accessibility QA testing is complete

Developers should test code as they develop it, not in a last-minute dash.

Agreeing that the work isn’t complete until it has passed manual testing sets the expectation that more development may be necessary to ship a great experience.

Uniform automated testing tools meet requirements

Simplify compliance by using one uniform accessibility analysis tool for development, QA testing and pipeline gating.

Developer KPIs

Track commitment and remove obstacles to inclusion
