“Accessibility is an important aspect of testing, but testers do not scale... without a little help from automation.”
I’ve been an accessibility specialist for seven years and a designer for a baker’s dozen years before that. I didn’t set out to become an accessibility advocate; the job chose me. Well, I took a chance fixing a website and found a calling instead.
The work is rewarding but difficult. I spend a lot of time looking for ways to scale my testing efforts.
Accessibility requires manual testing to be thorough. I test three or four browsers with three, and sometimes four, screen readers. I zoom layouts to 200%, 300%, and 400%. I navigate using only a keyboard. I reason about instructions, upload documents, and fill out forms. I have not found a way to reliably automate many of these tests.
But a few parts of accessibility testing can be automated. There are also some good ways to scale individual and small-team efforts.
Automated testing, part I
Many development teams use continuous integration (CI) servers to run tests and build artifacts. I saw an opportunity to include accessibility tests in those CI workflows. I added the axe-core testing library to end-to-end tests first. I programmed tests to run when pages loaded, when errors triggered, and when modal dialogs or flyout menus opened. I also ran the tests during complex interactions and multi-step processes.
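As an illustration, here's a minimal sketch of that kind of end-to-end check. The original doesn't name a test runner, so this assumes Cypress with the cypress-axe wrapper; the page URL and selectors are hypothetical.

    // checkout.cy.js (hypothetical spec file)
    // Assumes cypress-axe is installed and imported in the support file:
    //   import 'cypress-axe';

    describe('checkout page accessibility', () => {
      beforeEach(() => {
        cy.visit('/checkout'); // hypothetical page under test
        cy.injectAxe();        // load axe-core into the page
      });

      it('has no violations on page load', () => {
        cy.checkA11y();
      });

      it('has no violations with the shipping dialog open', () => {
        cy.get('[data-test="open-shipping-dialog"]').click(); // hypothetical selector
        cy.checkA11y('[role="dialog"]'); // scope the scan to the open dialog
      });
    });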
The tests were informative, but not exhaustive. They took a long time to run and only caught 30-35% of issues. Still, they were a good first step and created an accessibility perimeter.
Automated testing, part II
I took the knowledge I’d gained writing page-level tests and applied it to design systems next. I used the axe-core library again, this time paired with Cypress component testing. Cypress was easy to install and ran both locally and on the CI server. I pulled data from the axe-core results object to write custom error messages, which made accessibility errors easier to find in log files and gave me the information to fix them.
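A component-level sketch of that pattern might look like the following. The component name, the logging task, and the message format are my assumptions, not the author's code; cypress-axe's checkA11y does accept a callback that receives the axe-core violations array.

    // Button.cy.jsx (hypothetical component spec)
    // Assumes the standard Cypress component-testing setup, where cy.mount
    // is registered in the component support file, plus a 'log' task
    // registered in the Cypress config.
    import React from 'react';
    import Button from './Button'; // hypothetical component
    import 'cypress-axe';

    // Turn the axe-core violations into readable log lines
    function logViolations(violations) {
      violations.forEach(({ id, impact, description, nodes }) => {
        cy.task(
          'log',
          `[a11y] ${impact} ${id}: ${description} (${nodes.length} element(s))`
        );
      });
    }

    describe('<Button />', () => {
      it('mounts without accessibility violations', () => {
        cy.mount(<Button>Save</Button>);
        cy.injectAxe();
        // The third argument is a callback receiving the violations array
        cy.checkA11y(null, null, logViolations);
      });
    });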
Design systems are a multiplier
Focusing on design systems turned out to be an excellent way to scale my efforts. Because each test targeted a single component, the tests were easier to write. I reviewed fewer false positives. Bug fixes happened faster and made big improvements for consumers. If I or a teammate fixed one component violation, it could improve hundreds of component instances downstream.
The benefit of well-tested components
Well-tested components pay dividends for as long as they're in use. Maintainers receive early feedback when their code has issues. Accessibility issues fixed earlier in the development process cost less and ship sooner.
Consumers benefit from well-tested components, too. Developers can build rich interfaces, confident they're shipping more accessible software. This built-in warning system does incur extra cost in time to write and maintain tests, but it pays off in fewer bugs and higher quality.
Discovering DevOps [1]
I was having a blast standing up automated tests and watching the green lights fly by on my screen. But I was also spending a lot of time writing the same code to set up and tear down test environments. That became my introduction to DevOps.
Docker and the Bash shell
By this time I was starting to run into cross-platform issues. Intel vs. ARM chips. Mac vs. Linux. Docker vs. native. I caught myself saying "Hm, works on my machine" and knew it was time to scale up again.
I took a few weeks to learn the Bash shell and how shell commands could be combined into a build recipe called a Dockerfile. I used that Dockerfile to run the tests inside a portable container. By capturing the environment in Docker, I ensured the tests would run the same way in many environments without changing code.
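A Dockerfile for this kind of setup might look roughly like the sketch below. The base image, tag, and commands are assumptions, not the author's actual recipe; cypress/included is one published image that bundles Cypress, Node, and browsers.

    # Dockerfile (illustrative sketch)
    # The tag is an assumption; pin whichever version you actually use.
    FROM cypress/included:13.6.0

    WORKDIR /app

    # Install dependencies first so Docker can cache this layer
    COPY package*.json ./
    RUN npm ci

    # Copy the rest of the project (tests, components, config)
    COPY . .

    # The base image's entrypoint runs `cypress run` by default;
    # arguments passed to `docker run` are forwarded to it.

Building and running the tests, locally or on a CI server, then reduces to a pair of commands, for example:

    docker build -t a11y-tests .
    docker run --rm a11y-tests --component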
Next steps
I'm only getting started with scaling and automation. I've discovered Terraform and the possibilities it offers. My vision is to build entire test pipelines with a single
> terraform apply
command. I'll start small, building secure pipelines in GitHub Actions, then expanding into Amazon Web Services (AWS).
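To make that vision concrete, a first terraform apply might do nothing more than commit a workflow file to a repository. The sketch below uses the integrations/github provider; the repository name and workflow path are hypothetical, and this is only one way such a pipeline could start.

    # main.tf (illustrative sketch, not the author's configuration)
    terraform {
      required_providers {
        github = {
          source  = "integrations/github"
          version = "~> 6.0"
        }
      }
    }

    # Authenticates via a GITHUB_TOKEN environment variable
    provider "github" {}

    # Commit a CI workflow that runs the accessibility tests
    resource "github_repository_file" "a11y_workflow" {
      repository          = "my-design-system"           # hypothetical repo
      branch              = "main"
      file                = ".github/workflows/a11y.yml"
      content             = file("${path.module}/a11y.yml")
      overwrite_on_create = true
    }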