At eBay, we take accessibility very seriously. As mentioned in our Accessibility Statement, we are committed to building a community enabled by people and supported by technology that is open to the broadest audience possible, and to ensuring digital accessibility for people with disabilities. With those goals in mind, we are continually improving the user experience for everyone and applying the relevant accessibility standards.
Imagine You are a Developer: Part I
Imagine you are a developer, and you have requirements to make a web page fast, secure, available, and accessible, above and beyond the functional requirements. For two months, you balance your time: attending meetings, planning, coding, reviewing, integrating, and testing. Priorities and deadlines keep your schedule tight, but finally, you release the product. After release, an independent accessibility audit finds several accessibility issues, despite all of the testing that was done.
As you investigate how these issues slipped through, you discover that several are related to inaccessible images. These images were flagged because they were missing alt text, and you quickly realize that this type of issue could have been caught by automated testing. To be continued...
Motivation for Automated Testing
Automated testing allows issues to be found upstream, in early development stages such as pre- or post-commit. These bugs can be fixed early at minimal cost. In contrast, the cost of fixing bugs grows exponentially as the product moves from development to QA to production.
The Accessibility Ruleset Runner
The eBay Accessibility Ruleset Runner automates a number of WCAG 2.0 AA recommendations, saving time on manual testing. This includes checking images for alt tags, validating heading structure, and much more. Because the Accessibility Ruleset Runner is released as open source, we encourage the community to contribute more rules in order to cover more WCAG 2.0 AA recommendations.
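To give a feel for what an image rule does, here is a minimal, hypothetical sketch of an alt-text check. This is not the actual ruleset code, and the element stubs in the comment stand in for real DOM nodes:

```javascript
// Minimal sketch of an image alt-text rule (not the actual ruleset code).
// An image is flagged only if it has no alt attribute at all; alt="" is
// allowed, since an empty alt marks an image as decorative.
function findImagesMissingAlt(images) {
  return images.filter((img) => !img.hasAttribute('alt'));
}

// In a browser console or a Selenium session, this could run over real elements:
//   findImagesMissingAlt([...document.querySelectorAll('img')])
```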
We provide five examples of how to use the Accessibility Ruleset Runner:
- Chrome Developer Console
- NodeJS with Selenium/Mocha/Chai
- Python with Selenium
- Chrome Extension
- Java with Selenium/TestNG
Walkthrough using NodeJS with Selenium/Mocha/Chai
To help better illustrate how the tool works, let’s walk through our NodeJS with Selenium/Mocha/Chai example. Front end developers who use Node.js may find this particularly useful.
We assume the following are installed:
Step 0: Clone the project locally
git clone https://github.com/ebay/accessibility-ruleset-runner
Step 1: Install Package Dependencies
From the terminal, change directories to go into the appropriate examples folder:
Then, run npm install to install the required dependencies for the Selenium/Mocha/Chai framework:
Step 2: Invoke Ruleset Runners
The ruleset runner examples require zero configuration. This “one click” setup lets new users quickly run the examples and get an idea of what the ruleset runner does. In other words, the ruleset runners are preconfigured to test a default web page, and the example can be run with a single command:
npm run custom.ruleset.runner
After going through the example, developers may want to make modifications to test another website or include the Accessibility Ruleset Runner in their project.
Imagine You are a Developer: Part II
Let’s get back to our story, where you are the developer…
You research tools online and find the Accessibility Ruleset Runner. You quickly run through the NodeJS with Selenium/Mocha/Chai example and make a few modifications to test your web page on localhost.
You use npm run to invoke the custom ruleset runner, and the results are printed to the console as a JSON array. You quickly discover that 3 images are inaccessible because they are missing alt tags.
npm run custom.ruleset.runner
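The results array can be post-processed in a few lines of Node. The field names below (rule, element, message) are illustrative assumptions, not the runner's exact output schema:

```javascript
// Sketch of post-processing the results JSON array. The field names here
// (rule, element, message) are assumed for illustration.
const results = [
  { rule: 'IMAGES_MISSING_ALT', element: 'img#logo', message: 'Missing alt attribute' },
  { rule: 'IMAGES_MISSING_ALT', element: 'img.banner', message: 'Missing alt attribute' },
  { rule: 'HEADING_ORDER', element: 'h4', message: 'Heading level skipped' }
];

// Group failures by rule so the most common issues surface first.
function countByRule(results) {
  const counts = {};
  for (const r of results) counts[r.rule] = (counts[r.rule] || 0) + 1;
  return counts;
}

console.log(countByRule(results)); // { IMAGES_MISSING_ALT: 2, HEADING_ORDER: 1 }
```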
However, you find that JSON is difficult to read from the console and decide to create an HTML report from the results JSON array. You would like the HTML report to include additional information that will help others quickly fix the issues: the rule that failed, identification information for each flagged element (screenshot, attributes, locator), and clear error messages.
Fortunately, you search the Topic Guide and find a sample HTML Report, which you use to investigate the 3 images that were flagged earlier.
Now that you see the power of using automated testing to find accessibility issues, your next steps are to introduce the Accessibility Ruleset Runner into your development pipeline. In addition, you plan to enhance the HTML report to include links that let people file bugs directly in your bug tracking system (e.g. Jira, Bugzilla). After that, you will expand upon the Chrome Extension example, which allows quick testing of web pages without having to modify code.
You are well on your way…
We are a proud contributor to open source accessibility tools and documentation. The Accessibility Ruleset Runner demonstrates how to automate accessibility testing, which can help catch bugs upstream with minimal cost.
Other open source contributions shown in our Accessibility Statement include:
- eBay MIND Patterns - design patterns for accessible web components
- OATMEAL - a collection of manual accessibility testing methods
Creating a Ruleset
In the Topic Guide, we include a link to some general principles for creating a ruleset. We created these general principles after reviewing publicly available accessibility tools, and used them to build our Custom Ruleset. Later, we started using the Axe Ruleset from Deque Systems because of its alignment with these principles.
Rulesets should place an emphasis on zero false positives. With zero false positives, there is no room for interpretation, and teams can be required to achieve a 100% pass rate before launching a new feature.
Rulesets should return well-formed JSON. JSON is highly portable: results can be stored in a database for tracking, aggregated and displayed in dashboards, and even converted directly into user-friendly HTML reports.
Rulesets should be vetted against a library of HTML code snippets. There should be examples of good and bad code that pass and fail the various rules, as expected. Covering a large number of code variations tends to make the ruleset more robust. See also Testing Methodology.
Contributions in terms of patches, features, or comments are always welcome. Refer to the Contributing Guidelines for help. Submit GitHub issues for feature enhancements, bugs, or documentation problems, as well as questions and comments.