core-aam: Tests for the Core Accessibility API Mappings Recommendation
======================================================================

The [Core Accessibility API Mappings Recommendation](https://www.w3.org/TR/core-aam-1.1/)
describes how user agents should expose semantics of web content languages to accessibility
APIs. This helps users with disabilities to obtain and interact with information using
assistive technologies. Documenting these mappings promotes interoperable exposure of roles,
states, properties, and events implemented by accessibility APIs and helps to ensure that
this information appears in a manner consistent with author intent.
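
For a concrete sense of what such a mapping looks like, here is a rough sketch of how
the ARIA `button` role is exposed across the major platform accessibility APIs,
expressed as a Python dictionary purely for illustration; the normative mapping
tables live in the Recommendation itself:

```python
# Illustrative only: how the ARIA "button" role is exposed across
# platform accessibility APIs, per the Core-AAM mapping tables.
BUTTON_ROLE_MAPPINGS = {
    "ATK/AT-SPI (Linux)": "ROLE_PUSH_BUTTON",
    "AX API (macOS)": "AXButton",
    "MSAA + IAccessible2 (Windows)": "ROLE_SYSTEM_PUSHBUTTON",
    "UIA (Windows)": "Button control type",
}
```

The tests in this suite check that a user agent actually exposes these values
through the platform API when the corresponding role appears in web content.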

The purpose of these tests is to help ensure that user agents support the requirements of
the Recommendation.

The general approach is to support both manual and automated testing, with
a preference for automation.


Running Tests
-------------

To run these tests in an automated fashion, you will need a special
Assistive Technology Test Adapter (ATTA) for the platform under test. We will provide
a list of these for popular platforms here as they become available.

The ATTA monitors the window under test via the platform's accessibility layer,
forwarding information about the accessibility tree to the running test so that
the test can evaluate support for the various features under test.
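
For a rough sense of the mechanics, here is a minimal sketch of an ATTA stub in
Python. The port number, endpoint paths, and JSON fields below are assumptions
made for illustration, not the normative ATTA protocol; a real ATTA must
implement the actual protocol and query the live accessibility tree instead of
returning canned results:

```python
# Minimal ATTA-style stub. Endpoints, port, and JSON fields are
# illustrative assumptions; a real ATTA implements the actual
# ATTA protocol and inspects the platform accessibility tree.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class AttaStub(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        request = json.loads(self.rfile.read(length) or b"{}")
        if self.path == "/start":
            # A real ATTA would locate the test window through the
            # platform accessibility API (e.g. AT-SPI2 on Linux) here.
            body = {"status": "READY", "API": "ATK"}
        elif self.path == "/test":
            # A real ATTA would evaluate each requested assertion
            # against the live accessibility tree; this stub just
            # echoes a PASS for every assertion it receives.
            assertions = request.get("data", [])
            body = {"status": "OK",
                    "results": [{"assertion": a, "result": "PASS"}
                                for a in assertions]}
        else:  # e.g. an /end notification
            body = {"status": "OK"}
        payload = json.dumps(body).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    # Port 4119 is an assumption; use whatever port your test
    # driver is configured to contact.
    HTTPServer(("localhost", 4119), AttaStub).serve_forever()
```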

The workflow for running these tests is as follows:

1. Start up the ATTA for the platform under test.
2. Start up the test driver window, select the core-aam tests to be run,
   and click "Start".
3. A window pops up that shows a test, the description of which tells the
   tester what is being tested. In an automated test, the test will proceed
   without user intervention. In a manual test, some user input or verification
   may be required.
4. The test runs. Success or failure is determined and reported to the test
   driver window, which then cycles to the next test in the sequence.
5. Repeat steps 2-4 until done.
6. Download the JSON-format report of test results, which can then be visually
   inspected, reported on using various tools, or passed on to W3C via GitHub
   for evaluation and collection in the Implementation Report. (A sketch of
   the report's shape follows this list.)
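
The downloaded report is a JSON document. Its shape is roughly the following,
shown here as a Python literal; the field names are based on the common
testharness.js result format and may differ slightly from what a given runner
produces:

```python
# Approximate shape of a downloaded report: one entry per test file,
# with one subtest entry per assertion. Field names are an
# approximation of the testharness.js result format.
example_report = {
    "results": [
        {
            "test": "/core-aam/foo.html",  # hypothetical test path
            "status": "OK",                # harness status for the file
            "subtests": [
                {"name": "role mapping", "status": "PASS", "message": None},
            ],
        },
    ],
}
```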

**Remember that while these tests are written to help exercise implementations,
their other (important) purpose is to increase confidence that there are
interoperable implementations.** Implementers are therefore the audience, but
these tests are not meant to be a comprehensive collection of tests for a
client that might implement the Recommendation.


Capturing and Reporting Results
-------------------------------

As tests are run against implementations, results submitted to
[test-results](https://github.com/w3c/test-results/) are automatically included
in documents generated by [wptreport](https://www.github.com/w3c/wptreport).
The same tool can be used locally to view reports of recorded results.
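
For a quick local tally before submitting, a short script like the following
can summarize a downloaded report. It assumes the top-level `results` list
sketched earlier and a hypothetical filename; adjust the field names if your
runner emits a different shape:

```python
# Tally subtest statuses from a downloaded JSON report. Assumes the
# approximate report shape sketched above; adjust field names to
# match the file your runner actually produces.
import json
from collections import Counter

with open("core-aam-results.json") as f:  # hypothetical filename
    report = json.load(f)

counts = Counter()
for result in report.get("results", []):
    for subtest in result.get("subtests", []):
        counts[subtest.get("status", "UNKNOWN")] += 1

for status, total in sorted(counts.items()):
    print(f"{status}: {total}")
```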