
Re: DPub-ARIA Testing

From: Shane McCarron <shane@spec-ops.io>
Date: Thu, 19 Jan 2017 07:21:49 -0600
Message-ID: <CAJdbnOAr1iNy2HZmGQ0AiQEqLwEnZWKp0dhoY=O1UWGpcHKy4g@mail.gmail.com>
To: Ivan Herman <ivan@w3.org>
Cc: W3C Digital Publishing IG <public-digipub-ig@w3.org>, W3C PF - DPUB Joint Task Force <public-dpub-aria@w3.org>
Tzviya and others have said that they will put their generated markup into
the test tool in order to demonstrate usage.  That's what the textarea is
for - just as in Annotation testing.

There is no way in the model I have proposed to deal with a testimonial.  I
suppose I could change the test case so that it could also present a table
of all the terms with checkboxes where a user could click "we support this"
and then it would generate the requisite JSON for the test report.  Would
that suit?
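For concreteness, a checkbox table like that could emit one sub-test result per vocabulary term. A minimal sketch in Python — the field names mirror the general WPT result shape, but the exact schema expected by the results repo is an assumption here:

```python
import json

# Illustrative only: turn a {term: checked} mapping from a hypothetical
# checkbox table into one sub-test result per vocabulary term. The
# "test"/"status"/"subtests" field names are an assumption.
def checkboxes_to_report(test_name, checked):
    return {
        "test": test_name,
        "status": "OK",
        "subtests": [
            {"name": term, "status": "PASS" if used else "FAIL"}
            for term, used in sorted(checked.items())
        ],
    }

report = checkboxes_to_report(
    "/dpub-aria/vocabulary-usage.html",
    {"doc-chapter": True, "doc-glossary": True, "doc-toc": False},
)
print(json.dumps(report, indent=2))
```

The per-term PASS/FAIL entries would then aggregate across publishers the same way ordinary sub-test results do.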

On Mon, Jan 16, 2017 at 4:35 AM, Ivan Herman <ivan@w3.org> wrote:

> Hi Shane,
> first of all, sorry for the very late reply. Vacations, trips, and all
> that…
> I concentrate on the vocabulary (i.e., [1]) testing at this point; the API
> mapping is a very different animal, and I would expect that testing to
> align with the way the mapping will be tested altogether.
> Per [1], the vocabulary testing is, in effect, some sort of a collection
> of testimonials. For each term, we hope to get at least two testimonials of
> usage (plus some information if the soon-to-be-obsolete epub:type syntax is
> used). In this respect, I am not really sure I understand your description
> of using a text area: do we expect our 'testimonials' to be submitted as
> formal markup?
> My impression is that the situation may become a bit different: we may
> receive, from publisher A, a testimonial that says: "we use such and such
> ARIA terms, or equivalents". How would we fold that back into the final
> report?
> Cheers
> Ivan
> [1] https://www.w3.org/TR/dpub-aria-1.0/#exit_criteria
> On 26 Dec 2016, at 17:42, Shane McCarron <shane@spec-ops.io> wrote:
> As many of you know, we are entering a phase where we need to be testing
> both vocabulary use and support for the dpub-aria roles via the various AT
> APIs.  I have not really communicated with most of you before, so I thought
> I would send along a quick overview of how W3C testing for these sorts of
> things works (will work).
> tl;dr: There is a w3c test framework and we can do the testing within that
> framework, gathering information about implementation support at a central
> site W3C uses for these purposes.
> The W3C has a test framework called "Web Platform Tests" or WPT.  This
> framework is a rich environment for exercising components of the web
> platform.  Dpub-ARIA is, of course, part of that platform.  If you want to
> learn about WPT, check out [1]
> Each W3C spec has a top level folder in this framework.  Within that
> folder, each spec group is responsible for populating and validating the
> tests.  Each test is called a "test case".  Each "test case" can be
> automated, manual, or run in some other mode.
> Once tests are run, they produce a JSON file that is sent as input to the
> test results repository [2].  Results in that repo are processed by various
> reporting tools, including wptreport [3] to generate the reports that can
> help inform transitions from W3C CR to PR to Rec.
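To illustrate that reporting step: a summarizer in the spirit of wptreport would reduce a run's JSON results to per-test pass counts. A minimal sketch, assuming (not confirmed by this mail) the usual test/status/subtests shape:

```python
# Illustrative sketch of the reporting step: summarize a run's JSON
# results into per-test pass counts, in the spirit of what wptreport
# does. The field names are assumptions, not the repo's exact schema.
def summarize(results):
    summary = {}
    for test in results:
        subtests = test.get("subtests", [])
        passed = sum(1 for s in subtests if s["status"] == "PASS")
        summary[test["test"]] = (passed, len(subtests))
    return summary

run = [
    {"test": "/dpub-aria/roles.html", "status": "OK",
     "subtests": [{"name": "doc-chapter", "status": "PASS"},
                  {"name": "doc-index", "status": "FAIL"}]},
]
print(summarize(run))  # {'/dpub-aria/roles.html': (1, 2)}
```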
> The rest of this mail talks about how I plan to use these tools to
> exercise implementations of the dpub-aria and dpub-aam specifications.
> This will also be reflected and updated in a wiki at [4].
> The basic requirement for vocabulary testing is that we examine a bunch
> of real-world content to demonstrate use of the terms we have introduced.
> My simplistic
> test for this includes a single "test case" with a sub-test for each term.
> This "test case" has a textarea into which markup can be pasted and then
> evaluated.  The markup is evaluated against the terms.  The results can
> then be added to the collection of results at [2] so that we have a
> comprehensive view of what terms are used and how often.
> This style of testing is considered "manual" in that a tester needs to
> paste in a test file in order to evaluate it.  It is POSSIBLE to automate
> the execution of these tests via WebDriver.  An example of such automation
> is included in the repository.
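A sketch of what such an evaluation could do, assuming the test case simply scans the pasted markup for DPub-ARIA role tokens and counts them per term — the real test's logic may well differ:

```python
import re
from collections import Counter

# Hypothetical evaluator: scan pasted markup for DPub-ARIA role tokens
# (role="doc-...") and count how often each term appears. This is an
# assumption about the evaluation, not the actual test case's code.
def count_dpub_roles(markup):
    values = re.findall(r'role\s*=\s*["\']([^"\']+)["\']', markup)
    # role attributes can carry a space-separated token list
    tokens = (t for value in values for t in value.split())
    return Counter(t for t in tokens if t.startswith("doc-"))

sample = """
<section role="doc-chapter">
  <nav role="doc-toc">...</nav>
  <aside role="doc-footnote">...</aside>
</section>
"""
counts = count_dpub_roles(sample)
print(dict(counts))
```

Each nonzero count would map naturally onto a passing sub-test for that term.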
> The basic requirement for the API mapping (dpub-aam) testing is that the
> various A11Y API implementations are evaluated to ensure that the
> DPub-ARIA roles are reflected correctly via the API mapping and passed
> to assistive technologies.  Spec-Ops and
> Igalia have recently worked together to develop an automated technique for
> this testing.  You can read up on how that works at [5], but in essence it
> queries the AT layer of the platform under test to see if the mappings for
> each defined role, state, or property are reflected.  It can also do things
> with events, but that isn't relevant to what we are doing.
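In spirit, that check boils down to comparing what the accessibility layer reports against an expected-mapping table for each role. A deliberately simplified sketch — the expected values below are placeholders, not the real DPub-AAM tables, and a real ATTA queries a live platform accessibility API rather than a stub:

```python
# Deliberately simplified sketch of the mapping check: compare role
# mappings queried from the AT layer (stubbed here) against an expected
# table. The expected values are placeholders, NOT the real DPub-AAM
# tables, and a real ATTA talks to a live platform accessibility API.
EXPECTED_MAPPING = {
    "doc-chapter": "landmark",   # placeholder expectation
    "doc-toc": "navigation",     # placeholder expectation
}

def check_mappings(queried):
    """Return a PASS/FAIL verdict for each role in the expected table."""
    return {
        role: "PASS" if queried.get(role) == expected else "FAIL"
        for role, expected in EXPECTED_MAPPING.items()
    }

# Stub for what an ATTA might report from the platform under test.
queried = {"doc-chapter": "landmark", "doc-toc": "group"}
print(check_mappings(queried))
```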
> There is currently one "test case" with a sub-test for each role.  I may
> leave it this way, or I may split it out into a single "test case" per
> role.  That's a design decision, and will not have much impact on the use
> of the tests.
> Once the tests are complete, they can be run automatically *or manually*
> on the various platforms that have a defined accessibility mapping.
> Results will be sent to [2] in JSON and reports generated, as above.
> That's about it.  I don't know that we need any assistance with the
> development of the tests at this time.  As they mature it would be great if
> people could take a look and provide feedback!
> [1] https://github.com/w3c/web-platform-tests
> [2] https://github.com/w3c/test-results
> [3] https://github.com/w3c/wptreport
> [4] https://wiki.spec-ops.io/wiki/DpubTesting
> [5] https://spec-ops.github.io/atta-api/index.html
> --
> Shane McCarron
> Projects Manager, Spec-Ops
> ----
> Ivan Herman, W3C
> Digital Publishing Technical Lead
> Home: http://www.w3.org/People/Ivan/
> mobile: +31-641044153
> ORCID ID: http://orcid.org/0000-0003-0782-2704

Shane McCarron
Projects Manager, Spec-Ops
Received on Thursday, 19 January 2017 13:22:49 UTC
