Re: Communicating between the WPT test window and a separate application on the client

Okay - now that I am back at a computer...


On Fri, Jul 15, 2016 at 12:40 PM, David Brett <dbrett@microsoft.com> wrote:

> I’d like to suggest another architecture that could be simpler. Instead of
> creating a line of communication between the AT and the main browser window
> for each test, we could instead pass the entire list of files (and
> requirements) at once, let the same code that manages the AT handle running
> through the tests, and then pass all the data back at the end. Here is my
> own (terrible ASCII art) diagram:
>
>
>
> WEB BROWSER
>  MAIN WINDOW
>      ^
>      |
>    MAGIC
>      |
>      v                     CHILD WINDOW
> LOCAL AT SHIM  <----->  FOR INDIVIDUAL TESTS
>
>
>
> Obviously we would have to make that “magic” connection only twice instead
> of hundreds of times, which will most likely be more reliable and save
> time. I don’t think this will result in any extra work if we use A11y,
> since the system is already set up to handle a list of tests.
>

Hmm... well, hundreds of connections wouldn't be a big deal, but my plan
was to use websockets so there would be only one connection anyway.
Regardless, I am not a fan of this approach given the way WPT works and the
general purpose of W3C / WPT tests.  In WPT, tests should be discrete
things that exercise a given feature.  That way the test report shows each
feature on the y axis, each implementation on the x axis, and a clear
indication of which features are supported across implementations.  That's
just a reporting thing, but it is sort of important.
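
For instance, a slice of that matrix might look like this (the data below
is made up; the shape is the point):

                            Firefox    Chrome    Edge
  /wai-aria/roles/button     PASS       PASS      FAIL
  /wai-aria/roles/slider     PASS       FAIL      FAIL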

Beyond that, I don't like the idea of the AT shim needing to run the tests:
1) that logic would need to be reproduced in every AT shim on every
platform, and 2) the tests need to run WITHOUT the AT shim, honestly.  I
should be able to run them manually if I had to.  The AT shim is an
automation feature that, on platforms where it works, would allow the tests
to run automatically.  But if I can't do that... I should still be able to
bring up the test, read the requirements, click the thing, and interrogate
the accessibility tree myself to see if it has the right stuff in it.
That's awful and I would never want to do it, but we shouldn't
disenfranchise some platform that just can't get the shim working.

I updated the diagram in the wiki to show all the pieces in WPT:

 WEB BROWSER   <----->  CHILD WINDOW         <----- HTTP TO ------>  WPTSERVE  <---> ARIA TEST CASES
 MAIN WINDOW            FOR INDIVIDUAL TEST         TEST SERVER                      IN HTML AND JS
                              ^          ^
                              |           \-------- WEBSOCKET ---->  WPTSERVE  <---> ARIA
                              |                     TO TEST SERVER                   AUTOMATION
                              |                                                      RELAY
                            MAGIC  <--------------- WEBSOCKET ---->  WPTSERVE  <---> WEBSOCKET SCRIPT
                              |                     TO TEST SERVER
                              v
                          LOCAL AT SHIM

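For the record, the MAGIC piece is just a relay: the AT shim and the test
window both open a websocket to the same wptserve endpoint, and whatever
one side sends gets forwarded to the other.  Here is a minimal sketch of
the test-window side - the /aria/at-relay endpoint, the port, and the
message shapes are all hypothetical, so treat this as an illustration
rather than a spec:

  // Sketch of the test window's half of the relay conversation.
  // ASSUMPTIONS: the endpoint path, the port, and the message shapes
  // are made up; wptserve would need a websocket handler that forwards
  // frames between this window and the local AT shim.
  function openRelay() {
    return new Promise((resolve, reject) => {
      const ws = new WebSocket('ws://' + location.hostname + ':8888/aria/at-relay');
      ws.onopen = () => resolve(ws);
      ws.onerror = reject;
    });
  }

  function queryAccessibilityTree(ws, elementId) {
    // Ask the AT shim, via the relay, what the platform accessibility
    // API currently exposes for the element with the given ID.
    ws.send(JSON.stringify({ type: 'query', id: elementId }));
    return new Promise(resolve => {
      ws.onmessage = event => resolve(JSON.parse(event.data));
    });
  }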



> As far as how that communication works, I think it would be pretty
> straightforward to include the test requirement files in a directory of the
> WPT repo and let A11y iterate over those. Sending the results through a
> websocket once the tests are completed shouldn’t be too hard either.
>

What is A11y in this context?  The way WPT works is that you select what
tests to run through the execution interface (URI), and then the framework
cycles over them.  Yes, those tests live in folders on the server - in
whatever sort of hierarchy makes sense to the test authors, but under a
top-level folder that matches the shortname of the spec (e.g., wai-aria).
If you want to run a subset of them (e.g., the tests for the button role),
you might reference /wai-aria/roles/button to get everything in that folder
and below.
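
For example, the hierarchy might look like this (the file names here are
made up for illustration):

  wai-aria/
    roles/
      button/
        button-role-001.html
        button-role-002.html
      checkbox/
        checkbox-role-001.html

Referencing /wai-aria/roles would pick up both subfolders.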

The WEB BROWSER MAIN WINDOW in the diagram above loads each test file from
the selected folders into the CHILD WINDOW FOR INDIVIDUAL TEST.  Those
files load some JS and have some embedded JS (probably) that initializes
the test, evaluates the assertions, and renders a result.  When all of the
tests in a given file have been executed (there could be many tests and
"subtests"), the results are posted back to the WEB BROWSER MAIN WINDOW,
where they are accumulated and reported.  Progress is tracked.  Rinse.
Repeat.  When everything is done, the WEB BROWSER MAIN WINDOW can emit the
results as JSON in a form that the reporting engine knows how to process.
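
To make that concrete, an individual test file might look roughly like the
sketch below.  testharness.js and testharnessreport.js are the standard WPT
harness files; getAccessibleNode() is a hypothetical helper that would ride
over the websocket relay described earlier:

  <!DOCTYPE html>
  <meta charset="utf-8">
  <title>button role is exposed</title>
  <script src="/resources/testharness.js"></script>
  <script src="/resources/testharnessreport.js"></script>
  <div role="button" id="subject">Press me</div>
  <script>
  // getAccessibleNode() is HYPOTHETICAL - it would ask the AT shim,
  // through the websocket relay, what the platform accessibility API
  // exposes for the element with the given ID.
  promise_test(async () => {
    const node = await getAccessibleNode('subject');
    assert_equals(node.role, 'button',
                  'accessibility tree exposes the button role');
  }, 'div with role=button maps to the platform button role');
  </script>

testharnessreport.js is the piece that posts each result back up to the
main window, so a file like this slots into the flow above without any
extra plumbing.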

Hope this helps.



>
>
> *From:* Gunderson, Jon R [mailto:jongund@illinois.edu]
> *Sent:* Friday, July 15, 2016 7:30 AM
> *To:* Shane McCarron <shane@spec-ops.io>; public-test-infra <
> public-test-infra@w3.org>; public-aria-test@w3.org
> *Subject:* RE: Communicating between the WPT test window and a separate
> application on the client
>
>
>
> Shane, thank you again for getting this conversation started and for your
> interest in getting ARIA testing as part of WPT.
>
>
>
> Just for clarification: the “fake AT” is basically an application that
> exposes information about a platform-specific accessibility API as WPT or
> some other tool exercises an ARIA test page.  So the application needs to
> look at the accessibility tree and related events and get that information
> back into the browser for comparison with expected results if we are to
> utilize the WPT framework to its fullest extent.
>
>
>
> Jon
>
>
>
>
>
> *From:* Shane McCarron [mailto:shane@spec-ops.io]
> *Sent:* Friday, July 15, 2016 8:43 AM
> *To:* public-test-infra <public-test-infra@w3.org>;
> public-aria-test@w3.org
> *Subject:* Communicating between the WPT test window and a separate
> application on the client
>
>
>
> Hi!
>
>
>
> The ARIA Working Group is investigating various ways to automate the
> testing of ARIA - which requires testing the accessibility API and its
> communication with assistive technologies (AT) on a bunch of platforms.
> Obviously, this is a bit of a challenge.  The current thinking is that a
> fake AT can be provided on each platform.  The fake AT is started by the
> tester (or test automation environment) prior to starting a test run.  Once
> it is running and has found the test window, it will capture the
> accessibility tree and events as the tests set up and manipulate the DOM.
> Simple enough.
>
>
>
> Except, of course, for getting the information from the fake AT back into
> the test window.  Consider the following (terrible ASCII art) diagram:
>
>
>
>  WEB BROWSER   <----->  CHILD WINDOW
>  MAIN WINDOW            FOR INDIVIDUAL TEST
>                               ^
>                               |
>                             MAGIC
>                               |
>                               v
>                          LOCAL AT SHIM
>
>
>
> The "MAGIC" is where I am playing right now.  Here are some of my ideas:
>
>    1. Shim is passed a URI on the WPT server on startup (or finds the URI
>    when it finds the test window). Communicates with it through a websocket,
>    and the window in which the test is running communicates with the same
>    websocket endpoint.  Data is relayed that way.  This seems the most
>    portable to me.
>    2. Shim runs a simple HTTP listener.  Child window communicates with
>    that using HTTP (websocket or simple HTTP GET) on localhost.  This requires
>    implementing a messaging stack... which doesn't feel very easy on every
>    platform but is probably do-able.  It might also violate CORS stuff, but
>    again - do-able.
>    3. Rely on some sort of native messaging that is platform specific.
>    This doesn't feel scalable to me.  It would also mean modifying the WPT
>    part of this any time we wanted to add another platform that had a
>    different native messaging capability.
>    4. Use a ServiceWorker in some magical way that I probably don't
>    understand.  Feels like a steep learning curve.  Also, they don't seem to
>    be widely supported yet.
>
> My hope is that some of you have already thought about a similar problem
> (there is a native application running on the platform under test that
> needs to send messages into the test in order to evaluate success or
> failure).  So... any ideas?
>
> --
>
> Shane McCarron
>
> Projects Manager, Spec-Ops
>



-- 
Shane McCarron
Projects Manager, Spec-Ops

Received on Friday, 15 July 2016 22:23:04 UTC