
Re: Communicating between the WPT test window and a separate application on the client

From: Shane McCarron <shane@spec-ops.io>
Date: Fri, 15 Jul 2016 14:00:29 -0500
Message-ID: <CAJdbnODc8qJCr=M0R4jNOa8hfSL4oQTYootqM8VcaVmesf7ktA@mail.gmail.com>
To: Dylan Barrell <dylan.barrell@deque.com>
Cc: David Brett <dbrett@microsoft.com>, public-aria-test@w3.org, Jon R Gunderson <jongund@illinois.edu>, public-test-infra <public-test-infra@w3.org>
(As I mentioned - bad at being on holiday.)

Comments inline:

On Fri, Jul 15, 2016 at 1:34 PM, Dylan Barrell <dylan.barrell@deque.com>
wrote:

> I would like some clarifying information:
>
> 1) Do you have an opinion on what language the tests will be written in?
> Would it be a requirement to have this be bindable to any language or do we
> just propose picking one?
>

HTML / JS and some declarative data in JSON.
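
To make that concrete, a declarative description might look something like this — the field names here are purely illustrative, not a proposed format:

```json
{
  "test": "aria-pressed on a toggle button",
  "element": "#toggle",
  "expected": {
    "role": "button",
    "pressed": "false"
  }
}
```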


> 2) In your diagrams, there is some control instance running the browsers.
> In the Web testing world, the communication would normally be along the
> lines of
>
> TEST RUNNER <-----------> BROWSER AUTOMATION (e.g. WebDriver)
> <-----------> BROWSER
>

I guess I would refer you to the Web Platform Tests documentation at
http://testthewebforward.org/docs/

There is a 'wptrunner' module that can do some test automation via
WebDriver, but that is outside the scope of what we are talking about.  The
tests should either be fully automated (in which case WebDriver is not
required) or require some manual operations while remaining automatable via
WebDriver.  But they need to be runnable WITHOUT WebDriver as well (for
environments that don't support it).
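
For reference, a WPT test is just an HTML page that pulls in testharness.js, so the same file runs with or without WebDriver in the loop. The markup and assertion below are only a toy example, not part of any proposed suite:

```html
<!DOCTYPE html>
<meta charset="utf-8">
<title>ARIA toy example</title>
<script src="/resources/testharness.js"></script>
<script src="/resources/testharnessreport.js"></script>
<button id="save" aria-pressed="false">Save</button>
<script>
test(() => {
  // Purely illustrative assertion against the static markup above.
  const btn = document.getElementById("save");
  assert_equals(btn.getAttribute("aria-pressed"), "false");
}, "aria-pressed reflects the markup");
</script>
```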



>
> How does this picture map to the picture(s) you drew above? Could it be
> seen as follows
>                            <-----------> BROWSER AUTOMATION (e.g.
> WebDriver) <-----------> BROWSER
> TEST RUNNER
>                            <-----------> LOCAL AT SHIM <-----------> OS
>
> Where the test runner is responsible for communicating with both the
> browser automation AND the AT shim?
>

No.  Browser Automation is not part of the model here.  WPT already has
rich support for this, but it is at a higher level. The "Test Runner" in
WPT is running in the browser, opening child windows into which each test
is loaded and executed.


>
> If this is the case, then the logic for executing individual or batch
> tests is inside the test runner and could be seen as essentially an
> expansion of the WebDriver API. If this is the case, it could be bound to
> any language including JS (Node), Java, etc.
>

Sorry - it's not the case.

Now - back to my regularly scheduled vacation.


>
> --Dylan
>
>
> On Fri, Jul 15, 2016 at 1:50 PM, Shane McCarron <shane@spec-ops.io> wrote:
>
>> I'm out of position right now. I understand what you are proposing, but I
>> think it flies in the face of the general architecture of tests under WPT.
>> Basically in the w3c you need to have individual tests for each feature and
>> those should be individually executable.  I can imagine how to accomplish
>> that in your model... Let me ponder a little more.
>>
>> On Jul 15, 2016 12:40 PM, "David Brett" <dbrett@microsoft.com> wrote:
>>
>> I’d like to suggest another architecture that could be simpler. Instead
>> of creating a line of communication between the AT and the main browser
>> window for each test, we could instead pass the entire list of files (and
>> requirements) at once, let the same code that manages the AT handle running
>> through the tests, and then pass all the data back at the end. Here is my
>> own (terrible ASCII art) diagram:
>>
>>
>>
>> WEB BROWSER
>>  MAIN WINDOW
>>      ^
>>      |
>>    MAGIC
>>      |
>>      v                     CHILD WINDOW
>> LOCAL AT SHIM  <----->  FOR INDIVIDUAL TESTS
>>
>>
>>
>> Obviously we only have to make that “magic” connection twice instead of
>> hundreds of times, which will most likely be more reliable and save time.
>> I don’t think this will result in any extra work if we use A11y, since the
>> system is already set up to handle a list of tests.
>>
>>
>>
>> As far as how that communication works, I think it would be pretty
>> straightforward to include the test requirement files in a directory of the
>> WPT repo and let A11y iterate over those. Sending the results through a
>> websocket once the tests are completed shouldn’t be too hard either.
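
The batch idea above — iterate over a directory of requirement files on the AT side, then report everything back at once — could be sketched roughly as follows. The directory layout, file shape, and PASS status are my own assumptions, not anything defined by WPT:

```python
# Sketch only: walk a directory of hypothetical JSON requirement files
# and collect per-test results to report back in a single message.
import json
import pathlib


def run_batch(requirements_dir):
    results = {}
    for path in sorted(pathlib.Path(requirements_dir).glob("*.json")):
        spec = json.loads(path.read_text())
        # A real harness would drive the AT shim here and compare the
        # captured accessibility tree; we just echo the expectation so
        # the collection/reporting shape is visible.
        results[path.name] = {
            "expected": spec.get("expected"),
            "status": "PASS",
        }
    return results
```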
>>
>>
>>
>> *From:* Gunderson, Jon R [mailto:jongund@illinois.edu]
>> *Sent:* Friday, July 15, 2016 7:30 AM
>> *To:* Shane McCarron <shane@spec-ops.io>; public-test-infra <
>> public-test-infra@w3.org>; public-aria-test@w3.org
>> *Subject:* RE: Communicating between the WPT test window and a separate
>> application on the client
>>
>>
>>
>> Shane, thank you again for getting this conversation started, and for
>> your interest in getting ARIA testing into WPT.
>>
>>
>>
>> Just for clarification: the “fake AT” is basically an application that
>> exposes information from a platform-specific accessibility API while WPT
>> or some other tool exercises an ARIA test page.  So the application needs
>> to look at the accessibility tree and related events and get that
>> information back into the browser for comparison with expected results,
>> if we are to utilize the WPT framework to its fullest extent.
>>
>>
>>
>> Jon
>>
>>
>>
>>
>>
>> *From:* Shane McCarron [mailto:shane@spec-ops.io <shane@spec-ops.io>]
>> *Sent:* Friday, July 15, 2016 8:43 AM
>> *To:* public-test-infra <public-test-infra@w3.org>;
>> public-aria-test@w3.org
>> *Subject:* Communicating between the WPT test window and a separate
>> application on the client
>>
>>
>>
>> Hi!
>>
>>
>>
>> The ARIA Working Group is investigating various ways to automate the
>> testing of ARIA - which requires testing the accessibility API and its
>> communication with assistive technologies (AT) on a bunch of platforms.
>> Obviously, this is a bit of a challenge.  The current thinking is that a
>> fake AT can be provided on each platform.  The fake AT is started by the
>> tester (or test automation environment) prior to starting a test run.  Once
>> it is running and has found the test window, it will capture the
>> accessibility tree and events as the tests set up and manipulate the DOM.
>> Simple enough.
>>
>>
>>
>> Except, of course, for getting the information from the fake AT back into
>> the test window.  Consider the following (terrible ASCII art) diagram:
>>
>>
>>
>>  WEB BROWSER   <----->  CHILD WINDOW
>>  MAIN WINDOW            FOR INDIVIDUAL TEST
>>                               ^
>>                               |
>>                             MAGIC
>>                               |
>>                               v
>>                          LOCAL AT SHIM
>>
>>
>>
>> The "MAGIC" is where I am playing right now.  Here are some of my ideas:
>>
>>    1. Shim is passed a URI on the WPT server on startup (or finds the
>>    URI when it finds the test window). Communicates with it through a
>>    websocket, and the window in which the test is running communicates with
>>    the same websocket endpoint.  Data is relayed that way.  This seems the
>>    most portable to me.
>>    2. Shim runs a simple HTTP listener.  Child window communicates with
>>    that using HTTP (websocket or simple HTTP GET) on localhost.  This requires
>>    implementing a messaging stack... which doesn't feel very easy on every
>>    platform but is probably do-able.  It might also violate CORS stuff, but
>>    again - do-able.
>>    3. Rely on some sort of native messaging that is platform specific.
>>    This doesn't feel scalable to me.  It would also mean modifying the WPT
>>    part of this any time we wanted to add another platform that had a
>>    different native messaging capability.
>>    4. Use a ServiceWorker in some magical way that I probably don't
>>    understand.  Feels like a steep learning curve.  Also, they don't seem to
>>    be widely supported yet.
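
Of the options above, (2) is the easiest to sketch with nothing but a standard library: a tiny localhost listener that sets the CORS header so a page from another origin is allowed to read the response. Everything here — the /tree path, the sample payload, the handler name — is illustrative only, not a proposed shim design:

```python
# Sketch of option 2: the shim runs a simple HTTP listener on localhost
# and the child test window fetches the captured data from it.
import http.server
import json
import threading
import urllib.request


class ShimHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Invented sample payload; a real shim would serialize the
        # captured accessibility tree here.
        body = json.dumps({"role": "button", "name": "Save"}).encode()
        self.send_response(200)
        # The CORS header is what lets the WPT child window (a
        # different origin) read a GET against localhost.
        self.send_header("Access-Control-Allow-Origin", "*")
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the sketch quiet
        pass


server = http.server.HTTPServer(("127.0.0.1", 0), ShimHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Stand-in for the child window's fetch against localhost.
port = server.server_address[1]
resp = urllib.request.urlopen(f"http://127.0.0.1:{port}/tree")
cors = resp.headers["Access-Control-Allow-Origin"]
tree = json.loads(resp.read())
server.shutdown()
```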
>>
>> My hope is that some of you have already thought about a similar problem
>> (there is a native application running on the platform under test that
>> needs to send messages into the test in order to evaluate success or
>> failure).  So... any ideas?
>>
>> --
>>
>> Shane McCarron
>>
>> Projects Manager, Spec-Ops
>>
>>
>>
>
>
> --
> Download the aXe browser extension for free:
>
> Firefox: https://addons.mozilla.org/en-US/firefox/addon/axe-devtools
> Chrome:
> https://chrome.google.com/webstore/detail/axe/lhdoppojpmngadmnindnejefpokejbdd?hl=en-US
>
> Life is ten percent what happens to you and ninety percent how you respond
> to it. - Lou Holtz
>
>


-- 
Shane McCarron
Projects Manager, Spec-Ops
Received on Friday, 15 July 2016 19:01:25 UTC
