Re: Feature interoperability tests - has it been done before?

Hello everybody,
James, Tobie, thanks for chiming in :)

I only now realise my wording was a little confusing. Some brief
clarifications below.

My main interest was in the best way to handle, from a "test case
management" perspective, tests that exercise multiple features (e.g.
Shadow DOM and the Fullscreen API). More specifically, what is the best
way to do this when the features involved are developed by different
WGs (e.g. the CSSWG and WebApps)?
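
To make this concrete, here is a rough sketch of what such a
cross-feature test could look like with testharness.js. I'm using
today's unprefixed names (attachShadow(), requestFullscreen()) rather
than the prefixed draft ones, and the retargeting assertion is just my
assumption about the intended behaviour, not settled spec text:

<!DOCTYPE html>
<title>Fullscreen on an element inside a shadow tree</title>
<script src="/resources/testharness.js"></script>
<script src="/resources/testharnessreport.js"></script>
<div id="host"></div>
<button id="go">Click to start</button>
<script>
// Cross-feature sketch: Shadow DOM + Fullscreen API.
var t = async_test("fullscreenElement is retargeted to the shadow host");
var host = document.getElementById("host");
var inner = document.createElement("div");
host.attachShadow({mode: "open"}).appendChild(inner);

// requestFullscreen() must be called from a user gesture, hence the
// button instead of running the test on load.
document.getElementById("go").onclick = t.step_func(function() {
  document.onfullscreenchange = t.step_func_done(function() {
    // The element inside the shadow tree should not leak across the
    // shadow boundary; the host is expected to be reported instead.
    assert_equals(document.fullscreenElement, host);
  });
  inner.requestFullscreen();
});
</script>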

For features that are, say, CSS-only, this is easier, since there's a
build system that copies each test into all the test suites it is
relevant for. On the other hand, I'm not exactly sure what happens when
a test also exercises, say, a new JS API. My main concern is that test
suite owners become less aware of new tests for their spec once finding
them involves scouring other groups' test repositories.
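
For reference, as I understand it a test advertises the specs it is
relevant to through <link rel="help"> metadata in its header, which the
build system reads when filing the test into suites. A minimal sketch
(the spec anchors below are illustrative, not real fragment
identifiers):

<!DOCTYPE html>
<html>
 <head>
  <title>CSS Transforms on a fullscreen element</title>
  <link rel="author" title="Mihai Balan" href="mailto:mibalan@adobe.com">
  <!-- One rel="help" link per spec the test exercises; the build
       system would file the test in each corresponding suite.
       Illustrative URLs, not actual spec anchors. -->
  <link rel="help" href="http://www.w3.org/TR/css-transforms-1/#transform-rendering">
  <link rel="help" href="http://fullscreen.spec.whatwg.org/#rendering">
 </head>
 <body>
  <!-- test content goes here -->
 </body>
</html>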

Thanks a lot,
Mihai

Mihai Balan | Quality Engineer @ Web Engine team | mibalan@adobe.com |
+4-031.413.3653 / x83653 | Adobe Systems Romania




On 11/3/13 11:18 PM, "James Graham" <james@hoppipolla.co.uk> wrote:

>On 03/11/13 19:24, Mihai Balan wrote:
>> Hello everyone,
>>
>> I'm trying to gather some data points from your past experience
>> developing test suites in your respective working groups (aka: non-CSSWG
>> people, please chime in :) )
>>
>> Do you also test feature interoperability in your spec tests (i.e.
>> how your feature behaves when used together with feature X)? How do
>> you manage tests for feature interoperability, especially when the
>> features involved have different maturity levels?
>>
>> I know feature interoperability testing sits on a very fine line between
>> spec testing and implementation testing. However, I think it brings
>> enough value not to be dismissed as solely the browser implementers'
>> responsibility, but rather to be encouraged and shared by the W3C.
>
>I consider the main point of the testing effort to be improving
>interoperability between different implementations of the open web
>platform. Therefore "implementation testing" ‹ excluding areas like UI
>where implementations may legitimately differ ‹ is the most useful thing
>that we can do. If people then want to take some subset of the tests
>produce and use them for some other purpose like "spec testing", that's
>fine. But writing tests simply to meet formal requirements rather than
>explicitly to find issues in the platform has all the wrong incentives
>and tends to produce bad testsuites.
>
>Given the above, it should be clear that testing the interaction between
>features is essential. It is very common to find bugs in the way that
>different technologies interact, and for such bugs to be an annoyance to
>authors if left unaddressed. If the features are shipping in any
>implementations, the spec-wise maturity is largely academic although, of
>course, features that are not forced to remain stable by existing
>content are more likely to change and need their tests updated than
>features that content already depends on.
>
>

Received on Wednesday, 27 November 2013 10:20:43 UTC