- From: James Graham <james@hoppipolla.co.uk>
- Date: Sun, 03 Nov 2013 21:18:47 +0000
- To: public-test-infra@w3.org
On 03/11/13 19:24, Mihai Balan wrote:
> Hello everyone,
>
> I'm trying to gather some data points from your past experience
> developing test suites in your respective working groups (aka: non-CSSWG
> people, please chime in :) )
>
> Do you spec test feature interoperability too (how does your feature
> work when used together with feature X)? How do you manage tests for
> feature interoperability, especially when said features have different
> maturity levels?
>
> I know feature interoperability testing sits on a very fine line between
> spec testing and implementation testing. However, I think it brings
> enough value not to be dismissed as solely the browser implementers'
> responsibility, but rather to be encouraged and shared by the W3C.

I consider the main point of the testing effort to be improving
interoperability between different implementations of the open web
platform. Therefore "implementation testing" (excluding areas like UI
where implementations may legitimately differ) is the most useful thing
that we can do. If people then want to take some subset of the tests
produced and use them for some other purpose like "spec testing", that's
fine. But writing tests simply to meet formal requirements, rather than
explicitly to find issues in the platform, has all the wrong incentives
and tends to produce bad test suites.

Given the above, it should be clear that testing the interaction between
features is essential. It is very common to find bugs in the way that
different technologies interact, and for such bugs to be an annoyance to
authors if left unaddressed. If the features are shipping in any
implementations, the spec-wise maturity is largely academic, although, of
course, features that are not forced to remain stable by existing content
are more likely to change and need their tests updated than features that
content already depends on.
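As a rough illustration of what such an interaction test can look like, here
is a minimal sketch in the testharness.js style (it assumes the usual
web-platform-tests setup, i.e. an .html test file that includes
testharness.js and testharnessreport.js; the particular pairing of CSS
transforms with getBoundingClientRect is only an illustrative choice, not a
test from any existing suite):

    test(() => {
      // Exercise two features together: a CSS transform and the
      // geometry reported by getBoundingClientRect.
      // (Assumes the script runs after <body> is available.)
      const el = document.createElement("div");
      el.style.width = "100px";
      el.style.height = "100px";
      el.style.transform = "scale(2)";
      document.body.appendChild(el);

      // The returned rect must reflect the transform applied to the box.
      const rect = el.getBoundingClientRect();
      assert_approx_equals(rect.width, 200, 0.001, "transformed width");
      assert_approx_equals(rect.height, 200, 0.001, "transformed height");

      el.remove();
    }, "CSS transforms interact correctly with getBoundingClientRect");

Neither feature is exotic on its own, but it is exactly this kind of
combination that tends to expose implementation bugs.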