
RE: HTML5 Test Development Estimation

From: James Graham <james@hoppipolla.co.uk>
Date: Tue, 06 Aug 2013 09:15:57 -0700
To: Kevin Kershaw <K.Kershaw@cablelabs.com>
Cc: Tobie Langel <tobie@w3.org>, public-test-infra <public-test-infra@w3.org>, <public-html-media@w3.org>, "'public-html-testsuite@w3.org'" <public-html-testsuite@w3.org>, Takashi Hayakawa <T.Hayakawa@cablelabs.com>, Brian Otte <B.Otte@cablelabs.com>, Nishant Shah <N.Shah@cablelabs.com>
Message-ID: <0753891cd0a28063e03f04cd6bf5d693@webmail.webfaction.com>
On 2013-08-05 09:33, Kevin Kershaw wrote:

> 2) We were aware of the tests in the GIT directories under
> web-platform-tests\old-tests\submission.  Besides Opera, there look to
> be applicable media tests from Google and Microsoft as well.  We
> understood all these required review and some validation before they
> could enter the "approved" area.  It was our intention to review and
> use these as we could.  That said, we hadn't undertaken an extensive
> review of anything in this area when I wrote the previous email.
> We've looked a bit more now.

Note that, in general, /submission/ directories are not used any more; 
the few that remain are legacies of the older process and should get 
Pull Requests that review the tests, move them to the correct location, 
and remove the directories. Looking at the list of open PRs, or the 
list of open reviews, is therefore the right way to find out what's 
pending.

> WRT the Opera tests, I assume that they're good tests and correctly
> validate important parts of html5 media behavior.  I am concerned that
> there's no apparent traceability back to the spec level requirements
> and not much embedded comment info in the test source.  In comparison,
> the tests under the Google and Microsoft submissions use the XML href
> tag to point back into the spec (although not always w/ the precision
> we'd like).  Without traceability, it's really tough to assess how
> much of the spec you have covered w/ testing.  Having some data about
> spec coverage is important to us.

I agree that kind of data is useful. However, there is a tradeoff in 
forcing too much metadata into each test. The more time people have to 
spend annotating tests, the less time they will spend actually writing 
tests. This can be a significant overhead when there are lots of tests 
that are quite easy to write. Also, because test writing is creative, 
fulfilling work, while adding metadata is not, making the kind of people 
who write the best tests do lots of annotation can discourage them from 
staying in QA-type roles. This is obviously a disaster.
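
To make the tradeoff concrete: in web-platform-tests, lightweight spec 
annotation is done with a <link rel="help"> element in the test's head 
pointing at the relevant spec section. A minimal testharness.js sketch 
(the specific URL fragment and assertion here are illustrative, not 
taken from an actual submitted test):

```html
<!DOCTYPE html>
<meta charset="utf-8">
<title>HTMLMediaElement.canPlayType() returns a string</title>
<!-- Spec pointer: one line of metadata per test; exact anchor is illustrative -->
<link rel="help" href="https://html.spec.whatwg.org/multipage/media.html#dom-navigator-canplaytype">
<script src="/resources/testharness.js"></script>
<script src="/resources/testharnessreport.js"></script>
<script>
test(function() {
  var video = document.createElement("video");
  // canPlayType() must return a DOMString ("", "maybe", or "probably")
  assert_equals(typeof video.canPlayType("video/mp4"), "string");
}, "canPlayType() returns a DOMString");
</script>
```

This is about as cheap as annotation gets; the cost argument above is 
about schemes that demand much more than one help link per test.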

Furthermore, I note that there isn't a total correlation between 
steps in the spec and implementation code, so knowing that you have 
covered an entire spec with one test per assertion doesn't mean that you 
have done a good job of covering the actual code. One day I hope to hook 
up gcov to Gecko/Blink/WebKit and investigate empirical measures of 
actual code coverage for particular testsuites.

Anyway, my point is not that knowing what a test tried to cover isn't 
helpful; it is. But that information also has a cost that has to be 
balanced. Traditionally, vendors have been biased towards the "write 
lots of tests, don't worry too much about annotation" position.
Received on Tuesday, 6 August 2013 16:17:43 UTC
