
RE: [CSS 2.1] Trying to make sense of CSS test suite results

From: John Jansen <John.Jansen@microsoft.com>
Date: Mon, 22 Nov 2010 16:10:39 +0000
To: Boris Zbarsky <bzbarsky@MIT.EDU>, www-style list <www-style@w3.org>
Message-ID: <C340671BECD4364E8F9EBA27E8E231321AAF4654@DF-M14-05.exchange.corp.microsoft.com>
I have been thinking about the same problem. Ideally we'd have a database that maps each test case to the part of the spec it is testing in a very clear way. Then we could group the tests appropriately, say "This {appropriateScope} is not implementable...", and address that entire section of the spec at once, rather than one-off via test cases.
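As a rough illustration of the kind of mapping I mean, something like the sketch below would do. The test IDs, section numbers, and data structure here are all invented for illustration; they are not real CSS 2.1 suite data.

```python
from collections import defaultdict

# Hypothetical mapping: test case id -> spec section it exercises.
spec_map = {
    "margin-001": "8.3",
    "margin-002": "8.3",
    "float-001": "9.5.1",
}

def group_by_section(spec_map):
    """Invert the mapping: spec section -> list of tests targeting it."""
    groups = defaultdict(list)
    for test, section in spec_map.items():
        groups[section].append(test)
    return {section: sorted(tests) for section, tests in groups.items()}

groups = group_by_section(spec_map)
print(groups["8.3"])  # both margin tests fall under section 8.3
```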

However, I don't think that's appropriate for the 2.1 suite given how much data mining it would require; the cost/benefit seems low considering where we are.

-John

-----Original Message-----
From: www-style-request@w3.org [mailto:www-style-request@w3.org] On Behalf Of Boris Zbarsky
Sent: Thursday, November 18, 2010 2:45 PM
To: www-style list
Subject: [CSS 2.1] Trying to make sense of CSS test suite results

I'm looking at
http://wiki.csswg.org/test/css2.1/results#css21-test-suite-results and the categorization seems somewhat incomplete.  In particular, while the tests that have zero or one passes clearly prevent us from advancing, it seems to me that if there are two tests for a feature and zero or only one implementation manages to pass both tests, that should also block us from advancing, right?
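In other words, the rule I have in mind is: an implementation only counts toward a feature if it passes *every* test for that feature, and fewer than two such implementations should block. A minimal sketch, with all feature, test, and implementation names invented for illustration:

```python
# Hypothetical results: feature -> test -> set of implementations passing it.
results = {
    "feature-x": {
        "test-a": {"impl1", "impl2"},
        "test-b": {"impl1"},
    },
}

def implementations_passing_all(tests):
    """Implementations that pass every test for a feature
    (intersection of the per-test pass sets)."""
    sets = list(tests.values())
    return set.intersection(*sets) if sets else set()

def blocks_advancement(tests, required=2):
    """True if fewer than `required` implementations pass all tests."""
    return len(implementations_passing_all(tests)) < required

# Only impl1 passes both tests, so feature-x blocks advancement.
print(blocks_advancement(results["feature-x"]))  # True
```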

I agree that this is harder to data-mine, though....

-Boris
Received on Monday, 22 November 2010 16:11:15 GMT
