- From: Bonner, Matt (IPG) <matt.bonner@hp.com>
- Date: Fri, 4 Apr 2008 21:56:44 +0000
- To: "public-css-testsuite@w3.org" <public-css-testsuite@w3.org>
- Message-ID: <368F79A511563D43ADADF8B99EB82F1B3E553A4D@G3W0637.americas.hpqcorp.net>
A few of us at HP hope to help with the CSS 2.1 test suites, especially those related to printing, so we have started following the traffic on the CSS lists. We find the rationales included with resolutions very helpful and thank the CSSWG for publishing them.

Regarding "[CSSWG] Resolutions F2F 2007-03: Test Suites and Pending Publications", the rationale triggered an idea:

> RATIONALE: Question was "when do we stop working on the test suite?" We
> will always find deep technical issues. At some point we have
> to stop, publish the REC, and use the errata system from there.
> Some people argued that there should be concrete criteria for
> when the test suite is "done", but no one offered any usable
> criteria.

Could measuring code coverage on the relevant browser code help? Obviously the 2.1 test suite should never reach 100% code coverage, but measuring periodically now, and again as blocks of tests are added, should show coverage improving until it reaches some asymptote. At that point, it seems reasonable to advance 2.1 to Proposed Recommendation (PR).

To restate: code coverage would serve as a relative measurement, not an absolute one. The percentage of code covered would differ for each browser. But using this tool might also help uncover areas where more tests are needed, or where logic errors exist in browser code.

regards,
Matt
--
Matt Bonner
Hewlett-Packard Company
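The "asymptote" heuristic above could be sketched as a small script. All numbers, names, and thresholds here are hypothetical; real coverage figures would come from an instrumented browser build (e.g. via gcov) run against successive snapshots of the test suite:

```python
# Hypothetical coverage snapshots (%), one per block of tests added.
snapshots = [41.2, 48.7, 53.1, 55.4, 56.2, 56.5, 56.6]

def has_plateaued(values, window=3, epsilon=0.5):
    """Return True when the last `window` snapshots differ by less than
    `epsilon` percentage points, i.e. coverage has reached an asymptote
    and the suite is arguably 'done' (as a relative, per-browser signal,
    not an absolute criterion)."""
    if len(values) < window:
        return False
    recent = values[-window:]
    return max(recent) - min(recent) < epsilon

print(has_plateaued(snapshots))  # recent deltas are small -> True
```

The `window` and `epsilon` values are illustrative knobs; each browser vendor would tune them against its own codebase, since the absolute percentages are not comparable across implementations.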
Received on Saturday, 5 April 2008 08:15:04 UTC