
Re: Browser detection for shepherd/css test runner

From: Koji Ishii <kojiishi@gmail.com>
Date: Sun, 29 Mar 2015 15:59:39 +0900
Cc: Florian Rivoal <florian@rivoal.net>, Christian Biesinger <cbiesinger@google.com>, "public-css-testsuite@w3.org" <public-css-testsuite@w3.org>
Message-Id: <B075AD41-3499-49CE-B27A-4E93507113D7@gmail.com>
To: Peter Linss <peter.linss@hp.com>
On Mar 28, 2015, at 10:45, Peter Linss <peter.linss@hp.com> wrote:
> The problem is, while there’s divergence in some areas, there’s still shared code in many (most?), and it currently requires human judgement to determine if passes from both count as two independent implementations or not.
> If someone wants to generate a map (keyed by spec section) where there’s divergence (or the converse would be better, since that set isn’t getting bigger), I’d be happy to have the test harness automatically differentiate the results.
> For the record, each result is keyed to the full UA string that generated it, so we can always go back and re-assign results to different products.
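
The result model described above could be sketched roughly as follows. This is only an illustration of the idea, not the harness's actual code; every name here (`product_of`, `SHARED_CODE`, `independent_passes`, the sample section key and UA fragments) is hypothetical:

```python
# Hypothetical sketch: each pass is keyed to the full UA string that
# generated it, and a human-maintained map records which products are
# known to share the relevant code for a given spec section.

def product_of(ua: str) -> str:
    """Map a full UA string to a product label (illustrative only).
    Check 'OPR/' before 'Chrome/' because Opera's UA contains both."""
    if "OPR/" in ua:
        return "Opera"
    if "Chrome/" in ua:
        return "Chrome"
    return "Other"

# Human judgement, recorded per spec section: products grouped together
# share code there, so their passes count as ONE implementation.
SHARED_CODE = {
    "writing-modes#block-flow": [{"Chrome", "Opera"}],  # both Blink
}

def independent_passes(section: str, passing_uas: list[str]) -> int:
    """Count independent implementations passing a test, collapsing
    products that share code for this section into a single entry."""
    products = {product_of(ua) for ua in passing_uas}
    for group in SHARED_CODE.get(section, []):
        overlap = products & group
        if len(overlap) > 1:
            products -= overlap
            products.add(frozenset(group))  # collapse to one
    return len(products)
```

With this shape, separate columns per product and a per-section implementation count can coexist: the columns stay distinct, and the human-judged `SHARED_CODE` map only affects the count.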

I agree with you on this point, but a human can always judge to count two columns as one if we think the two implementations still share the code for a given specification. I think we should separate the question of whether to count them as one or two from the question of whether to show separate icons/columns. The former needs human judgement for each feature, but I do not see any downside to doing the latter.

I was actually thinking of proposing the same thing while working on writing-modes fixes, when I had trouble figuring out which tests fail on Blink.

Could we consider the change?

> No reason to remove old passing implementations… I’ll take passes from Lynx if it helps get a spec to CR.


Received on Sunday, 29 March 2015 07:00:10 UTC
