- From: fantasai <fantasai.lists@inkedblade.net>
- Date: Sun, 03 Oct 2010 08:02:00 -0700
- To: "www-style@w3.org" <www-style@w3.org>
Summary:
- Discussed CSS2.1 Implementation Reports: how to get them done,
what remains to be done, etc.
- RESOLVED: Adopt CSS3 CR exit criteria for CSS2.1 so that we can
use beta builds
====== Full minutes below ======
Present:
David Baron
Arron Eicholz
Simon Fraser
Sylvain Galineau
Daniel Glazman
John Jansen
Brad Kemper
Peter Linss
Alex Mogilevsky
David Singer
Steve Zilles
<RRSAgent> logging to http://www.w3.org/2010/09/22-CSS-irc
ScribeNick: smfr
Administrative
--------------
no agenda additions
CSS2.1 Implementation Reports
-----------------------------
wanted to hear status from mozilla and opera; no one from either is
on the call
<glazou> dbaron: can you answer through IRC?
<dbaron> what's the question?
sylvaing: want to hear from apple
<dbaron> Over the weekend, before the template was up, I ran the tests
for chapters 1-4
<dbaron> but it turns out that isn't actually very useful for building
the template, so I'll probably toss that work out
smfr: we do not have the resources to go through the test suite,
unless it gets automated
<dbaron> I'm hoping to run the noninteractive parts of the testsuite
through the reftest harness to reduce the tests to unique images
<dbaron> which Opera says should be about ~3000 instead of ~9000 or
something like that
<dbaron> I'm not exactly sure where that leaves me, though; it depends
how much time I'll have.
smfr: we could try to crowdsource running the tests
sylvaing: so someone could fail every test
sylvaing: this is a vendor's report; should be run by the vendor
plinss: maybe we can trust the results if multiple people give the
same answers
<dbaron> Also, I'm probably going to add an additional state to my
implementation report
<dbaron> Since I'm unlikely to be able to figure out whether all the
tests that fail are valid or not
sylvaing: impl. report should explain if the report is not produced
directly by the vendor
<dbaron> so I'll have three failing states instead of two: bug, fail,
and invalid
smfr: maybe we can use the crowdsourced results to focus our testing
sylvaing: you still have to go through the results, so maybe you
don't save that much
smfr: does the harness let you query results for a given user agent?
plinss: tabatkins is working on migrating the HP harness from the w3c server
sylvaing: apple probably won't make 10/15, nor google or mozilla
sylvaing: google WILL make it (according to tabatkins)
<sylvaing> correction: microsoft+google indicated they'd make the date;
apple,mozilla and opera not, it seems
sylvaing: we won't have two implementations for each testcase
<dbaron> BTW, shouldn't you be talking about 10/18 rather than 10/15
given when the test suite and implementation report template
were available?
<smfr> yes, should be 10/18
<sylvaing> yes, 10/18; corrected
<bradk> What about others, such as Prince?
plinss: don't have definite answer for mozilla and opera, iffy for apple
CSS 2.1 CR Exit Criteria
------------------------
relaxed exit criteria to allow use of current betas
smfr: do webkit nightly builds count? not public betas, but are downloadable
bradk: opera has the same problem
plinss: nightlies too unstable
* smfr explains how safari and webkit relate
plinss: nightlies are ok so long as they have been available for a min.
of a month
<dsinger> that's more like it. we want that the *feature* is stable,
not that the *build* is old
sylvaing: we have two vendors submitting reports by 10/18. can the
other vendors estimate when they can submit?
smfr: can't for apple
<bradk> Is it true that the feature is stable if it is not prefixed?
<dsinger> so, ask that the test passes in a build at least two weeks
old (and that it hasn't broken in the meantime)
plinss: do we want to relax exit criteria for 2.1?
glazou: if we relax the build criteria, will it help?
sylvaing: it's not the build, it is the cost of running the tests
glazou: we should be pragmatic. we should do whatever we need to make
CSS 2.1 a Rec.
sylvaing: there are other issues
discussion at TPAC will ensue
sylvaing: maybe we can relax the rule for 2 passes on all tests
plinss: do we need a complete report from each vendor, or just
enough to show 2 impls passing each test?
smfr: what happens if tests fail in all impls?
arronei: there are 126 failures in all browsers
<dbaron> Does that 126 number include corrections from when that list
was discussed on public-css-testsuite?
<dbaron> (i.e., errors in some of the tests were pointed out)
sylvaing: how many tests fail in one or both of IE and chrome
(the 2 impl. reports we have)?
glazou: does arronei have reports for all browsers
sylvaing: yes, but MS will not submit reports for other vendors
but MS is willing to share data to cross-check the results
smfr talked about lack of automation
dsinger: what's the real issue
glazou: it's a problem for tests that don't pass in 2 browsers
sylvaing: let's say MS and chrome submit, and other vendors focus
on failing tests
sylvaing: can we still go with that?
glazou: we can take it to the letter. "we need 2 implementations
for each feature"
dsinger: we've met the spirit of the law, if not necessarily the letter
<sylvaing> (smfr, I think it's the reverse actually..)
<dsinger> or rather, we met the letter of the law, but more
importantly, we also can say we met the spirit
glazou: instead of submitting 4 columns of pass/fail, we list 2
browsers for each
<JohnJansen> it looks like there are about 1800 tests that neither
IE nor Chrome pass
<JohnJansen> or rather about 7600 tests that we both pass
szilles: it's up to the browser vendors to control how they look
JohnJansen: 1800 tests that either IE or chrome fail
arronei: it's only 126
<smfr> we have 1800 lacking two passes (from IE or chrome)
<smfr> so other vendors should focus on those tests
sylvaing: if we had mozilla, what would the number be?
dsinger: how many fail because the test is wrong?
smfr / arronei: those tests are gradually being addressed
dsinger: what about two tests that can't both be passed?
arronei: haven't come across any of those
<dbaron> I've come across two that can't both be passed
smfr: is the feature == test assumption for exit realistic?
<dbaron> but it was due to an error in one of them
<dbaron> feature == test was never the assumption
smfr: a "feature" is covered by a set of tests, maybe we
shouldn't require passes of all tests for a given feature
plinss: we've done that before
sylvaing: question for opera: do they need to submit data for 3 platforms?
plinss: it's not necessary, may be helpful if only one platform passes
plinss: cannot count different platforms as different implementations
sylvaing: re: feature vs. test: since we have 20% of tests failing,
it doesn't matter much
plinss: to conclude
plinss: partial reports from some vendors are ok
<sylvaing> any objections ?
no objections
plinss: can we get a list of the 1800 tests that we need reports for
JohnJansen: MS can submit its report on 9/29
we don't know when chrome will submit
plinss: can MS publish an informal list of where other vendors need to focus?
JohnJansen: have to check
JohnJansen: if we have mozilla, that 1800 number goes way down
glazou: 7600 tests pass in both browsers, in both XHTML1 and HTML4,
or just one?
arronei: IE9 beta and Chrome
plinss: can we get resolution on exit criteria? IE9 beta is not good enough
exit criteria currently state "shipping builds"
JohnJansen: by the time of publishing, it will have been out for 30 days
<dbaron> did we previously resolve to change the 2.1 exit criteria to
match the ones we've recently been using for css3 modules?
RESOLUTION: will change exit criteria to 2 publicly available builds
(including nightlies and betas), as long as they have been
available to the general public for 1 month
plinss: should not include experimental builds, or builds made to just
pass a test
plinss: intent should be that the feature should be present in nightlies
for a month
<dbaron> builds along a development line intended for a release?
<dsinger> basically, we have to defend the results with a straight face.
that's the bottom line.
plinss: this is adopting for 2.1 what we have been doing for CSS 3
RESOLVED: adopt current exit criteria for CSS 2.1
sylvaing: have an action to talk to tabatkins to see how chrome is doing,
and share testcases that don't pass in both
sylvaing: any actions on other vendors?
action on other vendors: get implementation reports done; partial
reports are acceptable
plinss: no intention to slip the dates
glazou: arronei, did you run the tests manually at least once?
glazou: how many per day?
arronei: 600 tests in an hour, manually
smfr: i would like some kind of basic harness to come with the test suite
Meeting closed.
Received on Sunday, 3 October 2010 15:03:07 UTC