Re: Overview of testing in view of CR exit

On 05/14/2013 12:17 PM, James Graham wrote:
> On 05/14/2013 04:38 PM, Robin Berjon wrote:
>> Hi,
>>
>> based on the discussion we had at the face to face,
>
> Could you summarise those discussions for those who weren't there?

Take a look at the minutes (apparently there was a dot before the topic 
line, so it doesn't show up in the TOC):

http://www.w3.org/2013/04/24-html-wg-minutes.html#item07

>>  I've made a pass
>> over the ToC to reflect the notions we had about what is considered
>> stable on its own (as per exit criteria), what requires testing, and in
>> the latter set what has implementations and/or tests (I took a
>> conservative approach to flagging that and will be refining it to add
>> more).
>
> You will need to state your assumptions more clearly if you want useful
> feedback. For example I can identify several parts that are marked as
> "interoperable" that have known interoperability problems. Possibly this
> isn't a problem; if this is just an exercise in getting-to-CR in which
> the plan is to sweep the difficult problems under the carpet and do the
> minimum amount needed to make things look good for Process then
> obviously bringing up edge cases that browsers might not fix quickly
> isn't helpful. For other possible goals e.g. improving the long term
> viability of the platform, it is. Therefore it would be good to know
> what level of interop fail you are prepared to accept e.g. is it OK if
> things will normally work but have timing differences between browsers?
> Is there some minimum level of usage in the wild that you care about
> (and how do you measure this) etc.

What the minutes may or may not adequately capture is that we spent some 
time on this question and I'm not sure that everybody in the room 
converged on the same criteria.  In fact, the minutes capture a number 
of different proposals for the criteria.

Having different criteria is OK if we collectively end up with the same 
result, even if we individually reach that conclusion for different 
reasons.

Since you indicated that you can identify several parts that have known 
interoperability problems, I'd suggest you start by identifying a small 
number that -- in your opinion -- are the most egregious, state why you 
believe that to be the case, and see where the discussion goes from 
there.

In particular, I'd like to highlight a comment by paulc:

... i'd like to go through and identify sections for which we think no 
further testing is required
... and publish that list
... and say "this is a call for objection"
... if someone thinks there's a problem ..., then they can object w/ a 
test case

Knowing you, asking for a test case to back up your assertion that there 
is an issue is not likely to be a problem :-)

- Sam Ruby

Received on Wednesday, 15 May 2013 10:25:31 UTC