Re: Overview of testing in view of CR exit

On 05/14/2013 04:38 PM, Robin Berjon wrote:
> Hi,
>
> based on the discussion we had at the face to face,

Could you summarise those discussions for those who weren't there?

>  I've made a pass
> over the ToC to reflect the notions we had about what is considered
> stable on its own (as per exit criteria), what requires testing, and in
> the latter set what has implementations and/or tests (I took a
> conservative approach to flagging that and will be refining it to add
> more).

You will need to state your assumptions more clearly if you want useful 
feedback. For example, I can identify several parts that are marked as 
"interoperable" but have known interoperability problems. Possibly this 
isn't a problem; if this is just an exercise in getting to CR, in which 
the plan is to sweep the difficult problems under the carpet and do the 
minimum needed to make things look good for the Process, then obviously 
bringing up edge cases that browsers might not fix quickly isn't 
helpful. For other possible goals, e.g. improving the long-term 
viability of the platform, it is. Therefore it would be good to know 
what level of interop failure you are prepared to accept, e.g. is it OK 
if things normally work but have timing differences between browsers? 
Is there some minimum level of usage in the wild that you care about 
(and how do you measure it), etc.?

Received on Tuesday, 14 May 2013 16:18:06 UTC