Fwd: ACTION-139 options for test suite design

(Moving this conversation and thread over to a new mailing list, public-multilingualweb-lt-tests@w3.org. This list is specifically for publishing all input examples, expected outputs, and developments in test suite design.)
 


Hi Yves, Felix, Others...
 
I wanted to email the group with an update on test-suite design before addressing Yves’s comments. For those unaware, an important note regarding the ITS 1.0 test suite: a gold standard input and an expected output are provided for each data category. Implementers run their tools against the gold standard input, and in each case the actual output is compared to the expected output.
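
For anyone new to the mechanics, here is a minimal sketch (in Python) of that comparison loop. The directory layout and the helper name are hypothetical, not the actual test suite structure:

import filecmp
from pathlib import Path

# Hypothetical layout: expected/<category>/<test>.xml holds the gold
# standard expected output; actual/<category>/<test>.xml holds what an
# implementation produced from the matching gold standard input.
def run_suite(expected_dir="expected", actual_dir="actual"):
    failures = []
    for expected in Path(expected_dir).rglob("*.xml"):
        actual = Path(actual_dir) / expected.relative_to(expected_dir)
        if not actual.exists() or not filecmp.cmp(expected, actual, shallow=False):
            failures.append(str(expected))
    return failures

A byte-for-byte file comparison like this is the strictest form; Yves’s note in the forwarded message below is precisely about making that comparison less brittle.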

In terms of the 2.0 test suite, I have some key points to propose and would like your feedback:
 
1)    We plan to provide an initial static test suite, like that provided for 1.0 (http://www.w3.org/International/its/tests/). Once this test suite is up and running, we’ll look at building a more dynamic, forward-facing validator, like the one provided at http://validator.w3.org/i18n-checker/. (Short-term goal: test suite; long-term goal: validator.)

2)    The 1.0 data categories and tests will carry over into 2.0. All 1.0 categories (excluding Ruby and Directionality) will have new HTML5 gold standard inputs and expected outputs published as part of the new 2.0 test suite. These will be derived from the spec document, using an iterative process.
 
3)    By the middle of August, XML and HTML5 gold standard inputs and expected outputs will be provided for the new 2.0 categories (Domain, Locale Filter and External Resource). These will be combined with the updated 1.0 data categories to form the new ITS 2.0 test suite, which will then be published and hosted.
 
4)    At the Prague meeting we’ll get updated commitments from implementers as to the categories they are willing to test, and start rolling out testing against implementations.
 
Point 4 leads nicely to Yves’s email about the previous manual comparison of expected output against each implementation. It’s a very valid point, especially given that 2.0 has more data categories and more implementations. I’d like to shelve this for now, as over the coming month we’re primarily looking at gold standard input / expected output design. However, at the end of August I’d like to look specifically at how we compare the output of the various implementations against our expected output, and how we can simplify that comparison.
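
To make Yves’s suggestion (quoted below) concrete, here is a rough sketch of the kind of flat, easily diff-able dump he describes, with each element’s attributes sorted by name so that engine-dependent ordering does not cause spurious differences. The output format and the use of its:translate are illustrative only, not an agreed design:

import xml.etree.ElementTree as ET

ITS_NS = "{http://www.w3.org/2005/11/its}"

def dump_nodes(xml_path):
    """Emit one tab-delimited line per node: name, node type, info."""
    lines = []
    for elem in ET.parse(xml_path).iter():
        # Sort attribute names: different engines report attributes
        # in different orders, so normalise before comparing.
        for name in sorted(elem.attrib):
            lines.append(f"{elem.tag}/@{name}\tattribute\t{elem.attrib[name]}")
        # Only the locally specified its:translate value; global rules
        # and inheritance are ignored in this sketch.
        translate = elem.get(ITS_NS + "translate", "unspecified")
        lines.append(f"{elem.tag}\telement\ttranslate={translate}")
    return "\n".join(lines)

Two such dumps can then be compared with a plain text diff, with no XML-aware tooling needed.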
 
I hope that clears things up a little; please provide your thoughts.
 
Dom.



--
Dominic Jones | Research Assistant 
KDEG, Trinity College Dublin, Ireland.
Work: + 353 (0) 1896 8426 
Mobile: + 353 (0) 879259719 
http://www.scss.tcd.ie/dominic.jones/





Begin forwarded message:

> Resent-From: public-multilingualweb-lt@w3.org
> From: Felix Sasaki <fsasaki@w3.org>
> Subject: Re: ACTION-139 options for test suite design
> Date: 27 July 2012 10:55:21 IST
> To: Yves Savourel <ysavourel@enlaso.com>
> Cc: public-multilingualweb-lt@w3.org
> 
> +1. The nodelist-with-its-information.xml is based on what we did for the ITS 1.0 test suite, but making this simpler sounds like a good plan.
> 
> Felix
> 
> 2012/7/27 Yves Savourel <ysavourel@enlaso.com>
> Hi Felix, Dave, all,
> 
> Just a few notes about testing:
> 
> I was starting to look at producing output similar to Felix's nodelist-with-its-information.xml, and I was wondering whether outputting something more easily comparable might be simpler.
> 
> You need special tools to compare two XML documents (to ignore whitespace, etc.), whereas a simple tab-delimited list of nodes and the expected info (outputType, output) could be compared very easily. Just a thought.
> 
> Another one: we should probably sort the list of attribute nodes on output, as different engines will return them in different orders.
> 
> Cheers,
> -ys
> 
> -- 
> Felix Sasaki
> DFKI / W3C Fellow
> 

Received on Friday, 27 July 2012 11:26:35 UTC