
RE: ACTION-139 options for test suite design

From: Yves Savourel <ysavourel@enlaso.com>
Date: Mon, 30 Jul 2012 22:06:01 +0200
To: "'Dominic Jones'" <Dominic.Jones@scss.tcd.ie>, <public-multilingualweb-lt-tests@w3.org>
CC: "'Multilingual Web LT Public List'" <public-multilingualweb-lt@w3.org>
Message-ID: <assp.05589cca61.assp.0558ef898d.010601cd6e8e$c3267020$49735060$@com>
Hi Dom, all,

The overall plan looks fine to me.

I would just stress that the sooner we get a standard output, the better. And mid-August is very good :)


> 1) We plan to provide an initial static test suite, like that 
> provided for 1.0, (http://www.w3.org/International/its/tests/) 
> but after this test suite is up and running we’ll look at 
> building a more dynamic, forward facing, validator, 
> like provided at http://validator.w3.org/i18n-checker/ 
> (Short term goal test-suite, long term validator.)

Interesting. I suppose one could easily validate the ITS syntax, but how would you validate that a tool processes the input as expected?

You could obviously develop your own processor, but since you can't control what another processor outputs, how could you compare your results with the output of the tool under test? Would the input of the validator be the same output format used in the test suite? I'm just wondering how far real-life production tools would be willing to go to validate their results.

Cheers,
-yves




From: Dominic Jones [mailto:Dominic.Jones@scss.tcd.ie] 
Sent: Friday, July 27, 2012 1:27 PM
To: public-multilingualweb-lt-tests@w3.org
Cc: Multilingual Web LT Public List
Subject: Fwd: ACTION-139 options for test suite design

(Moving this conversation and thread over to a new mailing list, public-multilingualweb-lt-tests@w3.org. This list is specifically for publishing all input examples, expected outputs and developments in test suite design.)

Hi Yves, Felix, Others...
 
I wanted to email the group to update you on test-suite design before addressing Yves' comments. For those unaware, an important note regarding the ITS 1.0 test suite: a gold-standard input and an expected output are provided for each data category. Implementers run their tools against the gold-standard input, and in each case the result is compared to the expected output.
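That comparison step can be sketched as a tiny harness (a hypothetical illustration only — file names and layout are mine, not the test suite's conventions):

```python
import difflib
from pathlib import Path

def compare_output(actual_path: Path, expected_path: Path) -> bool:
    """Compare an implementation's output file against the expected
    (gold-standard) output file.

    Trailing whitespace on each line is normalized so trivial
    formatting differences do not cause spurious failures.
    """
    actual = [ln.rstrip() for ln in actual_path.read_text(encoding="utf-8").splitlines()]
    expected = [ln.rstrip() for ln in expected_path.read_text(encoding="utf-8").splitlines()]
    if actual == expected:
        return True
    # Show a unified diff so implementers can see exactly where they diverge.
    for line in difflib.unified_diff(expected, actual, "expected", "actual", lineterm=""):
        print(line)
    return False
```

A real harness would loop this over every data category's test files and report a pass/fail summary per implementation.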

In terms of the 2.0 test suite, I have some key points to propose and would like your feedback:
 
1)    We plan to provide an initial static test suite, like the one provided for 1.0 (http://www.w3.org/International/its/tests/), but once this test suite is up and running we’ll look at building a more dynamic, forward-facing validator, like the one provided at http://validator.w3.org/i18n-checker/. (Short-term goal: test suite; long-term goal: validator.)

2)    The 1.0 data categories and tests will carry over into 2.0. All 1.0 categories (excluding Ruby and Directionality) will have new HTML5 gold-standard inputs and expected outputs published as part of the new 2.0 test suite. These will be derived from the spec document using an iterative process.
 
3)    By mid-August, XML and HTML5 gold-standard inputs and expected outputs will be provided for the new 2.0 categories (Domain, Locale Filter and External Resource). These will be combined with the updated 1.0 data categories to form the new ITS 2.0 test suite, which will then be published and hosted.
 
4)    At the Prague meeting we’ll get updated commitments from implementers as to the categories they are willing to test, and start rolling out testing against implementations.
 
Point 4 leads nicely to Yves’ email about the previous manual comparison of expected output against each implementation. It’s a very valid point, especially given that 2.0 has more data categories and more implementations. I’d like to shelve this for now, as over the coming month we’re primarily looking at gold-standard input / expected output design. However, at the end of August I’d like to look specifically at how we compare and simplify the output of the various implementations against our expected output.
 
I hope that clears things up a little; please share your thoughts.
 
Dom.



--
Dominic Jones | Research Assistant 
KDEG, Trinity College Dublin, Ireland.
Work: + 353 (0) 1896 8426 
Mobile: + 353 (0) 879259719 
http://www.scss.tcd.ie/dominic.jones/




Begin forwarded message:


Resent-From: public-multilingualweb-lt@w3.org
From: Felix Sasaki <fsasaki@w3.org>
Subject: Re: ACTION-139 options for test suite design
Date: 27 July 2012 10:55:21 IST
To: Yves Savourel <ysavourel@enlaso.com>
Cc: public-multilingualweb-lt@w3.org

+1. The nodelist-with-its-information.xml is based on what we did for the ITS 1.0 test suite, but making this simpler sounds like a good plan.

Felix
2012/7/27 Yves Savourel <ysavourel@enlaso.com>
Hi Felix, Dave, all,

Just a few notes about testing:

I was starting to look at producing output similar to Felix's nodelist-with-its-information.xml, and I was wondering whether outputting something more easily comparable would be simpler.

You need special tools to compare two XML documents (to ignore whitespace, etc.), whereas a simple tab-delimited list of nodes and the expected info (outputType, output) can be compared very easily. Just a thought.

Another thought: we should probably sort the list of attribute nodes on output, since different engines will give you the attributes in different orders.
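Both ideas — a flat tab-delimited node list instead of XML, and sorted attribute nodes — could look something like this sketch (the path syntax and column layout here are illustrative assumptions, not an agreed format):

```python
import xml.etree.ElementTree as ET

def dump_nodes(xml_text: str) -> str:
    """Serialize an XML document as a line-oriented, tab-delimited node list.

    Attribute nodes are sorted by name, so two engines that report
    attributes in different orders still produce identical dumps that
    can be compared with a plain text diff.
    """
    root = ET.fromstring(xml_text)
    lines = []

    def walk(elem, path):
        lines.append(f"{path}\t{(elem.text or '').strip()}")
        # Attribute order is not significant in XML and varies by parser,
        # so emit attributes in sorted order.
        for name in sorted(elem.attrib):
            lines.append(f"{path}/@{name}\t{elem.attrib[name]}")
        for i, child in enumerate(elem):
            walk(child, f"{path}/{child.tag}[{i}]")

    walk(root, f"/{root.tag}")
    return "\n".join(lines)
```

With output like this, each line pairs a node path with its value, so the comparison against the expected output reduces to an ordinary line diff rather than an XML-aware comparison.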

Cheers,
-ys







-- 
Felix Sasaki
DFKI / W3C Fellow
Received on Monday, 30 July 2012 20:06:52 UTC
