RE: Please check out I18N test suite

Hi Ian.
Many thanks for these comments. See my embedded notes.

> -----Original Message-----
> From: Ian Hickson [mailto:ian@hixie.ch] 
> Sent: 30 December 2003 01:03
> To: Richard Ishida
> Cc: 'Bert Bos'; public-i18n-geo@w3.org
> Subject: Re: Please check out I18N test suite
> 
> 
> On Tue, 9 Dec 2003, Richard Ishida wrote:
> >
> > I have been converting our i18n test pages to use the same model as 
> > the CSS tests.  I'd be grateful if you could look at two of 
> the tests 
> > quickly so that I can ensure I'm heading in the right direction.
> 
> Sorry it took me so long to reply. I hope my comments are 
> still useful.

Yes they are!  Sorry for taking so long to reply to your reply ;-)  And many
thanks for taking so much time over this.


> 
> Overall these are very good tests. I imagine the CSS working 
> group might want to borrow some of them for the CSS2.1 test 
> suite when we start collecting tests for that. :-)

I was hoping that might be the case. :)  I'm also interested in knowing what
you are developing that we could scrounge. :-)



One thing I should say as a preface to my comments below: a short while
after sending you my initial request for review, I spent a lot of time
considering what kind of tests we need to provide, and how I should approach
this given very real practical constraints on my time.  I'm sorry I didn't
say this earlier, since it would have helped you, but I wasn't expecting a
reply by then.  I'll number these points so I can refer back to them later:

[1] The audience for this format is content authors.  We are not testing
against a specification; we are testing so that we can say whether the
advice, warnings or observations we make in our Authoring Techniques doc
apply to various browsers.  So, for example, we are interested in whether
the scroll bar flips to the left when dir="rtl" is applied to the <html>
element, even though this is a browser quirk, because it is something our
readers will come across and want to know about (see the small sketch after
these points).

[2] One of my aims in the current approach to writing tests is to provide
pages that are educational and fairly easy to understand for people less
technical than ourselves.  This is one reason for grouping tests onto a
single page, although I've tried to design the pages so that it is not too
hard to create CSS-like tests from what we've done when the need arises.

[3] Another reason for grouping things on a page is simply that I currently
don't have the bandwidth to develop the ideal solution.  Again, I've tried,
where possible, to lean towards a design that allows small, focussed tests
to be carved out of the page.  (Things may improve, as we may be able to
bring in additional resources from a member.)  I've also tried to label
material that could be hidden from a QA engineer.
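
Just to make point [1] concrete, the scroll bar case is nothing more than
this (a rough sketch, not one of the actual test files):

    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN">
    <html dir="rtl">
      <head><title>dir="rtl" on the html element</title></head>
      <body>
        <p>With dir="rtl" on the html element, does the vertical
        scroll bar move to the left side of the window?</p>
      </body>
    </html>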



> 
> 
> > The tests I have converted so far are:
> >
> > http://www.w3.org/International/tests/test-css-lang.html
> 
> Now redirects to
>    http://www.w3.org/International/tests/sec-css-lang-1.html
> 
> The main change I would suggest on this page would be to 
> remove the parts of the tests that explain what is going on, 
> e.g. "The css says: *:lang(es) { color:green; }" and "The 
> lang and xml:lang attributes are set to "es".".

I'd like to keep these for the educational reasons mentioned above, but I
should certainly grey out some of the text.
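
For my own notes, this is roughly what I have in mind for greying out the
explanatory text (just a sketch, and it assumes the explanatory paragraphs
carry class="notes" as on the later pages):

    <style type="text/css">
      /* de-emphasise the explanatory text without removing it */
      .notes { color: gray; font-size: smaller; }
    </style>
    <p class="notes">The css says: *:lang(es) { color:green; }</p>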

> 
> Also, the tests that should "remain black" shouldn't -- if 
> the test page has tests that should go green, then they 
> should _all_ go green. The best thing to do there is make two 
> pages, one for the "positive" tests, and one for the 
> "negative" tests. In the negative tests, you style them by 
> default green, then make the rules make them red if they match.

Yep.
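
Something like this is how I picture the "negative" page working (again
just a sketch, not the final markup):

    <style type="text/css">
      /* everything starts green; a rule only turns it red if a selector
         that should NOT match actually matches */
      .test { color: green; }
      .test:lang(fr) { color: red; } /* the element below is lang="es" */
    </style>
    <p class="test" lang="es">This text should be green.</p>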

> 
> Finally, note that if a page is sent as text/html, it is 
> HTML, whether or not the DOCTYPE claims otherwise, and UAs 
> will therefore (correctly) not interpret xml:lang attributes. 
> Thus the test that checks that xml:lang is applied is wrong 
> as long as the test is sent as text/html.

Yes. I should fix that, and I should also set up some XHTML 1.1 tests and
some XHTML 1.0 tests served as XML.
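
For the record, what I have in mind for the XML-served variants is roughly
the following (a sketch; the important part is that the server sends the
file as application/xhtml+xml rather than text/html, so that UAs treat it
as XML and honour xml:lang):

    <?xml version="1.0" encoding="utf-8"?>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
        "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="es">
      <head>
        <title>xml:lang test, served as application/xhtml+xml</title>
        <style type="text/css">*:lang(es) { color: green; }</style>
      </head>
      <body>
        <p>This text should be green.</p>
      </body>
    </html>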


> 
> Same comments apply to -2 and -3.
> 
> 
> > http://www.w3.org/International/tests/test-bidi-blocks.html
> 
> Redirects to:
>    http://www.w3.org/International/tests/sec-dir-1.html
> 
> This test seems pointless -- if the user agent can do 
> whatever it wants and still claim compliance, why test it? 
> (Note that no specification requires that the UA use 
> scrollbars, even.)

See point [1] above.


> 
> 
> | http://www.w3.org/International/tests/sec-dir-2.html
> 
> Remove text like "The blocks here all inherit RTL 
> directionality from the <html> element.", or at least comment 
> it out. Most QA engineers don't care, or understand, the 
> tests -- they just want to see if the UA passes or fails. 
> Once the test is found to fail, then they will always look at 
> the source -- often, at least with tests I normally work 
> with, it turns out the test is wrong. Thus the test's claims 
> can never be trusted. :-)

Yes, I understand.  My audience is not only QA engineers, but I've tried to
make it easy to adapt the tests for them later.


> 
> Similarly, for the tests on this page, remove the sample 
> markup -- if the tester wants to know the markup, he'll look 
> at the source.
> 
> The final thing on this test page is that while reference 
> renderings are quite useful, on my system they are radically 
> different from the actual rendering, due to differences in 
> fonts, anti-aliasing, and so forth. (And since I don't read 
> Arabic, I actually am having trouble working out if the 
> characters are the same.)
> 
> Instead of a reference image, it might be better to have a 
> second way of rendering the same content, for example using 
> bidi overrides or left-to-right tables with no borders, with 
> each span in its own cell (a commonly used way of showing the 
> expected results of bidi rendering).

Unfortunately I think that bidi overrides don't necessarily work in the UA,
and table cells can be ordered differently depending on whether the UA
supports this stuff or not.
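
For reference, the table version I understand you to mean would be roughly
this (a sketch only, with uppercase Latin standing in for the right-to-left
script and each run placed by hand in the expected visual order):

    <!-- reference rendering that doesn't rely on the bidi algorithm:
         the runs sit in separate cells of a left-to-right table -->
    <table dir="ltr" border="0">
      <tr>
        <td>RTL RUN</td>
        <td>ltr run</td>
      </tr>
    </table>

My worry is that a UA with broken direction handling may also get the cell
order wrong, which is what I meant above.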


> 
> Note, as an aside, that the alt texts of the images are very 
> poor -- when read on an aural browser, the tests would sound 
> like this: "The following should look like this. 
> blablaW3Cblabla. The following should look like this. 1 2 3 
> blablaW3Cblabla blablaW3Cblabla blablaW3Cblabla". That's 
> probably not what you intended! :-) The alt text you give is 
> in fact better suited for the title attribute.

The alt text is repeated in the title attribute.  I think there is little
point in using an aural browser on most of these tests, is there?  I could
certainly be wrong, but it seems to me that the results all require visual
examination (and in the case of bidi there is nothing to test, since the
text would just be read out in logical order).


> 
> The same comments apply to -3 and -4.
> 
> 
> | http://www.w3.org/International/tests/sec-inline-bidi-1.html
> 
> Drop the <h2> headers and anything with class=notes, for the 
> same reasons as given for the previous tests. The notes about 
> using an alternative to reference rendering images and better 
> alt text apply here too.

Understood.  For QA engineers I was expecting that we would create separate
files, one per section (without the grey text or headings).


> 
> The test that says "Check the text between quotes is in the 
> same order." should just be "Check looks the same (apart from 
> font differences)." as for the others, since that applies 
> here too. Same with "Check that all characters run in the 
> same direction.", that should just be the same as the other 

Yep.  Seems I already fixed some of those, but missed the one at the end.


> tests. The exclamation mark ones make sense, since it is 
> sensible to draw attention to the likely problem in those 
> cases. However, I would suggest making two test pages for 
> this, one for the cases where the tests should just look the 
> same, and one for the exclamation mark cases. That would make 
> it easier to run through the tests quickly.
> 
> 
> | http://www.w3.org/International/tests/sec-text-transform-1.html
> 
> As per earlier comments, I would suggest removing most of the 
> introduction bit (or putting it in comments) and the 
> class=notes bits and <h2> headers.
> 
> It would probably make sense to split this test into one test 
> per block of codepoints.

Yes. For QA folks.


> 
> It might also be easier to use this test if the characters 
> were side by side, like this:
> 
>    a  a
>    b  b
>    c  c
>    d  d
> 
> ...etc. For example, using a table. This might make the test 
> long though (maybe several groups of columns?).

Mmm. Lots of work to do that :(
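
Noting down what I understand the side-by-side layout to be (a sketch; the
left cell is the expected form typed directly, the right cell is the same
letter with the transform applied, so each row should show two identical
characters):

    <table>
      <tr><td>A</td><td style="text-transform: uppercase;">a</td></tr>
      <tr><td>B</td><td style="text-transform: uppercase;">b</td></tr>
      <tr><td>C</td><td style="text-transform: uppercase;">c</td></tr>
    </table>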


> 
> 
> | http://www.w3.org/International/tests/sec-list-style-type-1.html
> 
> Split these into one test page per block.
> 
> In general, as you might guess, I'm going to suggest removing 
> most of the explanatory text in these tests. Specifically, 
> removing the "one, should show" bits is probably best, so 
> that the tests just look like:
> 
>    Each line should show the same text twice.
> 
>     a. a
>     b. b
>       a. a
>       b. b
>       c. c
>     d. c
>     e. d

Yep. Good idea.
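
So something like this per block, I think (a sketch, using lower-greek as
the example; the item text repeats the character that the list counter
should produce, so each line should show the same character twice):

    <ol style="list-style-type: lower-greek;">
      <li>&#945;</li> <!-- alpha -->
      <li>&#946;</li> <!-- beta -->
      <li>&#947;</li> <!-- gamma -->
    </ol>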


> 
> (Oops! That UA has a bug, as is obvious from the mismatched 
> letters at the end.)

Interesting.  What UA is that?


> 
> The cases that are undefined are probably not overly useful 
> in a testcase, since they don't test anything.

Yes, for QA engineers.  They are useful for content authors though.


> 
> | http://www.w3.org/International/tests/sec-idn-1.html
> | http://www.w3.org/International/tests/sec-idn-2.html
> 
> These tests are good, except I would encourage you to use a 
> destination page more like this:
> 
>    http://www.hixie.ch/tests/adhoc/html/flow/object/pass.html

> That makes it easier to determine what is going on for the tester.

Right.




> | http://www.w3.org/International/tests/sec-utf8-signature-1.html

> The second control should say "These characters should display as 
> described: A-tilde, copyright sign ..." instead of saying what they 
> should _not_ look like. :-)

Ah, actually I tried it that way originally, but the exact characters that
appear may depend on the default encoding used by the UA.

> The "Additional test", as written, is not a test, but what I call a "demo"
(it > doesn't say what _should_ happen, it merely demonstrates what _does_
happen). 

Yes.

> Since the behaviour is well defined for that test, I think it should 
> define what _should_ happen (namely, three "random" characters should 
> appear on the first line).

> (And, as for the other tests, I would generally recommend cutting back 
> on the descriptive text.)


> | http://www.w3.org/International/tests/sec-ruby-markup-1.html
> | http://www.w3.org/International/tests/sec-ruby-markup-2.html

> No new comments beyond those that are mentioned for earlier tests.


> I hope that helps. Let me know if you have any questions.


Once again, many thanks for this!  If you have a moment, I'd like to chat
with you during the Tech Plenary about how we could collaborate on future
tests, since there's likely to be some overlap.

1047E:   SHAVIAN LETTER IAN (Do I win a prize? ;)




> Cheers,
> -- 
> Ian Hickson                                      )\._.,--....,'``.    fL
> U+1047E                                         /,   _.. \   _\  ;`._ ,.
> http://index.hixie.ch/                         `._.-(,_..'--(,_..'`-.;.'
