
Questions about length unit tests

From: Dan Kennedy <danielk1977@gmail.com>
Date: Wed, 31 Oct 2007 18:35:31 +0700
To: public-css-testsuite@w3.org
Cc: KOBAYASI Hiroaki <hkoba@t3.rim.or.jp>
Message-Id: <1193830532.10019.30.camel@linux-7qa0.site>


I'm using an HTML4 build of the test suite to test Hv3, the
Tcl/Tk web browser. I've been able to find and fix many bugs
already. Thanks!

I checked out a fresh copy today. My first question is about this test:


What encoding should the UA assume this test uses?

For me, it only works with iso-8859-1, not utf-8. The problem is
that in the Ahem font, the byte sequence 0xC3, 0x89 produces a single
glyph with a height of about 0.8ex, not the 1ex required. With
iso-8859-1, I get two glyphs, each 1ex high (and the test passes).
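The one-glyph-versus-two-glyphs difference is easy to confirm outside a browser. A quick sketch in Python (my own illustration, not part of the test suite):

```python
# The same two bytes decode to one character under utf-8
# and to two characters under iso-8859-1.
seq = b"\xc3\x89"

as_utf8 = seq.decode("utf-8")          # 'É' (U+00C9), a single code point
as_latin1 = seq.decode("iso-8859-1")   # U+00C3 'Ã' followed by U+0089, two code points

print(len(as_utf8))    # 1
print(len(as_latin1))  # 2
```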

Then, in this test:


inside the <div class="zero"> block we have the bytes 0xC2, 0xA0, which
encode a non-breaking space in utf-8 (as the author intended), but not
in iso-8859-1. So I'm wondering: is the UA supposed to auto-detect
this? How does it know the encoding of each individual test file?
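As a sanity check of the byte-level claim (again my own sketch, not from the test file): U+00A0 NO-BREAK SPACE encodes as the two bytes 0xC2, 0xA0 in utf-8, and misreading those same bytes as iso-8859-1 yields two characters instead of one:

```python
nbsp = "\u00a0"                      # U+00A0 NO-BREAK SPACE
utf8_bytes = nbsp.encode("utf-8")    # the two-byte sequence 0xC2 0xA0

# Misinterpreted as iso-8859-1, the same bytes become two characters:
# 'Â' (0xC2) followed by 0xA0, which iso-8859-1 also maps to NBSP.
misread = utf8_bytes.decode("iso-8859-1")

print(utf8_bytes)     # b'\xc2\xa0'
print(len(misread))   # 2
```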

Also, in the same file (t040302-c61-rel-len-00-b-ag.htm), we have

   <div class="one"> X </div>
   <div class="two"> X </div>

and CSS:

   .one {margin-left: 3em;}
   .two {margin-left: 3.75ex;}

where the author intends that the two divs produce the same output.
I would have expected the 'X' glyph in div "two" to be 0.75ex to
the right of the one in div "one" (and that is what the browsers I
have tried do). What am I missing here?

Received on Thursday, 1 November 2007 01:52:53 UTC

This archive was generated by hypermail 2.3.1 : Tuesday, 6 January 2015 21:13:17 UTC