- From: Terje Bless <link@tss.no>
- Date: Wed, 9 May 2001 04:11:48 +0200
- To: W3C Validator <www-validator@w3.org>
[ Taking Liam out of CC since I know he's on w-v. ]

On 08.05.01 at 21:41, Liam Quinn <liam@htmlhelp.com> wrote:

>On Wed, 9 May 2001, Terje Bless wrote:
>
>>Hmmm. Maybe Björn or Sean can clarify a bit, but I don't really see any
>>big problems provided the charset used is duly registered with IANA and
>>marked as suitable for use as a MIME encoding.
>
>The user agent may not support the encoding, as is commonly the case with
>windows-1252 on platforms other than Windows and Mac.

Yes, this is a good argument for authors to limit their use of esoteric
charsets. However, the spec does punt available charsets to IANA, and IANA
refers to RFC 2978 <URL:http://www.ietf.org/rfc/rfc2978.txt> and
<URL:http://www.iana.org/assignments/character-sets>. A User Agent that
does not expect this and deal with it in a reasonable manner cannot be
said to be fully compliant with the spec. IMO, of course. User Agents have
considerable difficulty with UNICODE!

Having just checked the WCAG, it does not deal specifically with character
encoding issues. The general sense of it is the same as the general case:
use the minimum level of technology that will get the job done. For
charset issues, this would be ISO-8859-* where possible and UTF-8
otherwise.

In particular, windows-1252 is actually an acceptable compromise in this
case because it is more accessible than the equivalent UNICODE; cf. WCAG
Guideline #10 <URL:http://www.w3.org/TR/WCAG10/#gl-interim-accessibility>:
"Use interim solutions".

It's also worth noting that the WCAG has completely different priorities
than common wisdom. It explicitly takes into account browser bugs because
the bugs exist and must be dealt with in the real world. This as opposed
to our little Ivory Tower here where we disregard _all_ implementations in
favour of a theoretical model of how this /should/ be implemented.
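[ Editorial illustration, not part of the original mail: a minimal Python sketch of the charset handling the thread argues for, i.e. honour the declared, IANA-registered charset when the platform knows it, and otherwise fall back toward windows-1252 and UTF-8 rather than failing. Function name and fallback order are assumptions chosen for the example. ]

    # Sketch only: decode a document body using its declared MIME charset,
    # with hedged fallbacks as discussed above.
    import codecs

    def decode_body(raw: bytes, declared_charset=None) -> str:
        """Try the declared charset first (if the local platform supports it),
        then windows-1252 (a practical superset of ISO-8859-1), then UTF-8."""
        candidates = []
        if declared_charset:
            candidates.append(declared_charset)
        candidates += ["windows-1252", "utf-8"]

        for name in candidates:
            try:
                codecs.lookup(name)      # is this charset known locally?
                return raw.decode(name)  # strict decode; may raise
            except (LookupError, UnicodeDecodeError):
                continue

        # Last resort: never fail outright, just mark undecodable bytes.
        return raw.decode("utf-8", errors="replace")

    # Example: 0x93/0x94 are curly quotes in windows-1252 but invalid UTF-8.
    print(decode_body(b"\x93quoted\x94", "windows-1252"))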
Received on Tuesday, 8 May 2001 22:14:48 UTC