- From: Alexey Feldgendler <alexey@feldgendler.ru>
- Date: Mon, 04 Jun 2007 13:17:58 +0200
On Mon, 04 Jun 2007 12:34:56 +0200, Henri Sivonen <hsivonen at iki.fi> wrote:

>> Including it in a few encoding detection algorithms is no big deal on
>> us implementers: as the spec stands we aren't required to support it
>> anyway. All the spec requires is that we include it within our encoding
>> detections (so, if we don't support it, we can then reject it).

> What's the right thing for an implementation to do when UTF-32 is not
> supported? Decode as Windows-1252? Does that make sense?

Seems like a general question: what's the right thing to do when the document's encoding is not supported? There isn't a reasonable fallback for every encoding.

Also, even for those encodings for which a single-byte encoding like Windows-1252 can be a reasonable fallback, it doesn't seem wise to me to mandate Windows-1252 (or any other fixed encoding) as the fallback. Some software, especially in devices, already exists that supports only one or a few encodings, namely the most important ones in the local market (e.g. Japanese encodings in devices sold in Japan).

-- 
Alexey Feldgendler <alexey at feldgendler.ru> [ICQ: 115226275]
http://feldgendler.livejournal.com
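[Editorial note: the behavior under discussion, detecting UTF-32 during sniffing but rejecting it when no decoder is available, can be sketched roughly as follows. This is a minimal illustration in Python, not from any spec; the function names and the caller-supplied fallback are hypothetical, and it only covers BOM-based detection.]

```python
import codecs
from typing import Optional


def sniff_bom(data: bytes) -> Optional[str]:
    """Return an encoding label based on a leading byte-order mark, if any.

    UTF-32 must be checked before UTF-16: the UTF-32 LE BOM
    (FF FE 00 00) starts with the UTF-16 LE BOM (FF FE).
    """
    if data.startswith(codecs.BOM_UTF32_LE) or data.startswith(codecs.BOM_UTF32_BE):
        return "utf-32"
    if data.startswith(codecs.BOM_UTF8):
        return "utf-8"
    if data.startswith(codecs.BOM_UTF16_LE) or data.startswith(codecs.BOM_UTF16_BE):
        return "utf-16"
    return None


def choose_encoding(data: bytes, fallback: str = "windows-1252") -> str:
    """Detect the encoding; if this runtime has no decoder for it,
    reject it and use the caller-supplied fallback rather than a
    hard-coded one (a device might prefer e.g. Shift_JIS here)."""
    detected = sniff_bom(data) or fallback
    try:
        codecs.lookup(detected)  # is a decoder actually available?
    except LookupError:
        return fallback
    return detected
```

The point of the `fallback` parameter is exactly the objection raised above: the spec could require that an unsupported detected encoding be rejected, without mandating which encoding a given implementation then falls back to.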
Received on Monday, 4 June 2007 04:17:58 UTC