- From: Tab Atkins Jr. <jackalmage@gmail.com>
- Date: Thu, 12 Jan 2012 10:03:00 -0800
- To: Charles Pritchard <chuck@jumis.com>
- Cc: Glenn Adams <glenn@skynav.com>, Henri Sivonen <hsivonen@iki.fi>, Kenneth Russell <kbr@google.com>, James Robinson <jamesr@google.com>, Webapps WG <public-webapps@w3.org>, Joshua Bell <jsbell@google.com>
On Thu, Jan 12, 2012 at 9:54 AM, Charles Pritchard <chuck@jumis.com> wrote:
> I don't see it being a particularly bad thing if vendors expose more
> translation encodings. I've only come across one project that would use
> them. Binary and utf8 handle everything else I've come across, and I can use
> them to build character maps for the rest, if I ever hit another strange
> project that needs them.

As always, the problem is that if one browser supports an encoding that no one else does, then content will be written that depends on that encoding and is thus locked into that browser. Other browsers will then feel competitive pressure to support the encoding so that the content works on them as well. Repeat this, and every browser ends up having to support the union of the encodings that any one browser supports.

It's not necessarily true that this will happen for every single encoding. History shows us that it will probably happen with at least *several* encodings if nothing is done to prevent it. But there's no reason to risk it when we can legislate against it and even test for common things that browsers *might* support.

~TJ
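[The "test for it" idea is easy to illustrate with the encoding API that later came out of this line of work: the TextDecoder constructor in the Encoding Standard throws a RangeError for labels it does not recognize, so a conformance test can assert that labels outside the spec are rejected. A minimal sketch, assuming that API; the labels probed here are purely hypothetical examples of vendor-specific encodings:]

```typescript
// Hypothetical labels a vendor might expose beyond the Encoding Standard.
// These names are illustrative only, not real registered labels.
const nonStandardLabels = ["x-vendor-legacy", "x-mac-cyrillic-variant"];

for (const label of nonStandardLabels) {
  let supported = true;
  try {
    // Per the Encoding Standard, an unrecognized label must throw a RangeError.
    new TextDecoder(label);
  } catch (e) {
    supported = false;
  }
  // A browser that accepts an out-of-spec label fails the check.
  console.log(`${label}: ${supported ? "FAIL - extra encoding exposed" : "ok - rejected"}`);
}
```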