On Thu, Jan 12, 2012 at 9:54 AM, Charles Pritchard <chuck@jumis.com> wrote:
> I don't see it being a particularly bad thing if vendors expose more
> translation encodings. I've only come across one project that would use
> them. Binary and utf8 handle everything else I've come across, and I can
> use them to build character maps for the rest, if I ever hit another
> strange project that needs them.

As always, the problem is that if one browser supports an encoding that no one else does, then content will be written that depends on that encoding, and is thus locked into that browser. Other browsers will then feel competitive pressure to support the encoding so that the content works in them as well. Repeat this for the union of encodings that every browser supports.

It's not necessarily true that this will happen for every single encoding. But history shows us that it will probably happen with at least *several* encodings if nothing is done to prevent it. And there's no reason to risk it, when we can legislate against it and even test for common encodings that browsers *might* support.

~TJ

Received on Thursday, 12 January 2012 18:12:05 UTC
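(For readers wondering what "build character maps" from binary data might look like in practice: below is a minimal sketch of the idea Charles alludes to, not code from either author. It assumes a hypothetical legacy single-byte encoding, reads raw bytes, and maps each byte through a hand-built lookup table; the table entries here are illustrative only, not a real encoding.)

```javascript
// Hypothetical byte-to-character map for an encoding the browser does not
// support natively. A real map would cover all 256 byte values, typically
// generated from the encoding's published code charts.
const charMap = { 0x41: "A", 0x42: "B", 0xA4: "\u20AC" };

// Decode a raw byte sequence by looking each byte up in the map,
// substituting U+FFFD (the replacement character) for unmapped bytes.
function decodeWithMap(bytes, map) {
  let out = "";
  for (const b of bytes) {
    out += map[b] ?? "\uFFFD";
  }
  return out;
}

console.log(decodeWithMap([0x41, 0x42, 0xA4], charMap)); // "AB€"
```

This is exactly the workaround that makes a long tail of native encodings unnecessary: raw bytes plus a table in script cover the rare project that needs one.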