
Re: String to ArrayBuffer

From: Charles Pritchard <chuck@jumis.com>
Date: Thu, 12 Jan 2012 10:09:06 -0800
Message-ID: <4F0F21C2.10806@jumis.com>
To: "Tab Atkins Jr." <jackalmage@gmail.com>
CC: Glenn Adams <glenn@skynav.com>, Henri Sivonen <hsivonen@iki.fi>, Kenneth Russell <kbr@google.com>, James Robinson <jamesr@google.com>, Webapps WG <public-webapps@w3.org>, Joshua Bell <jsbell@google.com>
On 1/12/2012 10:03 AM, Tab Atkins Jr. wrote:
> On Thu, Jan 12, 2012 at 9:54 AM, Charles Pritchard <chuck@jumis.com> wrote:
>> I don't see it being a particularly bad thing if vendors expose more
>> translation encodings. I've only come across one project that would use
>> them. Binary and utf8 handle everything else I've come across, and I can use
>> them to build character maps for the rest, if I ever hit another strange
>> project that needs them.
> As always, the problem is that if one browser supports an encoding
> that no one else does, then content will be written that depends on
> that encoding, and thus is locked into that browser.  Other browsers
> will then feel competitive pressure to support the encoding, so that
> the content works on them as well.  Repeat this for the union of
> encodings that every browser supports.
> It's not necessarily true that this will happen for every single
> encoding.  History shows us that it will probably happen with at least
> *several* encodings, if nothing is done to prevent it.  But there's no
> reason to risk it, when we can legislate against it and even test for
> common things that browsers *might* support.

Count me as agnostic. I'm fine with simple. I'd like to see MS and Apple 
chime in on this issue.

Here's the "worst case" as I understand it being presented:

Received on Thursday, 12 January 2012 23:36:18 UTC
