Re: ByteString in Web IDL

On Jul 9, 2013, at 21:07, Boris Zbarsky <bzbarsky@MIT.EDU> wrote:

> On 7/9/13 11:58 PM, Norbert Lindenberg wrote:
>> Why do Web IDL and XMLHttpRequest need ByteString [1, 2]?
> 
> Because of legacy API, basically.  New APIs should not be using ByteString.

So there's a new fundamentally broken type in a specification that's the foundation for all Web APIs, just to make it easier to describe a few semi-broken legacy APIs elsewhere, and you hope that nobody else will use that new type? That doesn't seem like a sound way to write specifications. If XMLHttpRequest needs weird behavior for legacy reasons, then that should be explained in the XMLHttpRequest spec.

>> A default conversion using ISO 8859-1 seems misguided - in general, today's web standards are not shy about recommending UTF-8.
> 
> I think thinking of this as a "conversion using ISO-8859-1" is somewhat wrong.  This is a case where an ES string is not being used as an actual Unicode string but as something more like Uint16Array.

Which was mostly wrong (though not uncommon) in the past, and is even more wrong, and unnecessary, going forward. And the specification gives readers no indication that this is the intended interpretation.
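
To spell out what that interpretation means in practice: as far as I can tell, the conversion amounts to something like this (my reading of the Web IDL algorithm, sketched in ES, not spec text):

    // Sketch of ES-string-to-ByteString conversion as I understand it:
    // code units are reinterpreted as byte values, not decoded as text.
    function toByteString(value) {
      var s = String(value);                  // ToString
      var bytes = new Uint8Array(s.length);
      for (var i = 0; i < s.length; i++) {
        var unit = s.charCodeAt(i);           // UTF-16 code unit
        if (unit > 255) {
          throw new TypeError("not a valid ByteString");
        }
        bytes[i] = unit;                      // code unit reused as a byte
      }
      return bytes;
    }

    // "é" (U+00E9) silently becomes the byte 0xE9 (the "ISO 8859-1" flavor);
    // "€" (U+20AC) throws a TypeError.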

>> HTTP method and header names, BTW, are clearly specified as containing only ASCII characters [7, 8], and so can be represented as DOMString, with exceptions if the strings contain any non-ASCII characters.
> 
> To the extent that we're sure no server-side stuff is violating the HTTP specification here, yes.  I, personally, am sure that there _are_ violations out there.  Whether they're used with XHR is an open question.

In method and header names? Or do you mean in status reasons and header values? It seems HTTPbis relegated non-ASCII text in the latter to obsolete grammar because violations of the old rules were common there, but it kept the ASCII-only rules for method and header names.
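
For concreteness, the DOMString-based handling of method and header names that I suggested above would amount to something like this (a sketch of the idea, not proposed spec text):

    // Sketch: take a DOMString and reject anything outside the ASCII
    // "token" characters that HTTP allows in method and header names.
    function validateHttpToken(name) {
      // tchar per HTTPbis: ASCII letters, digits, and a few punctuation marks.
      if (!/^[!#$%&'*+\-.^_`|~0-9A-Za-z]+$/.test(name)) {
        throw new SyntaxError("invalid HTTP token: " + name);
      }
      return name;
    }

No byte-level type is needed here, since the grammar already confines these names to ASCII.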

Norbert

Received on Wednesday, 10 July 2013 04:35:37 UTC