W3C home > Mailing lists > Public > public-webapps@w3.org > January to March 2010

Re: XHR LC comment: header encoding

From: Jonas Sicking <jonas@sicking.cc>
Date: Mon, 4 Jan 2010 23:29:53 -0800
Message-ID: <63df84f1001042329j7cbf7f25w24e90af9130985e3@mail.gmail.com>
To: Anne van Kesteren <annevk@opera.com>
Cc: Boris Zbarsky <bzbarsky@mit.edu>, Julian Reschke <julian.reschke@gmx.de>, WebApps WG <public-webapps@w3.org>
On Mon, Jan 4, 2010 at 9:51 PM, Anne van Kesteren <annevk@opera.com> wrote:
> On Mon, 04 Jan 2010 21:57:34 +0100, Boris Zbarsky <bzbarsky@mit.edu> wrote:
>>
>> If we don't have a requirement to preserve any possible JS string via this
>> API, then we probably have more flexibility.
>
> I don't think we have that requirement.
>
> I tested Opera a bit further and it seems to simply remove the first byte of
> a 16-bit code unit on setting. So e.g. U+FFFD becomes FD and U+033A becomes
> 3A. (This seems to match what you call byte-inflation.) I personally quite
> like this. It is very predictable and allows you to submit any valid HTTP
> header.
>
> If Gecko can switch back to this behavior as well, other browsers are
> probably willing to follow. Unless there are strong objections I will define
> this behavior in the specification, i.e. byte-inflation for both setting and
> getting headers.
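For clarity, the setting behavior Anne describes could be sketched like this: each UTF-16 code unit of the JS string keeps only its low byte. The function name is purely illustrative, not from any spec or implementation:

```javascript
// Illustrative sketch of the "byte-inflation" setter behavior: keep only
// the low byte of each UTF-16 code unit, so U+FFFD -> 0xFD and
// U+033A -> 0x3A.
function headerBytesFromJSString(value) {
  const bytes = [];
  for (let i = 0; i < value.length; i++) {
    bytes.push(value.charCodeAt(i) & 0xFF); // drop the upper byte
  }
  return bytes;
}

console.log(headerBytesFromJSString("\uFFFD")); // [ 253 ]  i.e. [0xFD]
console.log(headerBytesFromJSString("\u033A")); // [ 58 ]   i.e. [0x3A]
```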

Wouldn't it then be better to throw for any non-ASCII characters? That
way we don't restrict ourselves for when (if?) the IETF defines an
encoding for HTTP headers.

At the very least, throwing if the upper byte is non-zero seems like
the right thing to do to prevent silent data loss.

/ Jonas
Received on Tuesday, 5 January 2010 07:30:46 GMT

This archive was generated by hypermail 2.3.1 : Tuesday, 26 March 2013 18:49:36 GMT