
Re: The use of binary data in any part of HTTP 2.0 is not good

From: Nico Williams <nico@cryptonector.com>
Date: Tue, 22 Jan 2013 03:04:04 -0600
Message-ID: <CAK3OfOhKYLueTfkZXFNQAmU1Q+nPUYwWXgj9xTjCa7jOH7EPBQ@mail.gmail.com>
To: Amos Jeffries <squid3@treenet.co.nz>
Cc: ietf-http-wg@w3.org
On Tue, Jan 22, 2013 at 2:55 AM, Amos Jeffries <squid3@treenet.co.nz> wrote:
> Three possible reasons for keeping it big-endian:
>
> 1) the existing library functions htons() and friends are big-endian.
> Don't underestimate the benefit to developers, during the HTTP/2 rollout,
> of using well-known functionality instead of having to locate uncommon
> little-endian converters.
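
In code terms, reason #1 boils down to being able to write something like
this with only the classic socket-header converters (a minimal sketch; the
helper name and buffer layout are made up for illustration, not taken from
any draft):

    #include <arpa/inet.h>   /* htonl() */
    #include <stdint.h>
    #include <string.h>

    /* Hypothetical helper: store a 32-bit stream ID into a frame buffer
     * in network (big-endian) byte order using only htonl(). */
    static void put_stream_id(uint8_t *buf, uint32_t stream_id)
    {
        uint32_t be = htonl(stream_id);  /* host order -> big-endian */
        memcpy(buf, &be, sizeof be);     /* memcpy avoids unaligned stores */
    }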

I'm a fan of big-endian (for your reason #2), but this is not really a
good reason for it any longer.  By now we have lots of libraries for
byte-swapping that are better than hton*() and ntoh*().
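
For instance, plenty of codebases skip hton*()/ntoh*() entirely and read
fixed-width fields with plain shifts, which behaves identically on every
host and needs no socket headers (a sketch, with an invented helper name):

    #include <stdint.h>

    /* Read a 32-bit big-endian field; endian-agnostic on any host. */
    static uint32_t get_be32(const uint8_t *p)
    {
        return ((uint32_t)p[0] << 24) |
               ((uint32_t)p[1] << 16) |
               ((uint32_t)p[2] <<  8) |
                (uint32_t)p[3];
    }

Reversing the indices gives the little-endian variant, so "uncommon
little-endian converters" are not much of a hurdle either.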

> That said, I don't have a preference.

Little-endian hosts are arguably more common, so we should go with that.
But then again, mobile devices seem to be causing an increase in
heterogeneity, so maybe not.  I don't really care.

We could also go with receiver-makes-right.  In that case every
message (request, response) would include a BOM, and the receiver
would swap bytes as needed.  This strikes me as fair, if perhaps not
compelling.
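
Roughly, receiver-makes-right would look something like this (a sketch
only; the BOM value and helper names are invented for illustration):

    #include <stdint.h>

    #define MSG_BOM 0xFEFFu   /* invented per-message byte-order mark */

    static uint16_t swap16(uint16_t v)
    {
        return (uint16_t)((v << 8) | (v >> 8));
    }

    /* Receiver-makes-right: the sender writes fields in its native order
     * and leads with a BOM; the receiver swaps only when the BOM arrives
     * byte-reversed. */
    static uint16_t fix16(uint16_t wire, uint16_t received_bom)
    {
        return (received_bom == MSG_BOM) ? wire : swap16(wire);
    }

The appeal is that the sender never converts anything; the costs are a
swap path in every receiver and a couple of octets of BOM per message.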

Nico
--
Received on Tuesday, 22 January 2013 09:04:31 GMT
