
Re: The use of binary data in any part of HTTP 2.0 is not good

From: Amos Jeffries <squid3@treenet.co.nz>
Date: Tue, 22 Jan 2013 21:55:21 +1300
Message-ID: <50FE53F9.4040800@treenet.co.nz>
To: ietf-http-wg@w3.org
On 22/01/2013 4:04 a.m., Patrick McManus wrote:
> On Sun, Jan 20, 2013 at 6:47 PM, Roberto Peon <grmocg@gmail.com 
> <mailto:grmocg@gmail.com>> wrote:
>     Text formats are, surprisingly, not easier to debug in my experience.
> +1
> Beyond debugging, text formats are actually ridiculously hard to get 
> consistently right - partially because they lend themselves too well 
> to "be liberal in what you accept" policies. HTTP/1 has suffered 
> from CRLF injection attacks, content-length bounds-check failures, a 
> wide variety of line-ending interop problems, etc. - all of which are 
> derived from its text roots. Sometimes these problems are unforeseen 
> spec issues, but sometimes they are just derived from assumptions 
> people have because the text format feels more intuitive than it 
> really is.
> While text is convenient to eyeball, it is much harder to get 
> unambiguously correct, especially in an open, multiple-implementation 
> environment. 32 bits of big-endian is well defined and well bounded; a 
> text string that represents a quantity requires a lot more information 
> to correctly interpret.
> http://blog.jgc.org/2012/12/speeding-up-http-with-minimal-protocol.html#c5703739431744738432
> Frankly, I'd rather talk about byte order. IMO, this is an 
> application-level protocol that 98% of the time is going to be 
> consumed by little-endian processors and could easily be defined that 
> way. This of course runs against tradition and isn't a huge deal 
> computationally, but I'm not aware of other arguments against giving 
> the byte-swap operation of our processors a day off.

Three possible reasons for keeping it big-endian:

1) the existing library functions htons() and friends are big-endian. 
Don't underestimate the benefit to developers during the HTTP/2 rollout 
of being able to use well-known functionality instead of having to 
locate uncommon little-endian converters.
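As an aside, with the standard helpers the big-endian case really is a one-liner. A minimal sketch, assuming POSIX <arpa/inet.h> and a hypothetical 32-bit length field (the field name is illustrative, not from any draft):

```c
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>  /* htonl()/ntohl() - standard, well-known converters */

/* Write a hypothetical 32-bit length field into a frame buffer in
 * network (big-endian) byte order, portably on any host. */
static void put_length(uint8_t *buf, uint32_t length)
{
    uint32_t be = htonl(length);   /* host order -> big-endian */
    memcpy(buf, &be, sizeof(be));
}

/* Read it back out, converting big-endian -> host order. */
static uint32_t get_length(const uint8_t *buf)
{
    uint32_t be;
    memcpy(&be, buf, sizeof(be));
    return ntohl(be);
}
```

A little-endian wire format would need either compiler-specific builtins or a hand-rolled swap, since there is no equally ubiquitous "htole32()" in the traditional sockets API.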

2) big-endian is more intuitive to read. I know we are arguing for tools 
to be used by the masses, but somebody, sometime, is going to have to 
eyeball the raw binary to figure out a tricky interop problem. Let's not 
make that job harder than it has to be, because it will probably be one 
of us here doing it.

3) big-endian is more suited to streamed octet interpretation when we 
are defining data fields at less than 32-bit resolution.
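To illustrate that last point: a big-endian field of any width can be accumulated one octet at a time as bytes arrive off the wire, with the same loop serving 16-, 24-, or 32-bit fields. A sketch (the 24-bit width below is purely illustrative):

```c
#include <stdint.h>
#include <stddef.h>

/* Accumulate a big-endian integer of `nbytes` octets (1..4) as they
 * stream in: each new octet shifts the running value left by 8 bits.
 * No alignment, padding, or width-specific code is needed. */
static uint32_t read_be(const uint8_t *octets, size_t nbytes)
{
    uint32_t value = 0;
    for (size_t i = 0; i < nbytes; i++)
        value = (value << 8) | octets[i];
    return value;
}
```

A little-endian encoding of a sub-32-bit field cannot be consumed this way; the parser must know the full field width before the first octet can be placed.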

That said, I don't have a preference.

Received on Tuesday, 22 January 2013 08:55:59 UTC
