Re: The use of binary data in any part of HTTP 2.0 is not good

On Sun, Jan 20, 2013 at 6:47 PM, Roberto Peon <grmocg@gmail.com> wrote:

> Text formats are, surprisingly, not easier to debug in my experience.
>
> +1

Beyond debugging, text formats are actually ridiculously hard to get
consistently right - partially because they lend themselves so well to "be
liberal in what you receive" policies. HTTP/1 has suffered from CRLF
injection attacks, content-length bounds-check failures, a wide variety of
line-ending interop problems, etc., all of which derive from its text
roots. Sometimes these problems are unforeseen spec issues, but sometimes
they just derive from assumptions people make because the text format
feels more intuitive than it really is.
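To make the CRLF point concrete, here is a toy Python sketch (the header names and values are illustrative, not taken from any real implementation) of how a naive text serializer lets an unvalidated value smuggle in an extra header:

```python
# Toy sketch of CRLF injection against a naive text serializer.
# Names and values are hypothetical; many early HTTP/1 stacks made
# essentially this mistake.

def naive_serialize(headers):
    """Join headers as 'Name: value' lines with no validation."""
    return "".join(f"{k}: {v}\r\n" for k, v in headers)

# An attacker-controlled value containing CRLF becomes a new header line.
evil = "en\r\nSet-Cookie: session=attacker"
wire = naive_serialize([("Host", "example.com"), ("Accept-Language", evil)])
print(wire)
# The receiver now parses three headers where the sender meant two.
```

A length-prefixed binary framing has no equivalent failure mode, because the value's bytes can never be confused with the framing itself.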

While text is convenient to eyeball, it is much harder to get unambiguously
correct, especially in an open, multi-implementation environment. A 32-bit
big-endian integer is well defined and well bounded; a text string that
represents a quantity requires a lot more information to interpret
correctly.
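A quick Python sketch of the contrast (the byte strings here are made up for illustration): the binary field has exactly one parse, while a textual quantity forces a pile of decisions that lenient parsers answer differently.

```python
import struct

# A 32-bit big-endian length field is one unambiguous parse:
(length,) = struct.unpack(">I", b"\x00\x00\x01\x00")
assert length == 256

# A textual quantity invites questions: leading whitespace? a sign?
# leading zeros? trailing junk? Implementations disagree on all of them.
candidates = ["256", " 256", "+256", "0256", "2,56"]
results = []
for text in candidates:
    try:
        results.append(int(text))   # Python's int() is already stricter
    except ValueError:              # than many hand-rolled C parsers
        results.append(None)        # (atoi/sscanf accept even more)
print(results)  # [256, 256, 256, 256, None]
```

When two endpoints answer those questions differently for something like Content-Length, you get exactly the smuggling and bounds-check bugs mentioned above.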

http://blog.jgc.org/2012/12/speeding-up-http-with-minimal-protocol.html#c5703739431744738432

Frankly, I'd rather talk about byte order. IMO this is an application-level
protocol that 98% of the time is going to be consumed by little-endian
processors, and it could easily be defined that way. That of course runs
against tradition and isn't a huge deal computationally, but I'm not aware
of other arguments against giving our processors' byte-swap operation a
day off.
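For anyone who wants to see the difference spelled out, here is a small Python sketch (the 4-byte field is a made-up example, not a proposed frame layout). On a little-endian CPU, the ">" read implies a byte swap; the "<" read is a plain load:

```python
import struct

frame_len = b"\x00\x01\x00\x00"  # hypothetical 4-byte length field

be = struct.unpack(">I", frame_len)[0]  # network/big-endian read
le = struct.unpack("<I", frame_len)[0]  # little-endian read
print(be, le)  # 65536 256

# int.from_bytes makes the same choice explicit:
assert int.from_bytes(frame_len, "big") == be
assert int.from_bytes(frame_len, "little") == le
```

Same four bytes on the wire; the spec's choice of endianness just decides which decode is the no-op on the hardware actually running it.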

Received on Monday, 21 January 2013 15:04:38 UTC