
Re: The use of binary data in any part of HTTP 2.0 is not good

From: Nico Williams <nico@cryptonector.com>
Date: Sun, 20 Jan 2013 18:01:41 -0600
Message-ID: <CAK3OfOiXsjKeuwmX+1JuNL7u8fceQxY0nczjy=6HiBWpgVqYxA@mail.gmail.com>
To: William Chan (陈智昌) <willchan@chromium.org>
Cc: HTTP Working Group <ietf-http-wg@w3.org>
On Sun, Jan 20, 2013 at 5:48 PM, William Chan (陈智昌)
<willchan@chromium.org> wrote:
> [...] Yes, maybe some humans will internalize a binary encoding of
> headers and be able to grok hexdumps, but to the vast majority of
> people, it's basically the same.

Nah, we have plenty of packet capture parsers (Netmon, Wireshark,
tcpdump, snoop, ...).  Wireshark, in particular, is very easy to write
new plugins for, and it's portable.
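
For the curious, a dissector plugin really isn't much code.  Below is
a rough sketch in C of the sort of minimal dissector the Wireshark
developer guide walks you through; the "foo" protocol, its one field,
and the TCP port are all made up for illustration, and exact API
details vary across Wireshark versions:

  /* Minimal Wireshark dissector sketch, modeled on the "foo" example
   * in the Wireshark developer guide.  The protocol, field, and port
   * are hypothetical; API details vary across Wireshark versions. */
  #include "config.h"
  #include <epan/packet.h>

  #define FOO_PORT 7777            /* made-up TCP port */

  static int proto_foo = -1;
  static int hf_foo_type = -1;     /* one made-up 1-byte field */
  static gint ett_foo = -1;

  static int
  dissect_foo(tvbuff_t *tvb, packet_info *pinfo, proto_tree *tree,
              void *data _U_)
  {
      col_set_str(pinfo->cinfo, COL_PROTOCOL, "FOO");
      col_clear(pinfo->cinfo, COL_INFO);

      proto_item *ti = proto_tree_add_item(tree, proto_foo, tvb,
                                           0, -1, ENC_NA);
      proto_tree *foo_tree = proto_item_add_subtree(ti, ett_foo);
      /* Decode the first byte as a "type" field. */
      proto_tree_add_item(foo_tree, hf_foo_type, tvb, 0, 1,
                          ENC_BIG_ENDIAN);
      return tvb_captured_length(tvb);
  }

  void
  proto_register_foo(void)
  {
      static hf_register_info hf[] = {
          { &hf_foo_type,
            { "Type", "foo.type", FT_UINT8, BASE_DEC,
              NULL, 0x0, NULL, HFILL } }
      };
      static gint *ett[] = { &ett_foo };

      proto_foo = proto_register_protocol("Foo Protocol", "FOO", "foo");
      proto_register_field_array(proto_foo, hf, array_length(hf));
      proto_register_subtree_array(ett, array_length(ett));
  }

  void
  proto_reg_handoff_foo(void)
  {
      dissector_handle_t handle =
          create_dissector_handle(dissect_foo, proto_foo);
      dissector_add_uint("tcp.port", FOO_PORT, handle);
  }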

I'm with Roberto too: it's not really true that textual protocols are
easier to debug, at least not now that we have extensible packet
capture inspection tools.  Further, textual protocols may inhibit the
creation of dissectors for them ("it's text already, what do you need
a dissector for?"), which makes them harder to inspect than binary
protocols.

We can probably apply a lot of minimal encodings of header values (and
header names) in a textual way, but the result would be nearly as
incomprehensible (without tools) to a human as a hex dump of a binary
protocol.  So merely trying to do the best we can while retaining a
textual nature seems likely to yield either few gains or a protocol
that's as [in]scrutable as a binary one.
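
To make that concrete, here is a toy sketch (my own illustration, not
anyone's actual proposal) of such a "textual" minimal encoding: names
and values that appear in a shared static table get replaced by
decimal indices.  The output is still pure ASCII, but without the
table in hand it reads about like a hex dump:

  /* Toy "textual" header encoding: tokens found in a shared static
   * table become decimal indices; everything else is sent literally.
   * The table contents here are made up for illustration. */
  #include <stdio.h>
  #include <string.h>

  static const char *table[] = {
      ":method", "GET", ":path", "/index.html",
      "accept-encoding", "gzip, deflate", "user-agent",
  };
  #define TABLE_LEN (sizeof(table) / sizeof(table[0]))

  /* Print a token as its table index if present, else as a literal. */
  static void emit(const char *tok)
  {
      for (size_t i = 0; i < TABLE_LEN; i++) {
          if (strcmp(table[i], tok) == 0) {
              printf("%zu", i);
              return;
          }
      }
      printf("'%s'", tok);
  }

  int main(void)
  {
      const char *headers[][2] = {
          { ":method", "GET" },
          { ":path", "/index.html" },
          { "accept-encoding", "gzip, deflate" },
          { "user-agent", "Mozilla/5.0" },
      };
      for (size_t i = 0; i < 4; i++) {
          emit(headers[i][0]);
          putchar(':');
          emit(headers[i][1]);
          putchar(' ');
      }
      /* Prints: 0:1 2:3 4:5 6:'Mozilla/5.0'
       * -- ASCII, but meaningless without the shared table. */
      putchar('\n');
      return 0;
  }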

Finally, as others have pointed out, human-readable textual protocols
are likely to allow lots of variations in how the same thing can be
written, which complicates parsers and is a known source of bugs.
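
To give one concrete instance from HTTP/1.1 itself (my own toy
listing): all of the following byte sequences carry the same header,
and an interoperable parser has to cope with every one of them:

  /* Four byte-level spellings of the same HTTP/1.1 header: the field
   * name is case-insensitive, optional whitespace may surround the
   * value, and obsolete line folding continues a value onto the next
   * line.  Each variation is another branch in every parser. */
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      const char *variants[] = {
          "User-Agent: Mozilla/5.0 (compatible)\r\n",
          "user-agent:Mozilla/5.0 (compatible)\r\n",      /* case, no SP */
          "USER-AGENT:\tMozilla/5.0 (compatible)  \r\n",  /* tab, trailing WS */
          "User-Agent: Mozilla/5.0\r\n (compatible)\r\n", /* obs-fold */
      };
      size_t n = sizeof(variants) / sizeof(variants[0]);
      for (size_t i = 0; i < n; i++)
          printf("variant %zu: %zu bytes on the wire\n",
                 i, strlen(variants[i]));
      return 0;
  }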

Nico