- From: Nico Williams <nico@cryptonector.com>
- Date: Sun, 20 Jan 2013 18:01:41 -0600
- To: William Chan (陈智昌) <willchan@chromium.org>
- Cc: HTTP Working Group <ietf-http-wg@w3.org>
On Sun, Jan 20, 2013 at 5:48 PM, William Chan (陈智昌) <willchan@chromium.org> wrote:
> [...] Yes, maybe some humans will internalize a binary encoding of
> headers and be able to grok hexdumps, but to the vast majority of
> people, it's basically the same.

Nah, we have plenty of packet capture parsers (Netmon, Wireshark, tcpdump, snoop, ...). Wireshark, in particular, is very easy to write new plugins for, and it's portable.

I'm with Roberto too: it's not really true that textual protocols are easier to debug, at least not now that we have extensible packet-capture inspection tools. Further, textual protocols may inhibit the creation of dissectors for them ("it's text already, what do you need a dissector for?"), which makes them harder to inspect than binary protocols.

We could probably apply a lot of minimal encodings of header values (and header names) in a textual way, but the result would be nearly as incomprehensible (without tools) to a human as a hex dump of a binary protocol. So merely doing the best we can while retaining a textual nature seems likely to yield either few gains or a protocol that's as [in]scrutable as a binary version.

Finally, as others have pointed out, human-readable textual protocols are likely to allow lots of variations that complicate parsers, and thus to be a cause of bugs.

Nico
--
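P.S.: To make the parser-variation point concrete, here's a minimal sketch (Python; the encoding and the helper names are mine, purely illustrative). Several textually distinct forms of the same header are all valid to a lenient HTTP/1.1 parser and must be normalized, whereas a simple length-prefixed binary encoding has exactly one serialization:

```python
# Hypothetical sketch: the same header in several textual variations
# that a lenient textual-protocol parser has to reconcile.
variants = [
    b"Content-Length: 42\r\n",
    b"content-length:42\r\n",        # different case, no space after the colon
    b"CONTENT-LENGTH:   42  \r\n",   # extra whitespace around the value
]

def parse_header(line: bytes) -> tuple[str, str]:
    """Normalize one textual header line to a (name, value) pair."""
    name, _, value = line.partition(b":")
    return name.strip().lower().decode(), value.strip().decode()

# All three textual forms collapse to a single normalized pair.
assert {parse_header(v) for v in variants} == {("content-length", "42")}

# By contrast, a length-prefixed binary encoding admits exactly one
# byte sequence per header: [name_len][name][value_len][value].
def encode_binary(name: bytes, value: bytes) -> bytes:
    return bytes([len(name)]) + name + bytes([len(value)]) + value

assert encode_binary(b"content-length", b"42") == b"\x0econtent-length\x0242"
```

Every accepted textual variation is a branch some implementation will get wrong; the binary form has nothing to normalize.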
Received on Monday, 21 January 2013 00:02:08 UTC