- From: Willy Tarreau <w@1wt.eu>
- Date: Mon, 2 Apr 2012 07:23:53 +0200
- To: Peter L <bizzbyster@gmail.com>
- Cc: "Adrien W. de Croy" <adrien@qbik.com>, Mike Belshe <mike@belshe.com>, "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>
Hi Peter,

On Sun, Apr 01, 2012 at 05:33:54PM -0400, Peter L wrote:
> It's not just a matter of reading a binary format. SPDY gzip needs a sync
> point. So there definitely will be times (when the capture does not go far
> enough back in time) when tools will not be able to decode the compressed
> headers.
>
> This is undesirable in my opinion. Seems strange that others do not agree.

I'm among the people who are used to plugging a USB disk in somewhere,
filling it with 500 GB of captures, then taking it back for analysis. In
such captures there are generally a few losses which make some traces
unusable. But you know, you have the same issue when you lose a segment
which contains a request or response message, or when you lose the segment
containing a chunk size in a gzip-encoded chunked payload. In practice,
network troubleshooting *rarely* happens on a single session, and in
general you want the full stream to be able to make a diagnosis.

I agree with you that when compressing, it's harder to resync than it is
with clear text. Right now it's just "tcpdump -X | less": scroll and stop
when you notice something that says "HTTP". But quite frankly, I'd rather
spend my time adapting the analysis tools than continuing to optimize HTTP
parsers at the assembly level to save every single CPU cycle in order to
support rare, useless corner cases (e.g. a missing space after the colon,
or a missing CR before the LF).

> Thanks for putting up with reading my concerns.

Your concerns are valid, but they will eventually be addressed as the
protocol is standardized.

Regards,
Willy
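
To make the sync-point issue concrete, here is a minimal sketch using
Python's zlib module. It is not SPDY's actual implementation: the
dictionary below is a placeholder for the real SPDY compression dictionary
defined in the draft, and the header blocks are simplified. It only shows
the mechanism under discussion: SPDY keeps one zlib stream per connection
direction, so a capture that starts mid-connection has lost the compression
state and the later header blocks cannot be inflated.

    import zlib

    # Placeholder for the real SPDY compression dictionary (the actual
    # bytes are specified in the SPDY draft).
    SPDY_DICT = b"optionsgetheadpostputdeletetraceacceptaccept-charset"

    def compress_blocks(blocks):
        # One zlib stream per connection direction: every header block
        # continues the same stream and ends with a SYNC_FLUSH.
        comp = zlib.compressobj(zdict=SPDY_DICT)
        return [comp.compress(blk) + comp.flush(zlib.Z_SYNC_FLUSH)
                for blk in blocks]

    blk1, blk2 = compress_blocks([b"method: GET\nurl: /a\n",
                                  b"method: GET\nurl: /b\n"])

    # A capture taken from the start of the connection decodes fine:
    d = zlib.decompressobj(zdict=SPDY_DICT)
    print(d.decompress(blk1 + blk2))

    # A capture that begins mid-connection only caught the second block;
    # the zlib stream header and the compression history are gone:
    d = zlib.decompressobj(zdict=SPDY_DICT)
    try:
        print(d.decompress(blk2))
    except zlib.error as e:
        print("cannot resync:", e)

Running this, the first decompressor prints both header blocks, while the
second fails (or emits garbage) on the very first bytes it sees. That is
exactly the situation a capture tool faces when the trace does not go far
enough back in time.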
Received on Monday, 2 April 2012 05:24:28 UTC