W3C home > Mailing lists > Public > ietf-http-wg@w3.org > January to March 2012

Re: multiplexing -- don't do it

From: Adrien W. de Croy <adrien@qbik.com>
Date: Fri, 30 Mar 2012 22:52:21 +0000
To: "Roberto Peon" <grmocg@gmail.com>, "Brian Pane" <brianp@brianp.net>
Cc: "Peter L" <bizzbyster@gmail.com>, "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>
Message-Id: <em4b7f42e9-feff-4f63-a499-2019b40a63da@boist>

------ Original Message ------
From: "Roberto Peon" <grmocg@gmail.com>
>  ·         SPDY compresses HTTP headers using an LZ history based 
>  algorithm, which means that previous bytes are used to compress 
>  subsequent bytes. So any packet capture that does not include all 
>  the traffic sent over that connection will be completely opaque -- 
>  no mathematical way to decode the HTTP. Even with all the traffic, a 
>  stream decoder will be a tricky thing to build b/c packets depend on 
>  each other.
  I know there's a SPDY decoder plugin for Wireshark, but I'll defer to 
  people more knowledgeable about packet analysis tools to cover that area.
 The OP is right about this, btw. Technically it is possible that 
 you've flushed the window after 2k of completely new data, but there 
 is no guarantee and so interpreting a stream in the  middle may be 
 extremely difficult.
 Seems like a fine tradeoff for the latency savings that we get on 
 low-BW links, though.
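 The dependence described above is easy to demonstrate with plain zlib 
 (a sketch only -- the header strings and framing are illustrative, not 
 the actual SPDY wire format, and real SPDY additionally seeds zlib 
 with a fixed shared dictionary):

```python
import zlib

# One compression context persists across the whole connection, so each
# header block is compressed against the LZ history of all prior blocks.
comp = zlib.compressobj()
frame1 = comp.compress(b"GET / HTTP/1.1\r\nHost: example.com\r\n")
frame1 += comp.flush(zlib.Z_SYNC_FLUSH)
frame2 = comp.compress(b"GET /img.png HTTP/1.1\r\nHost: example.com\r\n")
frame2 += comp.flush(zlib.Z_SYNC_FLUSH)

# A decoder that captured the connection from the start decodes both:
dec = zlib.decompressobj()
first = dec.decompress(frame1)
second = dec.decompress(frame2)  # relies on frame1's LZ history

# But a capture that starts at frame2 is opaque: a fresh decompressor
# has neither the stream header nor the history, so decoding fails.
recovered = b""
try:
    recovered = zlib.decompressobj().decompress(frame2)
except zlib.error:
    pass  # e.g. "incorrect header check" -- mid-stream bytes undecodable
```

 This is exactly why a partial capture cannot be decoded in isolation, 
 and why a stream decoder must track state frame by frame.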
I think it basically means compression, or any transport-level 
transform, needs to be able to be switched off when debugging.  I have 
to analyse packet dumps of HTTP most days, as I'm sure do many others 
on this list.  We haven't yet evolved as a species to the stage where 
we don't make mistakes.
I think it's a vitally important facility for discovering 
implementation errors, which in many cases is required to resolve them.

Received on Friday, 30 March 2012 22:52:33 UTC

This archive was generated by hypermail 2.3.1 : Tuesday, 1 March 2016 11:11:01 UTC