W3C home > Mailing lists > Public > ietf-http-wg@w3.org > April to June 2012

Re: Re[4]: multiplexing -- don't do it

From: 陈智昌 <willchan@chromium.org>
Date: Sun, 1 Apr 2012 23:50:24 +0200
Message-ID: <CAA4WUYgLuu9tGGHhcQV+QBkY=VTF-U23M+LXoAA4e9H=rB0k9g@mail.gmail.com>
To: "Adrien W. de Croy" <adrien@qbik.com>
Cc: Peter L <bizzbyster@gmail.com>, Willy Tarreau <w@1wt.eu>, Mike Belshe <mike@belshe.com>, "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>
I agree with Adrien here. Moreover, as Mike already noted elsewhere,
Chromium has an about:net-internals frontend that exposes all of this
debugging information. It does not need to be opened before the
compression context is established, since it receives the decompressed
data from the network layer, which obviously has the compression context.
Other software could do similar things. Therefore, I don't share Peter's
concern here.
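To make this concrete, here is a minimal Python sketch (not Chromium's actual code; `Connection`, `attach_debugger`, and `on_header_block` are invented names) of why an in-process debugging frontend doesn't need the start of the stream: the network layer owns the per-connection decompression context, so a listener attached mid-connection still receives plaintext headers.

```python
import zlib

class Connection:
    """Toy model: the network layer keeps the per-connection
    decompression context and hands decompressed header blocks
    to any attached debug listeners."""
    def __init__(self):
        self._inflater = zlib.decompressobj()  # lives as long as the connection
        self._listeners = []

    def attach_debugger(self, callback):
        # May be called at any time, even mid-connection.
        self._listeners.append(callback)

    def on_header_block(self, compressed):
        headers = self._inflater.decompress(compressed)
        for cb in self._listeners:
            cb(headers)
        return headers

# Simulate a sender sharing one compression context across two blocks.
deflater = zlib.compressobj()
block1 = deflater.compress(b"GET / HTTP/1.1\r\n") + deflater.flush(zlib.Z_SYNC_FLUSH)
block2 = deflater.compress(b"GET /favicon.ico HTTP/1.1\r\n") + deflater.flush(zlib.Z_SYNC_FLUSH)

conn = Connection()
conn.on_header_block(block1)       # no debugger attached yet
seen = []
conn.attach_debugger(seen.append)  # attached after the context already exists
conn.on_header_block(block2)
assert seen == [b"GET /favicon.ico HTTP/1.1\r\n"]
```

The late-attached listener decodes the second block fine, because the decompression state was maintained by the layer that saw the whole stream.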

On Sun, Apr 1, 2012 at 11:39 PM, Adrien W. de Croy <adrien@qbik.com> wrote:

>  Hi Peter
>  In my experience, the reason packet captures get created in the first
> place is that someone has noticed a _recurring_ problem and is trying
> to debug it / track it down.
>  Therefore it may take a bit more work to get a full capture that goes
> back far enough, but it is usually possible, even if initial captures don't.
>  Our proposal for 2.0 doesn't apply gzip to the stream, but it does
> cache headers across a hop, so it could be said to suffer from a similar
> problem: depending on how far back a capture goes, you may miss the
> transfer of some headers.
>  It could probably have a mode to disable this per-hop caching though.
>  Adrien
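A greatly simplified sketch of the kind of per-hop header caching described above (hypothetical names; no eviction, deletion, or error handling): each side of the hop keeps a table of headers already transmitted on the connection, and only changed name/value pairs cross the wire.

```python
class HopHeaderCache:
    """One instance per side of a hop; sender and receiver stay in sync
    because both update their table from the same deltas."""
    def __init__(self):
        self._known = {}

    def encode(self, headers):
        # Send only pairs the peer hasn't already seen with this value.
        delta = {k: v for k, v in headers.items() if self._known.get(k) != v}
        self._known.update(delta)
        return delta

    def decode(self, delta):
        self._known.update(delta)
        return dict(self._known)

sender, receiver = HopHeaderCache(), HopHeaderCache()
req1 = {"host": "example.com", "user-agent": "demo/1.0", "path": "/"}
req2 = {"host": "example.com", "user-agent": "demo/1.0", "path": "/img.png"}

wire1 = sender.encode(req1)  # full set: nothing cached yet
wire2 = sender.encode(req2)  # only the changed header crosses the hop

assert receiver.decode(wire1) == req1
assert receiver.decode(wire2) == req2
assert wire2 == {"path": "/img.png"}
```

Note that a capture which misses `wire1` cannot reconstruct the host header of the second request — the visibility problem acknowledged above, analogous to the gzip case.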
> ------ Original Message ------
> From: "Peter L" <bizzbyster@gmail.com>
> To: "Adrien W. de Croy" <adrien@qbik.com>
> Cc: "Willy Tarreau" <w@1wt.eu>; "Mike Belshe" <mike@belshe.com>;
> "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>
> Sent: 2/04/2012 9:33:54 a.m.
> Subject: Re: Re[2]: multiplexing -- don't do it
>> It's not just a matter of reading a binary format. SPDY's gzip needs a
>> sync point, so there will definitely be times (when the capture does not
>> go far enough back in time) when tools will not be able to decode the
>> compressed headers.
>> This is undesirable in my opinion, and it seems strange that others do not agree.
>> Thanks for putting up with reading my concerns.
>> :-)
>> Peter
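Peter's point can be reproduced directly with zlib (a sketch of the failure mode, not SPDY's exact framing): two header blocks compressed with one shared context, where a decoder that only sees the second block — i.e. a capture that started too late — cannot recover the headers.

```python
import zlib

deflater = zlib.compressobj()
block1 = deflater.compress(b"GET / HTTP/1.1\r\nHost: example.com\r\n") \
    + deflater.flush(zlib.Z_SYNC_FLUSH)
block2 = deflater.compress(b"GET /logo.png HTTP/1.1\r\nHost: example.com\r\n") \
    + deflater.flush(zlib.Z_SYNC_FLUSH)

# A decoder that saw the whole stream decodes both blocks fine.
full = zlib.decompressobj()
assert full.decompress(block1 + block2).endswith(
    b"GET /logo.png HTTP/1.1\r\nHost: example.com\r\n")

# A decoder that starts at block2 (capture began mid-stream) cannot
# recover the headers: it lacks the shared compression context.
late = zlib.decompressobj()
recovered = None
try:
    recovered = late.decompress(block2)
except zlib.error:
    pass  # typically fails outright on the missing stream header
assert recovered != b"GET /logo.png HTTP/1.1\r\nHost: example.com\r\n"
```

The second block back-references data from the first, so without the earlier bytes there is no sync point from which a tool can resume decoding.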
>> On Apr 1, 2012, at 5:25 PM, "Adrien W. de Croy" <adrien@qbik.com> wrote:
>>> I agree as well, even though it will also cause me some pain.
>>> We've been debugging binary / non-text / non-human-readable protocols
>>> for decades.  DNS and DHCP are two that spring immediately to mind.
>>> Common network analysers shouldn't have much trouble decoding what has
>>> been proposed.
>>> Adrien
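DNS is a good illustration of how routinely tools decode fixed binary layouts: its 12-byte message header (RFC 1035, section 4.1.1) unpacks with a single format string. A minimal sketch:

```python
import struct

def parse_dns_header(data: bytes) -> dict:
    """Decode the fixed 12-byte DNS message header (RFC 1035, 4.1.1)."""
    ident, flags, qdcount, ancount, nscount, arcount = \
        struct.unpack("!6H", data[:12])
    return {
        "id": ident,
        "qr": flags >> 15,               # 0 = query, 1 = response
        "opcode": (flags >> 11) & 0xF,
        "rcode": flags & 0xF,
        "questions": qdcount,
        "answers": ancount,
        "authority": nscount,
        "additional": arcount,
    }

# A typical response header: id 0x1234, QR + RD + RA set, 1 question, 1 answer.
header = struct.pack("!6H", 0x1234, 0x8180, 1, 1, 0, 0)
parsed = parse_dns_header(header)
assert parsed["qr"] == 1 and parsed["questions"] == 1 and parsed["answers"] == 1
```

A stateless fixed layout like this can be decoded from any point in a capture — the contrast with a stateful compressed stream is the crux of this thread.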
>>> ------ Original Message ------
>>> From: "Willy Tarreau" <w@1wt.eu>
>>> To: "Mike Belshe" <mike@belshe.com>
>>> Cc: "Peter L" <bizzbyster@gmail.com>; "ietf-http-wg@w3.org"
>>> <ietf-http-wg@w3.org>
>>> Sent: 1/04/2012 4:39:43 p.m.
>>> Subject: Re: multiplexing -- don't do it
>>>> On Fri, Mar 30, 2012 at 03:22:12PM +0200, Mike Belshe wrote:
>>>>> What is "transparency on the wire"?  You mean an ascii protocol that
>>>>> you can read?  I don't think this is a very interesting goal, as most
>>>>> people don't look at the wire.
>>>> I agree with you here Mike, despite being used to looking at network
>>>> captures all day and testing proxies with "printf|netcat" at both ends.
>>>> But we must admit that if developers need tools, they will develop
>>>> their tools. Having an HTTP option for netcat would work well, or even
>>>> having a 1.1-to-2.0 and 2.0-to-1.1 message converter on stdin/stdout
>>>> would do the trick. So I would rather lose the ability to easily debug
>>>> and have something efficient than the opposite. And it costs me a lot
>>>> to say this :-)
>>>> Willy
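The converter Willy describes could be as small as this toy sketch (a hypothetical length-prefixed framing chosen purely for illustration — not any actual 2.0 proposal): it turns an HTTP/1.1 header section into binary frames of a 2-byte length plus the line's bytes, and a matching tool turns them back into text.

```python
import struct

def http11_to_frames(head: bytes) -> bytes:
    """Encode each header line as a 2-byte big-endian length + bytes
    (toy framing for illustration only)."""
    out = b""
    for line in head.rstrip(b"\r\n").split(b"\r\n"):
        out += struct.pack("!H", len(line)) + line
    return out

def frames_to_http11(blob: bytes) -> bytes:
    """Inverse: turn length-prefixed frames back into header lines."""
    lines, pos = [], 0
    while pos < len(blob):
        (n,) = struct.unpack_from("!H", blob, pos)
        lines.append(blob[pos + 2 : pos + 2 + n])
        pos += 2 + n
    return b"\r\n".join(lines) + b"\r\n\r\n"

request = b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
assert frames_to_http11(http11_to_frames(request)) == request
```

Piped over stdin/stdout, a pair of such converters would let "printf|netcat"-style debugging survive a move to a binary framing, which is exactly the tooling argument made above.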
Received on Sunday, 1 April 2012 21:50:54 UTC
