Re: The use of binary data in any part of HTTP 2.0 is not good

Maybe it falls to me to be the voice of concern here :) Depending on
what a "debug" option entails, I'm worried about it being used simply
to disable a performance feature. As an example of how such options can
be dangerous, we've seen intermediaries strip the Accept-Encoding
header to force responses to be sent uncompressed (probably so they can
inspect the payloads more easily and cheaply), which hurts web
performance.
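
To make that concrete, a hypothetical request (the path, host and
header values here are made up) might look like this before and after
such an intermediary:

    As sent by the client:

        GET /page HTTP/1.1
        Host: example.com
        Accept-Encoding: gzip, deflate

    As forwarded by the intermediary (header stripped, so the origin
    replies with an uncompressed body):

        GET /page HTTP/1.1
        Host: example.com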

Back to the use case: if you're in a position to turn on the debug
option, is it likely that you would not also be in a position to
capture enough of the stream to decode it? I'd like to understand the
use case so I can properly weigh the benefit of such an option against
the cost I highlighted above.
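
As a rough illustration of the capture problem (a toy sketch in Python,
not the actual draft encoding): with delta-encoded headers, each frame
carries only what changed, so a decoder that missed the start of the
stream can never rebuild the full header set:

    # Toy model: each frame carries only the headers that changed
    # relative to the previous one.
    def decode_frames(frames):
        headers = {}
        for delta in frames:          # e.g. {":path": "/b"} -- changes only
            headers.update(delta)
            yield dict(headers)       # full header set at this point

    # If the capture starts mid-stream, entries established in the
    # missing frames are absent from every reconstructed message.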

On Sun, Jan 20, 2013 at 3:48 PM, Mark Nottingham <mnot@mnot.net> wrote:
>
> On 21/01/2013, at 10:38 AM, "Adrien W. de Croy" <adrien@qbik.com> wrote:
>
>>
>> The thing that will make debugging harder won't be binary vs text, but the interdependence of messages, especially when it comes to looking through debug logs for issues.
>>
>> On the wire, you already need to piece together a TCP stream to see what's going on, so having HTTP messages effectively split over multiple frames (e.g. delta encoding or compression) only becomes a problem when you don't capture enough to decode.
>>
>> I think it might be worthwhile specifying a requirement for a "debug" option for senders of binary messages that turns off all other optimisations, such as caching unchanged headers, etc. (so they are sent every time).  Just an idea.
>
> That's been brought up a few times, and the reaction has been pretty positive.
>
> Cheers,
>
>
> --
> Mark Nottingham   http://www.mnot.net/
>
>
>

Received on Sunday, 20 January 2013 23:55:46 UTC