Re: Our Schedule

If http/2 requires a header compression mechanism as complicated as HPACK,
then I'll stick with http/1.1. I'm willing to grant that *some* compression
mechanism will make things better, but I cannot accept that HPACK is the
best option... And it would seem that I'm not alone in that. Regardless,
the WG will push forward with whatever the WG chooses to push forward with;
I was merely offering my response to Mark's inquiry about the schedule.
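[For readers following the archive: the mechanism under debate, HPACK (later RFC 7541), replaces header fields the peer has already seen with small table indices. The Python sketch below illustrates only that core idea — the table contents, index assignment, and token format are simplified stand-ins, not the real wire encoding, which adds a 61-entry static table, size-based dynamic-table eviction, and Huffman-coded literals.]

```python
# Toy sketch of HPACK's central idea: a header field seen before is
# replaced by a small table index instead of being re-sent as literal
# bytes. Illustrative only -- real HPACK (RFC 7541) uses a fixed static
# table, a size-bounded dynamic table, and Huffman-coded literals.

STATIC_TABLE = {
    (":method", "GET"): 2,
    (":path", "/"): 4,
    (":scheme", "https"): 7,
}

def encode(headers, dynamic):
    """Return a list of symbolic 'wire' tokens for a header list."""
    out = []
    for name, value in headers:
        if (name, value) in STATIC_TABLE:
            out.append(("indexed-static", STATIC_TABLE[(name, value)]))
        elif (name, value) in dynamic:
            out.append(("indexed-dynamic", dynamic[(name, value)]))
        else:
            # Literal with incremental indexing: later header lists can
            # refer to this pair by index instead of repeating the bytes.
            dynamic[(name, value)] = len(dynamic) + 1
            out.append(("literal", name, value))
    return out

dyn = {}
first = encode([(":method", "GET"), (":path", "/"),
                ("cookie", "sid=abc123")], dyn)
second = encode([(":method", "GET"), (":path", "/"),
                 ("cookie", "sid=abc123")], dyn)
print(first)   # cookie goes out as a literal the first time
print(second)  # on the repeat, the cookie collapses to a table index
```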
On May 26, 2014 7:45 AM, "Patrick McManus" <pmcmanus@mozilla.com> wrote:

>
> On Mon, May 26, 2014 at 10:07 AM, James M Snell <jasnell@gmail.com> wrote:
>
>> Mark has asked for technical arguments. My point of view is this:
>>
>> From day one, there has never been a strong technical argument *in favor*
>> of HPACK... At least not one that justifies the significant increase in
>> complexity. Header compression qualifies as a "nice to have" but is
>> certainly not critical to preserving http semantics or even the operation
>> of the framing protocol. HPACK should not be a normative requirement in
>> http/2. If our concern is about saving bytes on the wire, there are less
>> complicated ways to do so.
>>
>
> I disagree. The fundamental value of http/2 lies in mux and priority, and
> to enable both of those you need to be able to achieve a high level of
> parallelism. Due to CWND complications, the only way to do that on the
> request path has been shown to be with a compression scheme. gzip
> accomplished that but had a security problem - thus HPACK. Other schemes
> are plausible, and ones such as James's were considered, but some mechanism
> is required.
>
> This is well-worn ground. Forgive me if I don't weigh in on every episode
> re-run while we do the implementation work to actually get this tested and
> deployed. Looking forward to testing with everyone some more.
>
> -P
>
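[Archive note: the "security problem" with gzip that Patrick refers to is the CRIME class of attacks — when attacker-controlled bytes are compressed in the same deflate context as a secret header, a correct guess is absorbed into a back-reference and the output shrinks, so the ciphertext length leaks the secret. A minimal sketch of that length side channel, using a made-up header layout and cookie value:]

```python
import zlib

# CRIME-style length side channel, sketched: the attacker injects a guess
# (e.g. via the URL path) into the same compressed stream as a secret
# cookie header. If the guess matches a prefix of the secret, deflate
# back-references it and the compressed output gets measurably smaller.
# The header layout and cookie value below are invented for illustration.

SECRET = b"cookie: sid=s3cretlongvalue1234"

def compressed_len(attacker_guess: bytes) -> int:
    stream = b"path: /?q=" + attacker_guess + b"\r\n" + SECRET
    return len(zlib.compress(stream, 9))

right = compressed_len(b"cookie: sid=s3cretlong")  # long prefix match
wrong = compressed_len(b"cookie: sid=QWERTYUIOP")  # matches only "cookie: sid="
print(right < wrong)  # the correct guess compresses smaller
```

HPACK sidesteps this by never running a sliding-window compressor across header values: literals are Huffman-coded individually, and sharing happens only at whole-field granularity via table indices.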

Received on Monday, 26 May 2014 15:18:13 UTC