Re: Design FAQs, Was: Our Schedule

The design is more suited to a direct (secure) connection between 
client and server; intermediaries aren't really targeted. For example, 
an intermediary can't stream a header frame until it has seen the 
complete header block, as otherwise a client could block the whole 
connection between the intermediary and the server. I asked before 
about having more than a single reference set, and the answer was no, 
because of the memory requirements: the reference set can be cleared 
with a single op, whereas a grouping / reference-table selection 
mechanism would otherwise be necessary, which was considered too 
complicated. On the other hand, in the HTTP/1.1 world the headers are 
usually processed as a unit too, so there would be room for 
improvement.
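
To make the buffering point concrete, here is a minimal sketch in 
Python (hypothetical names, not from any HTTP/2 implementation) of an 
intermediary that has to collect the complete header block before it 
can re-encode it for the upstream connection:

class Intermediary:
    def __init__(self):
        # Per-stream buffers for header blocks that are not yet complete.
        self.pending = {}

    def on_header_frame(self, stream_id, fragment, end_headers):
        """Collect HEADERS/CONTINUATION fragments for one stream."""
        buf = self.pending.setdefault(stream_id, bytearray())
        buf.extend(fragment)
        if not end_headers:
            # Not safe to start writing upstream yet: a half-emitted
            # HPACK block would pin the upstream connection's shared
            # compression state until the client sends the rest.
            return None
        block = bytes(self.pending.pop(stream_id))
        return self.reencode(block)

    def reencode(self, block):
        # Placeholder: decode with the downstream HPACK context and
        # re-encode with the upstream one; only possible on a complete
        # block.
        return block

Only once the final fragment arrives can the block be re-encoded; 
until then it just sits in the intermediary's memory, which is exactly 
the buffering cost described above.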

Regards,
Roland


On 26.05.2014 21:29, Greg Wilkins wrote:
>
> Mark,
>
> thanks for the recap of the argument for small request size.    I know 
> that the reasoning behind every design decision cannot be put into the 
> draft, but it would be really good if some way could be found that 
> didn't leave such knowledge only in the email archives.    Perhaps an 
> HTTP/2 FAQ would avoid re-runs of frequent discussions.
>
> I certainly accept the reasoning behind wanting to fit many requests 
> into a single CWND to avoid round trips.   I also accept that gzip is 
> not suitable for security reasons.
>
> But I think there are many more FAQs needed to explain HPACK and other 
> aspects of HTTP/2:
>
>  + why is HPACK streaming?  Its design means that common fields like 
> the method are likely to be emitted at the end, thus requiring the 
> whole header block to be buffered anyway, and the server must apply a 
> maximum header size regardless.
>  + why a single reference table?  Won't this be inefficient for 
> connections that aggregate unrelated streams?
>  + why a dynamic reference table that can be mutated by any stream? 
> Reference table(s) that could only be mutated by stream 0 would allow 
> other streams to progress in parallel without serialisation between 
> streams (see the sketch after this list).
>  + Does HPACK really require contiguous header frames without flow 
> control?  It looks like a maximum size will be applied to the initial 
> headers anyway, so with that limit known, head-of-line blocking can be 
> avoided.
>  + I know it has been explained to me before, but the END_STREAM bit 
> that doesn't mean the end of the stream is another FAQ that really 
> needs to be explained.
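
To illustrate the third question above, a rough sketch of the 
serialisation cost (illustrative Python pseudocode with invented 
names, not HPACK itself); the point is only that a connection-wide 
mutable table forces header blocks to be decoded strictly in arrival 
order:

class SharedTableDecoder:
    def __init__(self):
        self.dynamic_table = []   # connection-wide, mutable state
        self.arrival_queue = []   # (stream_id, header_block), in order

    def decode_all(self):
        # Strictly sequential: block N+1 cannot be interpreted until
        # block N has applied its mutations to the dynamic table.
        for stream_id, block in self.arrival_queue:
            yield stream_id, self.decode(block)

    def decode(self, block):
        # Placeholder: interpreting the block also mutates
        # self.dynamic_table, which is what couples otherwise-unrelated
        # streams together.
        return block

If mutations were confined to stream 0, other streams' blocks would be 
read-only against a known table state and could be decoded in parallel.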
>
>
> If the WG wants to get more feedback from a wider audience, then they 
> are just going to get questions like these asked again and again 
> unless some effort is made to proactively explain some of the more 
> surprising aspects of the design.
>
> regards
>
> On 26 May 2014 19:53, Mark Nottingham <mnot@mnot.net> wrote:
>
>     Michael,
>
>     On 27 May 2014, at 2:35 am, Michael Sweet <msweet@apple.com> wrote:
>
>     > Patrick,
>     >
>     > On May 26, 2014, at 10:45 AM, Patrick McManus <pmcmanus@mozilla.com> wrote:
>     >> ...
>     >> I disagree. The fundamental value of http/2 lies in mux and
>     >> priority, and to enable both of those you need to be able to
>     >> achieve a high level of parallelism. Due to CWND complications the
>     >> only way to do that on the request path has been shown to be with
>     >> a compression scheme. gzip accomplished that but had a security
>     >> problem - thus HPACK. Other schemes are plausible, and ones such
>     >> as James's were considered, but some mechanism is required.
>     >
>     > I see several key problems with the current HPACK:
>     >
>     > 1. The compression state is hard to manage, particularly for
>     > proxies.
>     > 2. HEADERS frames hold up the show (issue #481).
>     > 3. There is no way to negotiate a connection without Huffman
>     > compression of headers (issue #485).
>     >
>     > *If* we can come up with a header compression scheme that does
>     > not suffer from these problems, it might be worth the added
>     > complexity in order to avoid TCP congestion window issues.  But
>     > given that we are already facing 3.5 RTTs worth of latency just to
>     > negotiate a TLS connection, I'm not convinced that compressing the
>     > request headers will yield a user-visible improvement in the speed
>     > of the user's web browsing experience.
>
>     The previous discussion that Patrick was referring to has a lot of
>     background.
>
>     In a nutshell, he made an argument for header compression a while
>     back (I can dig up the references if you like), where he basically
>     showed that for a very vanilla page load, merely getting the
>     requests out onto the wire (NOT getting any responses) would take
>     something like 8-11 RTTs, just because of the interaction between
>     request header sizes and congestion windows. This assumes that the
>     page has 80 assets (the average is now over 100, according to the
>     HTTP Archive), and request headers are around 1400 bytes (again,
>     not uncommon).
>
>     In contrast, with compressed headers (his experiment was with
>     gzip), you can serialise all of those requests into one RTT,
>     perhaps even a single packet.
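
A crude version of that arithmetic, for illustration: the 80-request 
and 1400-byte figures are from the paragraph above, while the initial 
congestion window of 4 segments and the ~40-byte compressed request 
size are assumptions (the 8-11 RTT number came from a more detailed 
model than this one).

SEGMENT_BYTES = 1400  # assumed usable payload per TCP segment

def rtts_to_send(total_bytes, init_cwnd=4):
    """Round trips needed to push total_bytes, doubling cwnd each RTT."""
    rtts, sent, cwnd = 0, 0, init_cwnd
    while sent < total_bytes:
        sent += cwnd * SEGMENT_BYTES  # one cwnd's worth per round trip
        cwnd *= 2                     # slow start doubles the window
        rtts += 1
    return rtts

requests = 80
uncompressed = requests * 1400  # ~1400-byte request headers, per the email
compressed = requests * 40      # assumed ~40 bytes/request once compressed

print(rtts_to_send(uncompressed))  # -> 5 RTTs just to emit the requests
print(rtts_to_send(compressed))    # -> 1 RTT; ~3 segments fit in the
                                   #    initial window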
>
>     This is a very persuasive argument when our focus is on reducing
>     end-user perceived latency. It’s especially persuasive when you
>     think of the characteristics of an average mobile connection.
>
>     HPACK is not as efficient as gzip, and as we’ve said many times,
>     our goal is NOT extremely high compression; rather, it’s safety.
>     If we could ignore the CRIME attack, we would use gzip instead,
>     and I don’t think we’d be having this discussion.
>
>     Hope this helps,
>
>     --
>     Mark Nottingham http://www.mnot.net/
>
> -- 
> Greg Wilkins <gregw@intalio.com>
> http://eclipse.org/jetty HTTP, SPDY, Websocket server and client that 
> scales
> http://www.webtide.com  advice and support for jetty and cometd.

Received on Monday, 26 May 2014 22:47:45 UTC