- From: Roberto Peon <grmocg@gmail.com>
- Date: Fri, 11 Jul 2014 19:11:58 -0700
- To: Greg Wilkins <gregw@intalio.com>
- Cc: HTTP Working Group <ietf-http-wg@w3.org>
- Message-ID: <CAP+FsNcsp16uBaQsSoE3o5CS--wHC7qVDO9OrqRKLSQO=LmGpw@mail.gmail.com>
I like the larger frame max-length so long as the max-framesize setting is
there. IMHO, 64k is big enough given that MTU is likely to stay below 64k
for the foreseeable future (~9k is the current jumbo frame!). When we
venture into things like QUIC, smaller frame sizes will become important,
as you really want to fit frames into packets -- that way the encryption
context doesn't require HoL blocking when there is packet loss, and things
like FEC can work.

I don't like things that require buffering at the protocol level -- that
should be an implementation decision.

I think that people are underestimating the value that announcing limits
gives to attackers. It is often better to accept a request/header that you
know you won't serve and just sit on it for a while than it is to reject it
quickly. Rejecting quickly and announcing limits allows the attacker to
easily optimize their attack. Not good.

I don't like interleaving -- it multiplicatively increases the DoS surface
(and makes it significantly worse than it was with HTTP/1).

I do wish that there was a clearer separation of the session layer and
HTTP, but I wasn't able to win that argument way back when.. :)

I think that continuations are ugly in terms of flag handling, but are
otherwise not too difficult to handle. The state machine essentially lends
itself to: see HEADERS, stay in the HEADERS state until you see END_HEADERS
on a frame.

-=R

On Fri, Jul 11, 2014 at 6:43 PM, Greg Wilkins <gregw@intalio.com> wrote:
>
> On 11 July 2014 10:41, Roberto Peon <grmocg@gmail.com> wrote:
>
>> I like some of the parts of what you proposed there, and don't like
>> other parts. I'd prefer discussing that in a separate thread, though,
>> so we don't dilute this conversation too much.
>> -=R
>
> Roberto,
>
> [offlist]
>
> I'm curious as to what parts of this thought bubble you like and what
> bits you don't?
>
> I think the current inability to reach consensus is due to some earlier
> poor design decisions that forced the WG down a path that nobody is
> really happy with, and I fear that the current disputes are really just
> fighting over where to put the deck chairs on the Titanic.
>
> I basically agree with you that headers should be fragmentable and
> interleavable (I also think they should be flow controllable); however,
> that requires changes to HPACK, which I did not think the WG was willing
> to make, hence my proposal for a single frame header (the better of two
> evils with regards to continuations).
>
> However, if the current stalemate and the additional concerns you have
> raised do inspire a new interest in revisiting HPACK, then perhaps there
> might come an opportunity for something a bit more radical, and I might
> consider putting some effort into preparing a real proposal along the
> lines of this thought bubble.
>
> Hence I'm interested in what you did/didn't like about the idea of
> mapping HTTP semantics onto data frames?
>
> Fundamentally I would like the code that has to deal with all the
> framing concerns -- buffering, fragmentation, interleaving, priority,
> aggregation, flow control, etc. -- to be application-protocol neutral.
> It should not know or care whether it is transporting headers or data or
> websockets etc. All it needs to do is give enough indication of where
> the semantics are, so that actors that do need to know some more
> semantics (e.g. a proxy routing algorithm) know where in the stream to
> look to apply higher-level decoding. I think stream segments are
> sufficient for that.
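
A rough illustration (not from the thread) of the HEADERS/CONTINUATION
state machine Roberto describes at the top of this reply: after a HEADERS
frame without END_HEADERS, only CONTINUATION frames for the same stream are
accepted until a frame carries END_HEADERS. The frame-type and flag codes
below match the h2 drafts; the class and field names are invented for the
sketch.

    HEADERS_FRAME = 0x1
    CONTINUATION_FRAME = 0x9
    END_HEADERS_FLAG = 0x4

    class HeaderBlockAssembler:
        """Collects a HEADERS frame plus CONTINUATIONs into one header block."""

        def __init__(self):
            self.open_stream = None   # stream id with an unfinished header block
            self.fragments = []

        def on_frame(self, frame_type, flags, stream_id, payload):
            if self.open_stream is not None:
                # A header block is open: only CONTINUATION on the same
                # stream is legal until END_HEADERS is seen.
                if frame_type != CONTINUATION_FRAME or stream_id != self.open_stream:
                    raise ValueError("PROTOCOL_ERROR: expected CONTINUATION")
                self.fragments.append(payload)
            elif frame_type == HEADERS_FRAME:
                self.open_stream = stream_id
                self.fragments = [payload]
            else:
                return None           # frame is not part of a header block

            if flags & END_HEADERS_FLAG:
                block = (self.open_stream, b"".join(self.fragments))
                self.open_stream, self.fragments = None, []
                return block          # (stream id, complete block) for HPACK decoding
            return None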
>
> Anyway, if you have any additional personal bandwidth, I'd be interested
> in your thoughts.
>
> I don't think the WG is actually ready/interested in such a proposal at
> the moment (hence offlist), but who knows what the future may hold.
>
> cheers
>
>> On Thu, Jul 10, 2014 at 4:38 PM, Greg Wilkins <gregw@intalio.com> wrote:
>>
>>> Roberto,
>>>
>>> I agree that this is a concern, even if large headers are only 0.01%
>>> (or whatever it is) of the traffic.
>>>
>>> But I don't think fragmentation on its own is sufficient. You need
>>> fragmentation and flow control. To achieve this we need to stop
>>> treating headers as a special case and forget about any deadlines by
>>> the end of the year. I would propose that:
>>>
>>> - We remove HEADERS, CONTINUATION and PUSH_PROMISE from the
>>>   specification
>>> - We retain END_SEGMENT in DATA frames
>>> - Streams are created by sending a stream SETTINGS frame with a
>>>   PROTOCOL parameter!
>>>
>>> We now have a multiplexed framing layer that is totally devoid of any
>>> knowledge of HTTP! The framing supports segmented data streams that
>>> are flow controlled and of unlimited size. We then come up with a
>>> mapping of HTTP semantics to this framing layer:
>>>
>>> - HTTP streams start with a SETTINGS frame that has PROTOCOL=h2
>>> - Odd data segments on the stream carry headers/trailers. So a stream
>>>   with 1 segment is just headers. A stream with 3 segments is headers,
>>>   data, trailers, etc.
>>>
>>> Now we have to work out how to encode the headers into those data
>>> frames. The stateless parts of HPACK are a pretty reasonable start:
>>> using Static-H gives a 0.66 compression factor. However, I think there
>>> are probably other alternatives that are less order-dependent -- e.g.
>>> sending the header-set mutations only on stream 0, with normal
>>> decoding not mutating the table. If we wanted to make HTTP a bit of a
>>> special case, we could go to Linear-H, with a 0.31 compression factor,
>>> but then decoding of the headers must be done in the order they are
>>> sent -- making header compression part of the framing layer... but I
>>> could live with a little bit of conflation for efficiency purposes :)
>>>
>>> With this scheme, we could even support lazy proxies that would send
>>> HTTP/1.1 by having PROTOCOL=h1 on a stream and just sending the HTTP/1
>>> bytes unaltered. Websocket could be supported the same way, or it too
>>> could have its own segmented data mapping.
>>>
>>> Even if we go for a less drastic way to do fragmentation of headers, I
>>> think the process has to be the same -- start with data frame
>>> semantics and work out how to transport compressed headers. Don't come
>>> up with a different fragmentation/flow-control regime based on the
>>> content of the frame.
>>>
>>> cheers
>>>
>>> On 11 July 2014 06:27, Roberto Peon <grmocg@gmail.com> wrote:
>>>
>>>> There are two separate reasons to fragment headers:
>>>>
>>>> 1) Dealing with headers of size > X when the max frame-size is <= X.
>>>> 2) Reducing buffer consumption and latency.
>>>>
>>>> Most of the discussion thus far has focused on #1.
>>>> I'm going to ignore it, as those discussions are occurring elsewhere,
>>>> and in quite some depth :)
>>>>
>>>> I wanted to be sure we were also thinking about #2.
>>>>
>>>> Without the ability to fragment headers on the wire, one must know
>>>> the size of the entire set of headers before any of it may be
>>>> transmitted.
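
To make the odd/even segment mapping in Greg's quoted proposal concrete,
here is a small, purely illustrative Python sketch (the function name and
labels are invented, not part of the proposal): counting segments from 1,
odd segments carry headers or trailers and even segments carry data.

    def classify_segments(segment_count):
        """Label each segment of a stream under the odd/even mapping."""
        labels = []
        for i in range(1, segment_count + 1):
            if i % 2 == 1:
                labels.append("headers" if i == 1 else "trailers")
            else:
                labels.append("data")
        return labels

    # A stream with 1 segment is just headers; 3 segments are headers,
    # data, trailers -- matching the examples in the quoted mail.
    assert classify_segments(1) == ["headers"]
    assert classify_segments(3) == ["headers", "data", "trailers"]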
>>>>
>>>> This implies that one must encode the entire set of headers before
>>>> sending if one will ever do transformation of the headers. Encoding
>>>> the headers in a different HPACK context would count as a
>>>> transformation, even if none of the headers were modified.
>>>>
>>>> This means that the protocol, if it did not have the ability to
>>>> fragment, would require increased buffering and increased latency for
>>>> any proxy by design.
>>>>
>>>> This is not currently true for HTTP/1 -- the headers can be
>>>> sent/received in a streaming fashion, and implementations may, at
>>>> their option, choose to buffer in order to simplify code.
>>>>
>>>> -=R
>>>
>>> --
>>> Greg Wilkins <gregw@intalio.com>
>>> http://eclipse.org/jetty HTTP, SPDY, Websocket server and client that
>>> scales
>>> http://www.webtide.com advice and support for jetty and cometd.
>
> --
> Greg Wilkins <gregw@intalio.com>
> http://eclipse.org/jetty HTTP, SPDY, Websocket server and client that
> scales
> http://www.webtide.com advice and support for jetty and cometd.
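
Roberto's buffering point in the quoted mail above can be shown with a toy
re-encoding proxy. This is only a sketch under stated assumptions:
encode_header and emit_frame are hypothetical callables standing in for an
HPACK encoder and a frame writer, and the 16 KiB threshold is an arbitrary
example; none of these names come from the thread or any real
implementation.

    FRAGMENT_LIMIT = 16 * 1024  # arbitrary example threshold, not a spec value

    def forward_headers_fragmented(headers, encode_header, emit_frame):
        """With wire fragmentation: fragments go out as soon as they fill up."""
        buf = bytearray()
        for name, value in headers:
            buf += encode_header(name, value)  # re-encode in this hop's HPACK context
            if len(buf) >= FRAGMENT_LIMIT:
                emit_frame(bytes(buf), end_headers=False)
                buf.clear()
        emit_frame(bytes(buf), end_headers=True)

    def forward_headers_unfragmented(headers, encode_header, emit_frame):
        """Without fragmentation: the whole block is buffered before any byte is sent."""
        block = b"".join(encode_header(n, v) for n, v in headers)
        emit_frame(block, end_headers=True)

The first variant is what HTTP/1 and CONTINUATION-style framing allow; the
second is what a single-frame-per-header-block design forces on any proxy
that re-encodes headers, which is the increased buffering and latency the
quoted mail describes.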
Received on Saturday, 12 July 2014 02:12:27 UTC