Re: HEADERS and flow control

One more implication:

You can still get deadlocked if a sender wishes to send a header
representation whose encoding is larger than the largest send window the
receiver is willing to allow.
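
For concreteness, a toy sketch of that stall condition (the names and
numbers below are mine, purely illustrative, not from any draft):

    # Under a "don't start sending HEADERS until the whole encoded block
    # fits in the stream's send window" rule, a block larger than the
    # biggest window the receiver will ever grant can never be sent.

    def can_ever_send(encoded_block_size, max_window_granted):
        # The sender waits for window >= encoded_block_size, but the
        # receiver never advertises more than max_window_granted.
        return encoded_block_size <= max_window_granted

    assert can_ever_send(8192, 65535)        # fits, eventually
    assert not can_ever_send(100000, 65535)  # stalled forever: deadlock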


On Thu, May 29, 2014 at 1:27 PM, Johnny Graettinger <
jgraettinger@chromium.org> wrote:

> On Thu, May 29, 2014 at 5:52 AM, Greg Wilkins <gregw@intalio.com> wrote:
>
>>
>> I think the simplest solution is to just include all headers in the flow
>> control calculations.
>>
>
>> I believe that the argument against this (i.e. deadlocks) is erroneous,
>> because a server may still reject important requests if it is resource
>> constrained, so deadlock is not avoided, it is just moved to the
>> unknown.  With headers included in flow control, clients would have a
>> solid contract with the server and would know ahead of time whether a
>> request can be sent or not.
>>
>> Senders should not commence sending headers unless there is sufficient
>> window available to send all the frames.
>>
>
>
> This would be my preference also, but:
>
> It significantly complicates encoding. A header block must be encoded to
> determine its size, and the act of encoding changes encoder state. An
> encoder must have a checkpoint or "undo" mechanism to throw out changes
> from the overflowing representation.
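
[Aside: a rough sketch of the kind of checkpoint/undo this implies, using a
toy encoder of my own invention (not HPACK, and not any real
implementation), just to make the shape of the mechanism concrete:]

    import copy

    class ToyEncoder:
        """Stand-in for an HPACK-like encoder whose state changes as it encodes."""
        def __init__(self):
            self.dynamic_table = []  # mutated as a side effect of encoding

        def encode_block(self, headers):
            out = bytearray()
            for name, value in headers:
                literal = ("%s: %s" % (name, value)).encode()
                self.dynamic_table.insert(0, (name, value))       # state change
                out += len(literal).to_bytes(2, "big") + literal  # fake opcode
            return bytes(out)

    def encode_if_it_fits(encoder, headers, send_window):
        checkpoint = copy.deepcopy(encoder.dynamic_table)  # checkpoint
        block = encoder.encode_block(headers)
        if len(block) > send_window:
            encoder.dynamic_table = checkpoint             # undo
            return None                                    # caller must wait or give up
        return block
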
>
> An encoded header block can also be larger than the stream's send window
> (particularly if a server elects to use a small initial window that it
> later ramps up, à la Expect: 100-continue). How does a sender ask for enough
> window to send the block?
>
> A way to express flow-control commitment without sending the actual bytes
> would, for example, allow an RPC protocol layered on HTTP/2 to negotiate
> flow control by first committing the uncompressed header block size.
>
>
>
>> If we really need to support headers larger than can fit in a single
>> frame, then we could add a flag indicating that subsequent header sets
>> should be aggregated, i.e. an HPACK header set would have to be sent in a
>> single frame (making the ordering of decoding easy), but subsequent sets
>> could be aggregated to make larger header sets.  This allows other streams
>> to interleave frames between the aggregated header frames.
>>
>
>
> Let me tweak this slightly; I think your suggestion is equivalent to:
>
> * Requiring that HEADER/PUSH_PROMISE/CONTINUATION frames be broken on
> HPACK opcode boundaries.
> * Flow-controlling these frames under the stream & session.
> * Allowing other streams' frames to interleave between continuations.
>
>
> I think this has the following implications:
>
> The expect-continuation / don't-expect-continuation state machine is moved
> from the session to the stream. This doesn't seem like a big deal.
>
> A sender can always make progress with a non-zero send window. Padding may
> have to be used to completely fill the window (which allows sending
> BLOCKED).
>
> A session's decoder is always in a consistent shared state after a
> HEADER/PUSH_PROMISE/CONTINUATION frame.
>
> A session's encoder needs to do more work. For each emitted representation
> (and importantly, *not* for the entire block), an HPACK encoder is
> required to check whether the resulting opcode would overflow the window,
> and to not commit it if so. There are some corner cases (e.g., an encoder
> might prefix a representation's opcode with other index opcodes that it
> would evict; example
> <https://code.google.com/p/chromium/codesearch#chromium/src/net/spdy/hpack_encoder.cc&q=hpack_encoder&sq=package:chromium&l=65>).
> Still, I think this is certainly do-able, though it may be a significant
> change for existing implementations.
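
[Aside: continuing the toy sketch above, a per-representation emission loop
might look roughly like this; again, invented names, not chromium's encoder.
Any eviction-prefix opcodes would have to be counted in the same
commit-or-don't decision.]

    def fill_frame(encoder, pending_headers, frame_window):
        """Emit as many whole representations as fit in frame_window."""
        frame = bytearray()
        while pending_headers:
            name, value = pending_headers[0]
            checkpoint = list(encoder.dynamic_table)
            opcode = encoder.encode_block([(name, value)])  # one representation
            if len(frame) + len(opcode) > frame_window:
                encoder.dynamic_table = checkpoint          # don't commit it
                break                                       # resume in a later frame
            frame += opcode
            pending_headers.pop(0)
        end_headers = not pending_headers
        return bytes(frame), end_headers
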
>
> On balance I think this is an interesting option.
>
> cheers,
> -johnny
>
>
>> On 28 May 2014 23:31, Johnny Graettinger <jgraettinger@chromium.org>
>> wrote:
>>
>>> Looping back to the OP:
>>>
>>> Under the current draft, one way in which peers could effectively
>>> negotiate flow-control for HEADERS is to first send empty DATA frame(s) padded
>>> to the HEADERS size. This could be made efficient if DATA frames were able
>>> to express flow-control commitment beyond the wire size of the frame. Is
>>> there interest in this?
>>>
>>> There are lots of ways this could be conveyed, but the least disruptive
>>> may be as a DATA frame with padding larger than the frame size.
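
[Aside: a rough sketch of that shape, with the wire format elided and the
helper names invented; the point is only that the flow-control debit for the
headers gets negotiated through the existing DATA path first.]

    def reserve_window_for_headers(conn, stream_id, encoded_headers_len):
        # Hypothetical connection API. Send a DATA frame carrying no data,
        # padded to the size of the encoded HEADERS; the receiver debits its
        # window by that amount and returns it via WINDOW_UPDATE only when it
        # is prepared to accept that much.
        conn.send_data(stream_id, data=b"", pad_length=encoded_headers_len)
        conn.wait_for_window(stream_id, at_least=encoded_headers_len)
        # Now a HEADERS block of the same size is known to be acceptable.
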
>>>
>>>
>>> On Wed, May 28, 2014 at 10:08 AM, Martin Thomson <
>>> martin.thomson@gmail.com> wrote:
>>>
>>>> On 28 May 2014 09:35, Greg Wilkins <gregw@intalio.com> wrote:
>>>> > If the resource constrained server does not have the resources to
>>>> > accept the 250B extra header, it can RST_STREAM, but it still has to
>>>> > process the headers, because of the shared state table.  So if the
>>>> > server really is resource constrained, and wants to limit the
>>>> > resources of each connection, then it won't just RST_STREAM, it will
>>>> > GOAWAY the whole connection - and all the work in progress on all the
>>>> > other streams will be lost!
>>>>
>>>> Yes, if you can't tolerate the work that updating the header table
>>>> requires, then I suspect that you might find you are best off dropping
>>>> connections.
>>>>
>>>> I don't see any intrinsic problem with this.  We've delegated the
>>>> state commitment management to the HTTP layer: the 431 status code,
>>>> specifically.  That makes more sense to me, since header processing is
>>>> a function of that layer.  RST_STREAM remains as a secondary option.
>>>> GOAWAY as a measure of last resort.
>>>>
>>>>
>>>
>>
>>
>> --
>> Greg Wilkins <gregw@intalio.com>
>> http://eclipse.org/jetty HTTP, SPDY, Websocket server and client that
>> scales
>> http://www.webtide.com  advice and support for jetty and cometd.
>>
>
>

Received on Thursday, 29 May 2014 17:37:03 UTC