Re: Overhead of HTTP/2 Stream Management.

I wrote a full server implementation, except for HPACK (working on that now).
As for a client implementation, I tested using a header-modifier plugin for
Firefox, but a full browser or extension would be out of my skill set. As for
submitting a draft, I emailed Mark; I don't know how to submit one myself.

On Sun, Apr 5, 2015 at 4:50 PM, Roberto Peon <grmocg@gmail.com> wrote:

> If that is true, then you may want to write up a server and client
> implementation, test/deploy it in the real world, and then write up/submit a
> draft.
> -=R
>
> On Sun, Apr 5, 2015 at 4:37 PM, Max Bruce <max.bruce12@gmail.com> wrote:
>
>> Well, I proposed an HTTP/1.1 addition to Mark that prevents HOL blocking
>> and allows server push in HTTP/1.1, with 100% backwards compatibility and
>> relative ease of implementation, and it can even start HPACK without any
>> direct protocol negotiation. TCP is in charge of ensuring that content gets
>> where it needs to go; if the TCP connection is closed prematurely, that's
>> something the client just has to re-request. As for the connection/stream
>> overhead, you initialize TCP only once and have zero stream overhead, using
>> 1-2 HTTP headers that allow server push and prevent HOL blocking if
>> implemented on both client and server (and if not, the server can prepare
>> the responses before the client asks for them).
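>>
>> Concretely, a minimal sketch of what the exchange could look like on the
>> wire (the header names below are purely illustrative, nothing is fixed yet):
>>
>>     # Client tags a normal HTTP/1.1 request with an ID ("Response-ID" is
>>     # a hypothetical name):
>>     request = (
>>         b"GET /index.html HTTP/1.1\r\n"
>>         b"Host: example.com\r\n"
>>         b"Response-ID: 1\r\n"
>>         b"\r\n"
>>     )
>>
>>     # The server echoes the ID, so responses can come back out of order,
>>     # and can push a resource by naming an unused ID plus its request path
>>     # ("Push-Request" is likewise hypothetical):
>>     pushed_response = (
>>         b"HTTP/1.1 200 OK\r\n"
>>         b"Response-ID: 2\r\n"
>>         b"Push-Request: /style.css\r\n"
>>         b"Content-Length: 0\r\n"
>>         b"\r\n"
>>     )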
>>
>> On Sun, Apr 5, 2015 at 4:15 PM, Greg Wilkins <gregw@intalio.com> wrote:
>>
>>>
>>> Max,
>>>
>>> I don't see much difference between the stream overheads of HTTP/2 vs
>>> the connection overheads of HTTP/1.
>>>
>>> Both need open/close state kept and even in HTTP/1 that state is
>>> moderately complex as you can be half closed in and/or half closed out; the
>>> response can complete before the request; there are multiple sources of
>>> events (application vs network) that can race on state changes; the server
>>> has a requirement to reliably deliver a serialised event stream to the
>>> application without duplicates or loopbacks.    Unless the server/client
>>> keeps good atomic state on the open/closed status, there are going to be
>>> lost events and/or leaked resources.
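>>>
>>> A rough sketch (illustrative Python, not Jetty's actual code) of the kind
>>> of atomic open/close bookkeeping I mean:
>>>
>>>     import threading
>>>     from enum import Enum, auto
>>>
>>>     class StreamState(Enum):
>>>         OPEN = auto()
>>>         HALF_CLOSED_LOCAL = auto()   # we finished sending; peer may still send
>>>         HALF_CLOSED_REMOTE = auto()  # peer finished sending; we may still send
>>>         CLOSED = auto()
>>>
>>>     class Stream:
>>>         def __init__(self):
>>>             self._state = StreamState.OPEN
>>>             self._lock = threading.Lock()
>>>
>>>         def on_local_end(self):
>>>             # Application and network events can race, so the transition
>>>             # has to be applied atomically or events get lost.
>>>             with self._lock:
>>>                 if self._state is StreamState.HALF_CLOSED_REMOTE:
>>>                     self._state = StreamState.CLOSED
>>>                 elif self._state is StreamState.OPEN:
>>>                     self._state = StreamState.HALF_CLOSED_LOCAL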
>>>
>>> In jetty the vast majority of this state overhead is in common code used
>>> by both HTTP/1 and HTTP/2.    This code used to be a lot simpler in older
>>> versions of our HTTP/1 server... but it was wrong code that missed many
>>> edge cases when presented with fully asynchronous applications.   Closing
>>> asynchronous streams is just complex and multiplexing changes that very
>>> little. It just makes the network event stream look a little different and
>>> some events are distributed to multiple listeners.
>>>
>>> cheers
>>>
>>>
>>> On 6 April 2015 at 06:08, Max Bruce <max.bruce12@gmail.com> wrote:
>>>
>>>> The way HTTP works though, we don't need streams in such a conventional
>>>> and TCP-like way. We only need multiplexed packets to carry data over, so
>>>> just associate request/response pairs with an ID, and allow server push via
>>>> the server sending the request path in a header too. Why do we even need a
>>>> frame structure? It's unnecessary overhead. Same with the virtual streams.
>>>>
>>>> On Sun, Apr 5, 2015 at 11:57 AM, Willy Tarreau <w@1wt.eu> wrote:
>>>>
>>>>> On Sun, Apr 05, 2015 at 11:45:53AM -0700, Max Bruce wrote:
>>>>> > My thought is that you just don't use so much overhead. You don't get
>>>>> > rid of stream IDs, you just don't need so many complex things
>>>>> > surrounding them. Example: you append a header to an HTTP/1.1 request
>>>>> > with a response ID, and the server responds with it. The server can
>>>>> > push responses by sending an unsent ID & request path in a header.
>>>>>
>>>>> You still need the stream IDs in the frames themselves so that you know
>>>>> which stream each frame belongs to.
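>>>>>
>>>>> Just to make that cost concrete: the fixed frame header is 9 octets, a
>>>>> 24-bit length, 8-bit type, 8-bit flags, and a reserved bit plus the
>>>>> 31-bit stream identifier. An illustrative sketch:
>>>>>
>>>>>     import struct
>>>>>
>>>>>     def frame_header(length, frame_type, flags, stream_id):
>>>>>         # 24-bit length (packed as 8 + 16 bits), then type, flags,
>>>>>         # and the 31-bit stream ID (high bit reserved, kept zero).
>>>>>         return struct.pack(">BHBBI",
>>>>>                            (length >> 16) & 0xFF, length & 0xFFFF,
>>>>>                            frame_type, flags,
>>>>>                            stream_id & 0x7FFFFFFF)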
>>>>>
>>>>> Multiplexed systems always look simple at first, until you start to
>>>>> implement them, cover the corner cases (eg: who closes first etc) and
>>>>> you finally realize once everything is done how much your system looks
>>>>> like tcp...
>>>>>
>>>>> There was an elegant (in my opinion) simplification in H/2 compared to
>>>>> other systems: the stream IDs are always incremented until the largest
>>>>> encodable ID is reached, at which point a new connection must be used.
>>>>> I find this elegant because you don't need to keep track of IDs in use
>>>>> vs available ones and it really simplifies a number of things (eg: no
>>>>> risk of having late frames from an old stream reusing the same ID).
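>>>>>
>>>>> As an illustration (not taken from any particular implementation), the
>>>>> whole allocation logic fits in a few lines precisely because IDs are
>>>>> never reused:
>>>>>
>>>>>     MAX_STREAM_ID = 2**31 - 1  # largest ID encodable in the 31-bit field
>>>>>
>>>>>     class StreamIdAllocator:
>>>>>         def __init__(self):
>>>>>             self._next_id = 1  # client-initiated streams use odd IDs
>>>>>
>>>>>         def allocate(self):
>>>>>             if self._next_id > MAX_STREAM_ID:
>>>>>                 # No wrapping or reuse: the connection is exhausted and
>>>>>                 # a new one must be opened.
>>>>>                 raise ConnectionError("stream IDs exhausted")
>>>>>             stream_id = self._next_id
>>>>>             self._next_id += 2
>>>>>             return stream_id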
>>>>>
>>>>> It doesn't please me either to have to implement such a complex system
>>>>> but I am absolutely convinced that it can hardly be simplified further
>>>>> as long as we want non-blocking, multiplexed streams. I have already
>>>>> implemented multiplexed streams in the past for some projects, and it
>>>>> resulted in almost the same design (but more complex).
>>>>>
>>>>> Willy
>>>>>
>>>>>
>>>>
>>>
>>>
>>> --
>>> Greg Wilkins <gregw@intalio.com>  @  Webtide - *an Intalio subsidiary*
>>> http://eclipse.org/jetty HTTP, SPDY, Websocket server and client that
>>> scales
>>> http://www.webtide.com  advice and support for jetty and cometd.
>>>
>>
>>
>

Received on Sunday, 5 April 2015 23:57:06 UTC