
Re: Our Schedule

From: Greg Wilkins <gregw@intalio.com>
Date: Mon, 26 May 2014 13:35:22 +0200
Message-ID: <CAH_y2NGbb==3RfVB=t5dPfj0F+-9yhqUULECtWtwRof9XeH3_A@mail.gmail.com>
To: Mark Nottingham <mnot@mnot.net>
Cc: HTTP Working Group <ietf-http-wg@w3.org>
Mark,

I accept that some of my comments and phrasing are not exactly
constructive, but they are intended to be helpful.  My impression is
that the WG has been dealing with things issue by issue, and that in
isolation each decision makes sense, but taken together the sequence
of individual issue resolutions has led the protocol architecture away
from some good norms, and new issues keep appearing.

My post was not intended to expand on any particular point of concern, but
rather to point out that I see many significant concerns all over the
protocol, which I believe have a common root in some architectural problems,
such as the decision to break layering in order to support HPACK.
I need to evaluate how I can best contribute to the process, and I'm not
convinced that engaging in a detailed issue-by-issue debate will be
constructive if significant architectural review is required.  The
risk is that you become too busy fighting crocodiles and forget that we
came here to drain the swamp!

Hence I took the opportunity of this schedule thread to question the
resolve of the WG to push through to LC and RFC on the basis of the current
draft.  If it is the case that an RFC based largely on draft 12 is
possible by the end of the year, then I had best start fighting crocodiles
(and more keep coming out of the swamp).  But if there is still some
chance of significant architectural change, then my best contribution may
be to point out that the presence of a large amount of wildlife may
indicate some systematic problems - which is what I'm attempting to do.

So that is the main point I wish to make in this response... but I will
address some individual crocodiles point by point below (so reading on is
probably neither constructive nor helpful in the context of this thread).



On 26 May 2014 07:10, Mark Nottingham <mnot@mnot.net> wrote:

> Hi Greg,
>
> On 24 May 2014, at 7:43 pm, Greg Wilkins <gregw@intalio.com> wrote:
>
> Thanks for making a concrete proposal here. I’ve created
> <https://github.com/http2/http2-spec/issues/484> to track this.
>

I'll try to expand on the state diagram in the next few days with a more
compact version and some supporting text (however, I think that it is a good
diagram inasmuch as it does not require very much supporting text -
closed actually means closed!).


>
> HPACK has been implemented interoperably by many. It’s also been reviewed
> for security issues, and so far the risk has been felt to be acceptable.
>
>
Another of my concerns with HPACK is that the encoder has many strategic
choices to make that will affect both efficiency and security.
Interoperability testing is not sufficient to establish whether correct
strategic decisions have been made, nor whether implementations are robust
under a wide range of encoding strategies.
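To illustrate with a toy model (this is not the real HPACK wire format, and
the function and names below are purely hypothetical), two encoders can emit
different but equally interoperable encodings of the same headers; only one
of the strategies keeps sensitive values out of the shared compression
context, and no interop test distinguishes them:

```python
# Toy illustration of an encoder's strategic choice for every header:
# index it for a better compression ratio, or send it as a literal that
# is never indexed.  Both encodings decode to the same headers, so
# interoperability tests cannot tell a safe strategy from one exposed
# to compression side channels (cf. CRIME-style probing attacks).

def encode(headers, index_sensitive):
    """Return a toy encoding: 'I' ops add the entry to a shared table
    (compressible, but probeable by an attacker sharing the context);
    'L' ops are never-indexed literals (larger on the wire, but safe)."""
    ops = []
    for name, value in headers:
        sensitive = name.lower() in ('cookie', 'authorization')
        if sensitive and not index_sensitive:
            ops.append(('L', name, value))   # never-indexed literal
        else:
            ops.append(('I', name, value))   # indexed entry
    return ops

headers = [(':path', '/'), ('cookie', 'session=s3cr3t')]
print(encode(headers, index_sensitive=True))   # everything indexed
print(encode(headers, index_sensitive=False))  # secret kept as a literal
# Both decode to the same two headers; only the second strategy keeps
# the secret out of the shared compression state.
```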

> If you have a concrete issue regarding HPACK security or its operation,
> please bring it up; making sweeping, predictive statements doesn’t really
> move us forward.
>

See above for my justification for sweeping statements that are intended to
be helpful.  But another concern I have with HPACK is that a single
per-connection reference set is both a source of parallel slowdown and
insufficient to encode well connections that aggregate streams from
different sources.

I do see that HPACK can be very efficient, but I have not yet found the
justification for why HTTP/2 needs such aggressive compression.
It is very easy to make significant gains over HTTP/1.1, so going for
a less invasive compression would seem a more prudent step.  My experience
from SPDY is that we are going to get most of the gains from multiplexing,
reduced round trips and push.  Such an aggressive compression
algorithm does not seem worth the risk of reducing the uptake of those
other good attributes (dang - making sweeping predictive statements again).


>
> >       • Users of the protocol, are able to send data as headers, which
> is unconstrained by size, flow control or segmentation.   Client-side and
> Server-side applications are bound to collude to exploit this to obtain an
> unfair share of any shared HTTP/2 connections/infrastructure, which will
> eventually break the fundamental paradigm around which the protocol is
> based and eventually force the use of multiple connections again.
>
> Can you explain this a bit more, perhaps with an example?
>
>
A core aspect of HTTP/2 is about fairly sharing a single HTTP connection.
In the initial usage, this is envisioned to be shared by streams from the
one web application, but there will also be wider sharing: between tabs on
a browser talking to the same service; between streams from different
clients aggregated onto a connection by an intermediary.   But data sent as
headers rather than in data frames is not subject to the fair share
mechanisms.  It is not flow controlled, nor can it be segmented and
interleaved with data from other streams.

Thus if there is contention for shared connections, then a huge incentive
exists to utilise headers for data transfer rather than data
frames.  Initially, application access to this is somewhat limited by
the browsers, and to some extent by server header limits.  But we have
already seen that browsers are prepared to abuse the RFC to gain
performance advantages over competitors (see the connection-limit arms
race), and browser/server vendors are not the only gatekeepers for access to
shared infrastructure.  If the incentive exists, then it is possible and
likely that developers will create clients and servers that collude to send
their applications' data via headers rather than data frames, so that
they may gain an unfair share of any common connections.

Simply put, one user may grab exclusive use of the entire HTTP/2 connection
capacity by sending a HEADERS frame and then continuing to stream data in
headers encoded as CONTINUATION frames forever.
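A hypothetical sketch of that abuse (the helper names are mine; the frame
header layout assumed is the draft-12 8-octet one): the colluding client
leaves END_HEADERS unset and streams its payload in CONTINUATION frames,
none of which count against any flow-control window:

```python
# Sketch of the abuse: a client opens one stream with a HEADERS frame
# that never sets END_HEADERS, then streams payload in CONTINUATION
# frames.  CONTINUATION frames are not DATA frames, so they consume no
# flow-control window, and the header block must be forwarded
# contiguously - no other stream's frames may be interleaved.
import struct

HEADERS, CONTINUATION = 0x1, 0x9
END_HEADERS = 0x4

def frame(ftype, flags, stream_id, payload):
    # Draft-12-style 8-octet frame header: length, type, flags, stream id.
    return struct.pack('>HBBI', len(payload), ftype, flags, stream_id) + payload

def colluding_sender(chunks, stream_id=1):
    """Yield an unbounded header block: HEADERS with END_HEADERS unset,
    then one CONTINUATION per data chunk (END_HEADERS never set)."""
    yield frame(HEADERS, 0, stream_id, b'')          # block left open
    for chunk in chunks:
        yield frame(CONTINUATION, 0, stream_id, chunk)

frames = list(colluding_sender([b'data0', b'data1', b'data2']))
print(len(frames))  # 4
# No WINDOW_UPDATE is ever needed for these frames, and every other
# stream on the connection is starved while the block is in flight.
```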



> Again, if you have a proposal for layering, we can consider that.
>

I'm thinking that is probably the best way forward for me.  I'll see if I
can prepare a version of the draft that removes the worst of the layering
breaches and see if there is any support for resetting to that.  Perhaps
that effort will convince me that we are on the right path?

Sorry for being non-constructive and provoking a wider meta-discussion.
But I am trying to be helpful in the least disruptive way possible, and I
think that the point of the first call for Last Call is a good time to
take a moment to re-evaluate the big picture.

cheers

-- 
Greg Wilkins <gregw@intalio.com>
http://eclipse.org/jetty HTTP, SPDY, Websocket server and client that scales
http://www.webtide.com  advice and support for jetty and cometd.
Received on Monday, 26 May 2014 11:35:52 UTC
