
Re: Last Call: <draft-ietf-httpbis-http2-16.txt> (Hypertext Transfer Protocol version 2) to Proposed Standard

From: Martin Thomson <martin.thomson@gmail.com>
Date: Tue, 6 Jan 2015 10:23:44 -0800
Message-ID: <CABkgnnV7zrugtG06hU32TPV-MDyvXd=yB1jBSTccAMJeYwdedA@mail.gmail.com>
To: Stefan Eissing <stefan.eissing@greenbytes.de>
Cc: HTTP Working Group <ietf-http-wg@w3.org>

Thanks for the review, Stefan.

As is customary with reviews of documents at this stage, you have
identified issues that fall into the following categories:

1. Things that were controversial, and therefore resulted in what is
probably an awkward compromise.

2. Things that were considered part of assumed knowledge, so didn't
end up getting enough context.

3. Genuine errors, which often result from bad assumptions (see 2).

On 6 January 2015 at 05:13, Stefan Eissing <stefan.eissing@greenbytes.de> wrote:
> 1. "hop-by-hop" vs. "end-to-end" references
> At two locations in the spec (5.2.1 Flow Control, 6.9 WINDOW_UPDATE) the distinction between "end-to-end" vs. "hop-by-hop" is being made without explaining how h2/h2c would work with an intermediate. Maybe that is left undefined intentionally. But if a HTTP2 intermediate is a forward or reverse (CDN) proxy, I think it must always
> - renumber stream identifiers
> - uncompress HEADERs from the upstream and recompress for downstream
> - consider carefully the mapping of stream and connection errors
> So in what way is it meaningful to see HTTP2, the thing carrying HTTP requests/responses, as more than hop-by-hop?

Fundamentally, HTTP/2 (and HTTP/1.1 part 1, which it replaces in part)
is a description of a hop-by-hop component of HTTP.  It is perfectly
valid for an end-to-end exchange to have hops on different protocol
versions.  Thus, an intermediary needs to do those things you
describe, as well as:
 - consider what to do with server pushes (allow, suppress, pass on,
generate new, etc...)
 - prioritize streams (especially if streams from multiple connections
are being multiplexed in either direction)
 - manage flow control windows

I could speculate on why advice on these responsibilities never made
it into the spec, but honestly, it is probably better that we don't
include what could only be partial guidance.  At best, we could
enumerate the protocol features that need special consideration at an
intermediary.  I suspect that that list would be incomplete and
imperfect anyway (there are a lot of cases where a particular decision
about X means that you need to then address Y).
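To make the first of those responsibilities concrete, here is a minimal sketch of stream-ID renumbering at an intermediary. The class and method names are hypothetical, not from the spec; the only facts assumed from RFC 7540 are that stream IDs are scoped to a single connection and that client-initiated streams use odd identifiers.

```python
class StreamMapper:
    """Hypothetical sketch: an intermediary cannot reuse upstream
    stream IDs on its downstream connection, so it keeps a
    per-connection mapping and allocates fresh client-initiated
    (odd) IDs downstream in the order streams arrive."""

    def __init__(self):
        self._next_downstream_id = 1  # client-initiated IDs are odd
        self._map = {}

    def translate(self, upstream_id):
        # Allocate a downstream ID the first time we see this stream.
        if upstream_id not in self._map:
            self._map[upstream_id] = self._next_downstream_id
            self._next_downstream_id += 2
        return self._map[upstream_id]
```

A real intermediary would also need the reverse mapping (to route frames from the downstream connection back to the right upstream stream), plus per-direction handling for pushes, priorities, and flow-control windows as listed above.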


> 2. PUSH_PROMISE and header completeness
> How are cache sensitive headers like "Authorization" handled in PUSH_PROMISE request headers? If simply left out, the client could store pushed resources in a cache where they do not belong. Should servers use extra Cache-Control directives in such pushed responses?

This is part of the same sort of advice that your first question
touches on: how do you effectively implement feature X.  Here, the
rules are pretty easy to infer.  If you have a resource that would
send Vary for a header field that the server cannot produce on its own
then server push probably won't work.  Authorization is a great
example.  The server might be able to produce *some* Authorization
header fields, particularly if one was present in the request that
the push is associated with.
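The inference rule can be written down directly. This is a sketch, not spec text; the function name and inputs are hypothetical, but the rule is the one above: if the resource varies on a field the server cannot produce for the pushed request, don't push it.

```python
def safe_to_push(vary_fields, producible_fields):
    """Sketch of the inference above: a response that varies on a
    header field the server cannot produce on its own (for example,
    Authorization) should not be pushed.  Fields the server *can*
    produce include any it can copy from the associated request."""
    producible = {f.lower() for f in producible_fields}
    return all(f.lower() in producible for f in vary_fields)
```

So a push for a resource with `Vary: Authorization` is safe only when the associated request carried an Authorization header field the server can copy into the synthesized push request.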

> Related: how is the role of the ACCEPT-* request headers? Should a client reject PUSH_PROMISE with request headers that does not match its own ACCEPT-* preferences? What if it uses ACCEPT-* and the PUSH_PROMISE is lacking those? Or the other way around?

The same applies here.  A client uses the rules in Section 4 of RFC
7234 to determine whether a cached response can be used to satisfy its
request.  If a client sends Accept: x/y and a server has pushed a
response that included Vary: Accept and a content type that didn't
match the client's preferred Accept header field, then the client
should probably make a request rather than use the push.
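The matching step the client performs is the one from RFC 7234, Section 4.1: every field named in Vary must match between the presented request and the request the stored (here, pushed) response is associated with. A minimal sketch, with hypothetical names and headers represented as lowercase-keyed dicts:

```python
def cache_match(request_headers, stored_request_headers, vary):
    """Sketch of RFC 7234, Section 4.1 selection: a stored response
    matches only if every header field named in Vary has the same
    value in the presented request as in the stored request.
    'Vary: *' never matches."""
    if "*" in vary:
        return False

    def norm(headers, field):
        # Normalize whitespace and case; real implementations may
        # apply field-specific normalization beyond this.
        return " ".join(headers.get(field.lower(), "").split()).lower()

    return all(norm(request_headers, f) == norm(stored_request_headers, f)
               for f in vary)
```

Applied to the example above: the client's request carries `Accept: x/y`, the push's associated request carried `Accept: a/b`, and the response says `Vary: Accept`, so the match fails and the client makes its own request.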


> 3. Clarification on server-initiated push?
> In discussions with colleagues some had the notion that HTTP2 would allow server initiated "requests". My reading of the draft is that this is not really the case. Server pushes are only defined for streams opened by the client.
> - Is this the correct reading of the spec?
> - If yes, has HTTP2 any advise how to best do long polling or what is the recommended alternative?

Patrick has addressed that.  If you want a worked example of
"long-polling" in HTTP/2, see
https://martinthomson.github.io/drafts/draft-thomson-webpush-http2.html


> 4. SETTINGS_MAX_HEADER_LIST_SIZE as advisory
> It seems undefined what a client (library) should do with it. Will this not give rise to interop problems if one client respects it and fails requests immediately while another does no checks and sends them anyway? MUST a peer that announces a limit always reply with 431 to requests that exceed it?

Yes, this is a little nebulous, but intentionally.  If you consider an
end-to-end protocol with multiple hops, the value that is actually
enforced is the lowest value from all of the servers in the path of a
request.  Since each request might follow different paths, the best
that *this* protocol can do is provide information on the value
enforced by the next hop (who knows if the next hop is even HTTP/2).

The server is not required to send 431 if the size exceeds this: maybe
some resources can handle streamed header fields, maybe some resources
are forwarded to different origin servers.

If you can't think of a concrete action to take based on this setting,
I would ignore it.


> 5. Graceful connection shutdown
> GOAWAY seems to serve two purposes:
> - inform peer of highest process stream before closing, so that retries can be done safely
> - initiate a graceful connection shutdown
> What is expected of a client upon receiving a GOAWAY with Last-Stream-Id 2^31-1, e.g. graceful shutdown?
> - it must no longer create new streams
> - it should expect a future GOAWAY with a lower stream id
> - it can expect responses on half-closed streams?
> - it should RST all incoming PUSH_PROMISEs?
> (And of course, a connection can always simply break at any time...)

I don't think that clients should expect a GOAWAY with a lower stream
identifier.  It is valid for a server to say: if you sent me a
request, I might have done something, so you can't retry, ever.  It's
bad manners, but still valid.

Servers might send another GOAWAY, at which point clients might revise
their view of which streams can be retried.

Clients that see a GOAWAY do need to stop making new streams.  But if
the connection isn't broken, they should continue to look for
responses to the requests that they made (up to the stream ID the
server included in the GOAWAY).  That includes server push.  Clients
MAY reset pushed streams, of course, but they aren't required to do
so.

And yes, the connection might break at any time.  At which point, you
are stuck with the last value for last-stream-id that you've seen,
which implicitly starts at 2^31-1.
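That retry bookkeeping fits in a few lines. A sketch of the client side, with hypothetical names; the two facts it encodes are from the discussion above: the boundary implicitly starts at 2^31-1, and a later GOAWAY may lower it but never raise it.

```python
class GoawayTracker:
    """Sketch of client-side GOAWAY bookkeeping.  Streams with an ID
    above Last-Stream-ID were not processed by the server and can be
    retried safely; streams at or below it may have been acted on."""

    def __init__(self):
        # Before any GOAWAY (or after an abrupt break with none seen),
        # the effective value is 2^31 - 1: nothing is known-unprocessed.
        self.last_stream_id = 2**31 - 1

    def on_goaway(self, last_stream_id):
        # A subsequent GOAWAY may revise the boundary downward only.
        self.last_stream_id = min(self.last_stream_id, last_stream_id)

    def can_retry(self, stream_id):
        return stream_id > self.last_stream_id
```

A GOAWAY carrying 2^31-1 therefore signals "stop opening streams, keep reading responses", and a later GOAWAY with a lower value tells the client which of its outstanding requests are safe to replay.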


> 6. Optimisation: Default SETTINGS values same for clients and servers?
> I would think that clients have different preferences for defaults than servers. Especially for INITIAL_WINDOW_SIZE. Since differences to defaults need to be send for every HTTP2 connections, are the current values good?

I believe that these values have all been vetted in that way.  They
aren't universally good, of course, but the cost of changing them is
trivial; I don't think anyone has suggested that saving that tiny
number of bytes would make a substantial difference.  In fact, I
believe that Firefox sends some settings with their default values
anyway.

> Related: if a peer sends/receives an empty SETTINGS at connection start, do acknowledgements serve any purpose?

Yes, to ensure that there is consistent handling of the message.
Nothing more than that.  Spending the bytes isn't a big concern (and
for this one, the cost can be deferred if there are more important
things to send, though 9 bytes doesn't really hurt that much).
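Those 9 bytes are just the HTTP/2 frame header: a SETTINGS acknowledgement carries no payload, so the whole frame is the header with the ACK flag set. A sketch of the encoding (the function name is mine; the layout is RFC 7540, Section 4.1: 24-bit length, 8-bit type, 8-bit flags, 31-bit stream identifier):

```python
import struct

def settings_ack_frame():
    """Encode an empty SETTINGS frame with the ACK flag set.
    SETTINGS is frame type 0x4; ACK is flag 0x1; an ACK MUST have no
    payload, so the frame is exactly the 9-byte frame header on
    stream 0."""
    length, frame_type, flags, stream_id = 0, 0x4, 0x1, 0
    return struct.pack(
        "!BHBBI",               # 1 + 2 + 1 + 1 + 4 = 9 octets
        (length >> 16) & 0xFF,  # high byte of the 24-bit length
        length & 0xFFFF,        # low 16 bits of the length
        frame_type,
        flags,
        stream_id,              # top bit reserved, must be 0
    )
```
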


> 7. Opinion: Chapter 9.1 Limitation to single connection
> Have we not been here before? In the past, such SHOULD NOTs have not been very helpful. (Most likely already discussed heavily on the list...no strong feelings about this. It will be ignored anyway if not proving useful.)

Indeed we have.  And when HTTP/1.1 advised a limit of 2 connections,
that was done despite the fact that it was introducing a real
limitation on the usability of the protocol.  2 wasn't a special
number, it was just something someone made up (as is 6); 1 is one of
the three important numbers, not an arbitrary choice.  Here, we have
no such limitation because there is no corresponding limit on
concurrency within the protocol.  There are still transport-level
limitations (INIT_CWND, for instance), but I think some people expect
to be lifting those over time.  I think that the working group is
firmly intent on making this one stick.

--Martin
Received on Tuesday, 6 January 2015 18:24:11 UTC
