
Re: Minor points in draft 12

From: Martin Thomson <martin.thomson@gmail.com>
Date: Mon, 19 May 2014 09:38:44 -0700
Message-ID: <CABkgnnWMV9o_t572tc8Ghqzn3OcFZQs8v=W=hGJw0FR62xM7uA@mail.gmail.com>
To: "Richard Wheeldon (rwheeldo)" <rwheeldo@cisco.com>
Cc: HTTP Working Group <ietf-http-wg@w3.org>

Hi Richard,

Sorry about taking so long to get around to this.

https://github.com/http2/http2-spec/commit/77c47588c3180f24487a623c8e5b4d90853fe420

On 2 May 2014 23:47, Richard Wheeldon (rwheeldo) <rwheeldo@cisco.com> wrote:
> "Pad High:  Padding size high bits.  This field is only present if the PAD_HIGH flag is set."
> "Pad Low:  Padding size low bits.  This field is only present if the PAD_LOW flag is set."
> "E: A single bit flag indicates that the stream dependency is exclusive, see Section 5.3.  This field is optional and is only present if the PRIORITY flag is set."
> It's a bit ambiguous just from the text if the fields are there but set to all zero or missing completely from the byte stream? If the bytes were missing it would make no sense for a single bit to be so.

The bytes are omitted entirely.  I'm not sure how to make this any
clearer.  If you have any suggestions, that would help.
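
To illustrate what "omitted entirely" means, here's roughly what a
receiver does (non-normative Python sketch; the flag values follow the
draft-12 DATA frame definition):

```python
# Non-normative sketch: reading the optional Pad High / Pad Low fields.
# When a flag is unset, the corresponding octet is absent from the wire
# entirely -- not present-and-zero.  Flag values are from the draft-12
# DATA frame definition (PAD_LOW = 0x8, PAD_HIGH = 0x10).
PAD_LOW = 0x08
PAD_HIGH = 0x10

def read_pad_length(flags, payload):
    """Return (pad_length, rest_of_payload)."""
    pos = 0
    pad = 0
    if flags & PAD_HIGH:
        pad = payload[pos] << 8   # octet on the wire only if PAD_HIGH is set
        pos += 1
    if flags & PAD_LOW:
        pad |= payload[pos]       # octet on the wire only if PAD_LOW is set
        pos += 1
    return pad, payload[pos:]
```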

> "The client makes an HTTP/1.1 request that includes an Upgrade header field identifying HTTP/2 with the "h2c" token."
> What should happen if the client sends an "h2" upgrade? Ignore or error?

That's really a matter for HTTPbis p1, but I've added text requiring
that servers ignore "h2" in upgrade and "h2c" in ALPN.

Because I think that this needs to be discussed, here's the text I've added:

          A server SHOULD ignore a "h2" token in an Upgrade header field.
          Presence of a token with "h2" implies HTTP/2 over TLS, which is
          instead negotiated as described in <xref target="discover-https"/>.
and
          HTTP/2 over TLS uses the "h2" application token.  The "h2c"
          token MUST NOT be used.

I don't think we can totally prohibit the use of "h2" in Upgrade,
though we need to be pretty strong.

> "The maximum concurrent streams setting is specific to each endpoint and applies only to the peer that receives the setting."
> I'm assuming this is a hop-by-hop setting which an intermediary can change but it's not 100% clear from the text. It makes sense to me to have them independent - e.g. If the browser can support more streams than the server but the proxy can serve more streams by serving content from a cache. Alternatively a proxy could support a higher degree of concurrency to the server by combining requests from multiple clients if those clients are requesting static, PII-free resources from a server (e.g. a CDN).

All settings are hop-by-hop because they pertain to a connection.

> "Endpoints MUST NOT exceed the limit set by their peer."
> Presumably it would be an error if they did? I'm guessing PROTOCOL_ERROR?

I moved the statement a little...

            Endpoints MUST NOT exceed the limit set by their peer.  An
            endpoint that receives a <x:ref>HEADERS</x:ref> frame that
            causes their advertised concurrent stream limit to be
            exceeded MUST treat this as a <xref
            target="StreamErrorHandler">stream error</xref>.

Note that stream errors can be turned into connection errors at the
discretion of the implementation.
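
In pseudo-Python, the check I have in mind looks something like this
(names are mine, not the spec's):

```python
# Illustrative sketch of the new requirement: receiving a HEADERS frame
# that would push the peer past our advertised limit is a stream error.
# REFUSED_STREAM is one plausible code; the choice of code, and whether
# to escalate to a connection error, is up to the implementation.
class StreamError(Exception):
    pass

def on_headers(open_streams, stream_id, advertised_max_concurrent):
    if len(open_streams) >= advertised_max_concurrent:
        raise StreamError("stream %d: concurrent stream limit exceeded"
                          % stream_id)
    open_streams.add(stream_id)
```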

> "END_SEGMENT (0x2):"
> As discussed elsewhere on the list, the term segment is used before it is defined. Since the doc later talks about "TCP segments" this becomes doubly confusing.

An open issue that we don't really have a good solution for just yet.
Maybe because I've never been particularly enamoured with the idea of
segments.

> "The maximum size of a frame payload varies by frame type."
> It would be good if these were laid out explicitly at some point, rather than simply implied by reading the protocol docs. It might encourage folks to write better validation code earlier.

It is.  Each frame type that stipulates a maximum other than the
global maximum lays it out.

> "DATA frames MAY also contain arbitrary padding."
> Padding with what? Zeroes or random noise? Before compression or after? The former answers would be simpler, the latter probably more secure.

"Padding octets MUST be set to zero when sending and ignored when receiving."

The point about compression is valid though.  It happens after compression.
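
So, concretely, the sender compresses first and then appends zero
octets (a sketch; `zlib` here just stands in for whatever coding
actually applies):

```python
import zlib

# Sketch of the ordering: the payload is compressed first, then zero
# padding octets are appended, so padding never feeds the compressor.
# Padding octets MUST be zero when sending; receivers ignore their value.
def build_padded_payload(data, pad_length):
    compressed = zlib.compress(data)   # stand-in for the applicable coding
    return compressed + b"\x00" * pad_length

def strip_padding(payload, pad_length):
    # Receivers simply drop the trailing pad_length octets.
    return payload[:len(payload) - pad_length]
```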

> "SETTINGS_HEADER_TABLE_SIZE ... The initial value is 4,096 bytes."
> Shouldn't min and max values be defined for this as well?

We talked about that.  Minimum is zero.  Some folks were concerned
that a receiver could effectively mount a denial of service by setting
a maximum to 2^32-1.  But the only thing that does in practice is to
deny access to the static table.

> "Upon receiving a SETTINGS frame with the ACK flag set, the sender of the altered parameters can rely upon their application."
> I'm tempted to think a line mentioning what to do in the interim might be helpful - i.e. to assume the previous values. I can imagine some very subtle and evil bugs introduced if this is not done (e.g. for header compression table sizes).

I think that the implication is strong enough here.  Happy to take
text suggestions, of course.
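
The implied behaviour, sketched out (my names, not the spec's): the
sender keeps relying on the previous values until the ACK arrives.

```python
# Sketch of the implied rule: altered parameters only take effect (from
# the sender's point of view) once the peer's SETTINGS ACK arrives; in
# the interim, the previously acknowledged values still apply.
class PendingSettings:
    def __init__(self, initial):
        self.acked = dict(initial)  # values the peer is known to apply
        self.pending = None         # sent, but not yet ACKed

    def send(self, changes):
        self.pending = dict(changes)

    def on_ack(self):
        if self.pending is not None:
            self.acked.update(self.pending)
            self.pending = None

    def effective(self):
        # Until the ACK, assume the previous values.
        return self.acked
```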

> "R: A single reserved bit." (under PUSH_PROMISE)
> "The payload of a WINDOW_UPDATE frame is one reserved bit"
> Might be worth adding the usual "always set to zero, always ignore" blurb for completeness.

That's in the lead-in.  I want to avoid unnecessary repetition.

> "A receiver MUST treat the receipt of any other type of frame or a frame on a different stream as a connection error"
> Maybe I'm missing something, but why the restriction on a frame on a different stream? I don't believe there's anywhere else in the protocol that such a restriction is made.

Yes, CONTINUATION is special, and it's intentional.

> "Recipients of PUSH_PROMISE frames can choose to reject promised streams by returning a RST_STREAM referencing the promised stream identifier back to the sender of the PUSH_PROMISE."
> With what error code? REFUSED_STREAM?

Whatever they like.

> "The PUSH_PROMISE frame modifies the connection state as defined in Section 4.3. A PUSH_PROMISE frame modifies the connection state in two ways. "
> Surely that first sentence is redundant?

Removed.

> "8.1.2. Examples... :method"
> I think it would be better if the concept of these pseudo-headers (currently in 8.1.3) was introduced before they were used in examples.

Yeah, I moved the section down below the definition of header fields.

> "host=example.org"
> I don't understand why this example uses a "host" header when the following one uses ":authority".

That's a mistake.

> "The authority MUST NOT include the deprecated "userinfo" subcomponent for "http" or "https" schemed URIs."
> So what do we do with the userinfo? In particular, I'm thinking of cases for HTTP/2 to HTTP/1.1 or FTP-over-HTTP.

Strip it.  That's what you are required to do today.  It's used to
construct authentication parameters.

> "CONNECT ... Frame types other than DATA or stream management frames (RST_STREAM, WINDOW_UPDATE, and PRIORITY) MUST NOT be sent on a connected stream, and MUST be treated as a stream error (Section 5.4.2) if received."
> What happens if a client receives a HEADER frame? Does it need to be minimally processed (for maintaining compression state)?

Yes.  The normal processing rules apply.

> Plus there are a few typos I noticed:

Got 'em, thanks.

Received on Monday, 19 May 2014 16:39:13 UTC
