Re: Benjamin Kaduk's Discuss on draft-ietf-httpbis-http2bis-06: (with DISCUSS and COMMENT)

I have a few responses inline to your comments.  I've deleted those that I more or less directly accepted.  There were some good ones there, but I see no need to add to the size of this email.  See the pull request for details:

On Thu, Jan 6, 2022, at 04:43, Benjamin Kaduk via Datatracker wrote:
>    *  The ranges of codepoints for settings and frame types that were
>       reserved for "Experimental Use" are now available for general use.

Good catch.  That text was lost in a refactoring of the IANA Considerations section; the removal was an error.

> There are a number of places in the section-by-section comments where I
> compare this document to the text in the quic-http spec; those comparisons
> were made mostly in vacuum, and can safely be ignored if there are known
> reasons for divergence between h/2 and h/3.
> Section 4.1
> I recall that quic-http has a note on the generic frame format that each
> frame's payload must contain exactly the fields listed, with no additional
> bytes and not terminating before the end of the listed fields, and that
> redundant length-encodings must be verified for consistency.  While I don't
> actually see any redundant length encodings in the frame types specified in
> this document (but maybe there is some redundancy when HPACK is added into
> the mix?), the other cautions from h/3 might be worth mentioning here as
> well.

HTTP/2 has no redundant lengths other than the frame Length field itself, which is often redundant for frames with fixed-size payloads.  We have adequate text, but it is on a per-frame basis.

HTTP/3 is allowed to be better here.  I think we're OK without a change.
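
To illustrate the per-frame checks I mean, here is a minimal sketch (names and structure are mine, not from the draft) of how the Length field is validated for the fixed-size frame types, where it is redundant:

```python
# Illustrative sketch: HTTP/2 frame types with a fixed payload size.
# The frame header's Length field is redundant for these, and the spec
# requires a FRAME_SIZE_ERROR when it does not match.
FIXED_PAYLOAD_SIZES = {"PING": 8, "RST_STREAM": 4, "WINDOW_UPDATE": 4}

def check_frame_length(frame_type, length):
    """Return an error code if the declared length is inconsistent."""
    expected = FIXED_PAYLOAD_SIZES.get(frame_type)
    if expected is not None and length != expected:
        return "FRAME_SIZE_ERROR"  # connection error in these cases
    return None  # variable-length frames are checked per-frame elsewhere
```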

> Section 5.1
>    half-closed (local):  A stream that is in the "half-closed (local)"
>       [...]
>       An endpoint can receive any type of frame in this state.
>       [...]
>       PRIORITY frames can be received in this state.
> I'm not sure whether this redundancy is adding much.  In RFC 7540 the last
> sentence was followed by a note about "used to reprioritize streams that
> depend on the identified stream", but since PRIORITY is basically neutered
> now it may not need a special mention.

I would prefer to keep this, as it is a common implementation error.  It's easy to forget to allow any frame type in this state, especially now that PRIORITY is deprecated.
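
The sort of mistake I mean is easy to see in a table-driven receiver; this sketch (names hypothetical) shows the asymmetry an implementer might forget:

```python
# Illustrative sketch of frame acceptance by stream state.  In
# "half-closed (local)" the endpoint has stopped sending, but the peer
# may still send anything, so every frame type must be accepted.  In
# "half-closed (remote)" only a small set is permitted.
RECEIVABLE = {
    "half-closed (local)": "any",  # the case that is commonly forgotten
    "half-closed (remote)": {"WINDOW_UPDATE", "PRIORITY", "RST_STREAM"},
}

def may_receive(state, frame_type):
    allowed = RECEIVABLE.get(state)
    if allowed == "any":
        return True
    return allowed is not None and frame_type in allowed
```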

>    closed:  The "closed" state is the terminal state.
>    [...]
>       An endpoint that sends a frame with the END_STREAM flag set or a
>       RST_STREAM frame might receive a WINDOW_UPDATE or RST_STREAM frame
>       from its peer in the time before the peer receives and processes
>       the frame that closes the stream.
> Could such a client also receive PUSH_PROMISE frames in this situation?
> The following paragraph (about the "open"->"closed" via sending RST_STREAM
> transition) does mention receiving any frame (and PUSH_PROMISE specifically)
> and the need to update compression state, but if I understand correctly
> PUSH_PROMISE can happen after a "half-closed (local)"->"closed" transition
> due to sending RST_STREAM as well, which does not seem covered by the
> existing text.

The next paragraph covers that case but neglected to mention "half-closed (local)", which it should have.  I think that covers your other points about this.
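
The key obligation in that paragraph is keeping header compression state in sync even for frames arriving on a closed or reset stream.  A rough sketch of that handling (function and parameter names are mine):

```python
# Illustrative sketch: frames that carry a field block must still be run
# through the HPACK decoder on a closed stream, because decoding updates
# the connection-wide dynamic table; the decoded fields are discarded.
HEADER_BEARING = {"HEADERS", "PUSH_PROMISE", "CONTINUATION"}

def handle_frame_on_closed_stream(frame_type, field_block, decode):
    if frame_type in HEADER_BEARING:
        decode(field_block)  # keep shared HPACK state synchronized
    return None              # nothing is delivered to the application
```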

> Section 5.1.2
>    An endpoint that wishes to reduce the value of
>    SETTINGS_MAX_CONCURRENT_STREAMS to a value that is below the current
>    number of open streams can either close streams that exceed the new
>    value or allow streams to complete.
> Is there a race condition here between the peer continuously creating new
> connections and the endpoint closing connections to try to lower the value?
> In particular, what happens if the SETTINGS that reduces the value crosses
> on the wire with the frame creating a stream that would exceed the value?

The new limit only applies after the peer acknowledges the change, so as long as the peer stays within the limit that is currently acknowledged, it is not in violation of the protocol.  This text is really about what to do with streams that are nominally within the limit but in excess of what the endpoint really wants.
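
A minimal sketch of that synchronization point (class and method names are illustrative, not from the spec):

```python
# Illustrative sketch: an endpoint lowering SETTINGS_MAX_CONCURRENT_STREAMS
# can only enforce the new value once the peer has acknowledged the
# SETTINGS frame; streams opened before the ACK are judged against the
# previously acknowledged limit, so there is no race.
class StreamLimiter:
    def __init__(self, initial_limit):
        self.acked_limit = initial_limit  # limit the peer has acknowledged
        self.pending_limit = None         # reduced value sent, not yet ACKed

    def send_settings(self, new_limit):
        self.pending_limit = new_limit    # not enforceable until ACKed

    def on_settings_ack(self):
        if self.pending_limit is not None:
            self.acked_limit = self.pending_limit
            self.pending_limit = None

    def peer_stream_allowed(self, open_streams):
        return open_streams < self.acked_limit
```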

> Section 8.4.1
>    The header fields in PUSH_PROMISE and any subsequent CONTINUATION
>    frames MUST be a valid and complete set of request header fields
>    (Section 8.3.1).  The server MUST include a method in the :method
>    pseudo-header field that is safe and cacheable.  If a client receives
>    a PUSH_PROMISE that does not include a complete and valid set of
>    header fields or the :method pseudo-header field identifies a method
>    that is not safe, it MUST respond with a stream error (Section 5.4.2)
>    of type PROTOCOL_ERROR.
> Up in §8.4 we seemed to talk about the same (non-safe) scenario by saying
> that the promised stream being reset with the PROTOCOL_ERROR.  But I read
> this text ("respond with a stream error") as applying to the explicit
> request to which the promised request is associated.  Is that the intent?
> If not, perhaps we should say "respond on the promised stream ID".

Good question.  The error exists with the promise, not the request, so I think the latter: the stream error should be signaled on the promised stream.

> Section 9.1
>    Clients SHOULD NOT open more than one HTTP/2 connection to a given
>    host and port pair, where the host is derived from a URI, a selected
>    alternative service [ALT-SVC], or a configured proxy.
> quic-http has similar text (in §3.3), but it refers to a given IP address
> and port, rather than host and port.  Is the difference between host and IP
> address significant when comparing h/2 and h/3?  (When using IP addresses,
> we of course have to additionally talk about name resolution of the other
> types of identifier.)

I honestly don't know.  I think host is perhaps better in this case, in the sense that clients aim to connect to hosts, and connection coalescing is not a requirement, just permitted (as noted in the text that follows).  I'm not sure that it ultimately matters, but it's worth checking.

Perhaps Mike Bishop can help us here.

> Section 10.5.2
>    The CONNECT method can be used to create disproportionate load on a
>    proxy, since stream creation is relatively inexpensive when compared
>    to the creation and maintenance of a TCP connection.  A proxy might
>    also maintain some resources for a TCP connection beyond the closing
>    of the stream that carries the CONNECT request, since the outgoing
>    TCP connection remains in the TIME_WAIT state.  Therefore, a proxy
>    cannot rely on SETTINGS_MAX_CONCURRENT_STREAMS alone to limit the
>    resources consumed by CONNECT requests.
> Looking at the diff from this text to §10.5.2 of quic-http, I see the latter
> has a new sentence about "a proxy that supports CONNECT might be more
> conservative in the number of simultaneous requests it accepts", and also
> has some different phrasing in the last sentences about "might delay
> increasing the stream limit after a TCP connection terminates" vs "cannot
> rely on SETTINGS_MAX_CONCURRENT_STREAMS alone".  I think that the overall
> issues in this space remain pretty analogous between h/3 and h/2, so we
> might want to pull in more of the updated h/3 text here.

The mitigations available in QUIC are not viable for HTTP/2, so I think we're OK with that difference.  (Again, HTTP/3 is allowed to be better.)

> Separately, do we want to mention [TALKING] again here and the issues it
> raises?

I don't think those issues are specific to CONNECT or uniquely worse with CONNECT.

Received on Friday, 7 January 2022 04:14:19 UTC