Re: HTTP/2 extensibility <draft-ietf-httpbis-http2-17>

Hi Bob,

Thanks for your input.  Just two notes, to hopefully provide more
context for the decisions on this point.

On 5 March 2015 at 04:55, Bob Briscoe <bob.briscoe@bt.com> wrote:
> Achieving this milestone on time has been impressive. I understand the
> reasons for having to get something standardised. However, I see potential
> problems. And that would be fine, but only if there were a more granular
> mechanism to extend the protocol to fix it in future.

This was the subject of lengthy discussion, a good part of which you
won't find on the mailing list unfortunately.  However, there was
pretty strong consensus for the model you see before you.

Basically, there is a tension between the desire to be arbitrarily
extensible and the competing desires for both protocol robustness and
actually finishing on time.

Robustness (or robust interoperability) was considered top priority
for HTTP/2, trumping even the concerns you describe, which were
debated at some length.  To that end, the general principle driving the
design here was to make it very easy to detect when a peer has
diverged from the well-defined core protocol.  Detecting those errors
immediately and failing requests and connections ensures that
implementation errors do not persist long term.  Instead, they get
fixed.
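
To make that concrete, here is a rough sketch in Python (the names
are invented; the rules and error codes are the ones the draft
defines for SETTINGS) of how an implementation detects divergence
and fails the connection rather than guessing at the sender's
intent:

  import struct

  # Error codes defined by the spec (section 7).
  PROTOCOL_ERROR = 0x1
  FRAME_SIZE_ERROR = 0x6
  ACK_FLAG = 0x1  # the SETTINGS ACK flag

  class H2ConnectionError(Exception):
      """Tears down the whole connection, per the spec."""
      def __init__(self, code, reason):
          self.code = code
          super().__init__(reason)

  def parse_frame_header(octets):
      # 9-octet frame header: 24-bit length, 8-bit type, 8-bit flags,
      # one reserved bit, then a 31-bit stream identifier.
      length = int.from_bytes(octets[0:3], "big")
      frame_type = octets[3]
      flags = octets[4]
      stream_id = struct.unpack("!I", octets[5:9])[0] & 0x7FFFFFFF
      return length, frame_type, flags, stream_id

  def check_settings(length, flags, stream_id):
      # SETTINGS applies to the connection as a whole; a non-zero
      # stream identifier means the peer has diverged from the core
      # protocol, so the connection fails immediately.
      if stream_id != 0:
          raise H2ConnectionError(PROTOCOL_ERROR, "SETTINGS on a stream")
      # A SETTINGS ACK carries no payload.
      if flags & ACK_FLAG and length != 0:
          raise H2ConnectionError(FRAME_SIZE_ERROR, "ACK with payload")
      # Settings are 6-octet identifier/value pairs, so any other
      # length implies a truncated entry.
      if length % 6 != 0:
          raise H2ConnectionError(FRAME_SIZE_ERROR, "partial setting")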

Part of the reason HTTP/1.1 is such a pain to work with is the
numerous places where flexibility is permitted. Variations in HTTP/1.1
implementations have become entrenched, forcing new implementations to
include all the same ugly hacks or risk interoperability failures.
An interoperable HTTP/1.1 implementation is
MUCH harder to build than it should be.
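
To pick one concrete example: a battle-hardened HTTP/1.1 header
parser tends to accumulate tolerances like the ones sketched below
(Python, names invented), each one there because some deployed peer
depends on it:

  def parse_header_line(line, headers):
      # Strictly, each tolerance here is a violation or obsolete
      # grammar (RFC 7230 deprecates obs-fold and forbids whitespace
      # before the colon), yet real traffic exercises all of them.
      line = line.rstrip("\r\n")           # accept bare LF line endings
      if line[:1] in (" ", "\t") and headers:
          # obs-fold: a folded continuation of the previous value
          name, value = headers[-1]
          headers[-1] = (name, value + " " + line.strip())
          return
      name, _, value = line.partition(":")
      headers.append((name.rstrip().lower(), value.strip()))  # "Name : v"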

Many of the implementers of HTTP/2 had experience with the pain that
causes, and so we were very careful to identify places where
implementations could deviate (to the extent possible) and
aggressively foreclose on those.

You might like to think of this from another perspective: Postel's
famous statement[1] is not a principle that guides protocol design; it
is in fact a fatalistic observation about the entropic decay of
protocols.

Finally, designing good extensibility is really, really hard.  It
takes time.  And as the saying goes, shipping is a feature.

> For instance, a number of potential issues around DoS are left open. If the
> protocol has to be hardened against new attacks, I believe the recommended
> extensibility path is to design and implement a completely new protocol
> release, then upgraded endpoints negotiate it during the initial handshake.
> The timescale for such a process is measured in years, during which the
> vulnerability has to be lived with. Surely we need more granular
> extensibility; to introduce new frame types and structures, and/or to
> deprecate outdated/vulnerable ones.

I don't know what others think, but I would expect that a small,
necessary change to the protocol could be deployed by those affected
in much the same timescale as an extension might be.  I do appreciate
that calling something HTTP/3 to fix a small DoS bug might *seem*
drastic, but the only real risk is that people with an axe to grind
will try to get other changes in when you do that.
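
For a sense of what that deployment looks like: HTTP/2 is selected
with TLS ALPN, so a revised protocol is just one more ALPN token.
A minimal Python sketch (the "h2-dosfix" token is invented purely
for illustration):

  import ssl

  ctx = ssl.create_default_context()
  # Offer the hypothetical fixed version first, then fall back.
  ctx.set_alpn_protocols(["h2-dosfix", "h2", "http/1.1"])

  # After the handshake, SSLSocket.selected_alpn_protocol() reports
  # the version both ends support; patched endpoints converge on the
  # new token with no in-protocol extension machinery at all.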

(I'll also note that your concerns are largely only relevant in the
presence of intermediaries, for what that is worth.)
