Re: [Fwd: I-D ACTION:draft-decroy-http-progress-00.txt]

I confess that I have not had (and probably won't have) the time
to read this thread very carefully.

But I should point out that the "100-continue" mechanism, and
especially how different versions of that spec interoperate, has
been fairly carefully analyzed in a conference paper, and it
might be a good idea to read that paper before taking any new steps.

Since Alex Rousskov actually pointed this out here 4.5 years ago,

   <http://lists.w3.org/Archives/Public/ietf-http-wg/2002JulSep/0032.html>

I will just quote from his message:

   FYI: There was an interesting paper presented at a recent Web Caching
   Workshop[1]: "Safe Composition of Web Communication Protocols for
   Extensible Edge Services" by Adam D. Bradley, Azer Bestavros, and
   Assaf J. Kfoury (Boston University). The title is scary but does not
   say much about the study. You can get a PDF version of the paper [2].

[In fact, you would never guess from the title that it analyzes this
aspect of HTTP! --Jeff]

   The authors used a formal model to find problems with
   Expect/100-continue implementations. If their results are correct,
   there are a few more-or-less realistic cases where a combination of
   compliant and semi-compliant HTTP devices may lead to communication
   deadlocks or other bad things.

   [1] http://2002.iwcw.org/
   [2] http://2002.iwcw.org/papers/18500001.pdf

As I recall, the paper concludes that RFC2616 more or less gets
the design right, whereas the design in RFC2068 was flawed, and
even so, there are "legal" ways of implementing RFC2616 that could
lead to deadlocks.  In a mixed environment (2068 + 2616 implementations
trying to interoperate), deadlock is possible.  (So I would be
very careful about trying to design a new protocol mechanism that is
intended to interoperate with existing RFC2616 implementations.)

The main point of this paper, however, is that it's nearly
impossible to get this kind of protocol "right" (with respect
to certain desirable criteria, such as liveness) unless you
do a formal analysis.  Thinking hard about a few example cases
is a good start, but it doesn't necessarily avoid the bad cases
you hadn't thought of.

My guess is that we were able to get the RFC2616 version "almost
right" mostly because we were lucky.  And even though I had a
lot to do with the final design, I no longer remember clearly how
it works or how we got there.  (And sorry, I certainly couldn't
do this kind of formal analysis myself.  Maybe some grad student
is looking for a project.)

Also: this thread has touched briefly on how proxies should
deal with HTTP version numbers.  Please see RFC2145 -
"Use and Interpretation of HTTP Version Numbers."  It's not
just a good idea, it's the law.
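For readers who haven't opened RFC2145: the rule it states is that the
version number on a message reflects the sender's own capabilities, not
the version it happened to receive — a server picks the highest version
it complies with whose major number does not exceed the request's.  A
minimal sketch of that selection rule (my own function name and
parameters, not from the RFC):

```python
def response_version(request_version: str,
                     supported=((1, 0), (1, 1))) -> str:
    """Choose the version to put on a reply, per the RFC2145 rule:
    the highest version we support whose major number is less than
    or equal to the major number of the request."""
    req_major = int(request_version.split("/")[1].split(".")[0])
    candidates = [v for v in supported if v[0] <= req_major]
    major, minor = max(candidates)
    return "HTTP/%d.%d" % (major, minor)
```

So a 1.1 server answers HTTP/1.1 even to an HTTP/1.0 request — which is
exactly the behavior that confuses proxies that forward version numbers
blindly.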

-Jeff

Received on Thursday, 15 February 2007 01:13:00 UTC