
Re: 1xx Clarification

From: John Franks <john@math.nwu.edu>
Date: Mon, 21 Apr 1997 11:35:52 -0500 (CDT)
To: Dave Kristol <dmk@bell-labs.com>
Cc: mogul@pa.dec.com, http-wg@cuckoo.hpl.hp.com
Message-Id: <Pine.SUN.3.95.970421112036.10420A-100000@hopf.math.nwu.edu>
X-Mailing-List: <http-wg@cuckoo.hpl.hp.com> archive/latest/3106
On Mon, 21 Apr 1997, Dave Kristol wrote:

> Jeff Mogul wrote:
> > P.S.: Or we should eliminate "100 Continue" and the two-phase
> > mechanism entirely.  We added it because Roy (and maybe a few
> > other server implementors?) wanted to be able to reject long
> > requests without reading the whole message.  But it clearly
> > has led to a lot of complexity for all implementors (clients,
> > servers, and proxies) and introduces some unavoidable overheads.
> > My own feeling is that it is probably not such a big deal for
> > a server to have to bit-bucket a large request body once in
> > a while.  Especially compared to the added complexity that
> > the two-phase model implies.
> I think the two-phase commit increases the odds that implementors of HTTP/1.1
> clients and servers will "get it wrong", leading to poor interoperation.  The
> reason is that time-dependent problems are much harder to test and find.
> Jeff's suggestion of a bit-bucket is pretty easy to implement, and clients
> (which don't require any special coding for this) and servers are both likely
> to get it right.
> I believe the argument against a bit-bucket is that the server has to waste
> resources to consume the incoming bits, and network bandwidth gets wasted at a
> time when we're trying to reduce HTTP-induced network bandwidth.  It's hard to
> know how much of a problem either of these *really* is.  Does anyone have
> numbers for how often servers reject PUT/POST because they can't accept the
> content?  My guess is it's not a big problem yet.  Can we afford to defer the
> solution until it is? 
> Dave Kristol
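
[Editorial note: a minimal sketch, not part of the original message, of the "bit-bucket" approach Dave describes: the server reads and discards the body of a rejected request so the connection remains usable. The function name and interface are my own invention for illustration.]

```python
import socket

def drain_body(conn: socket.socket, content_length: int, chunk: int = 8192) -> int:
    """Read and discard exactly content_length bytes from conn.

    Returns the number of bytes actually discarded (which may fall
    short if the client closes the connection early).
    """
    discarded = 0
    while discarded < content_length:
        data = conn.recv(min(chunk, content_length - discarded))
        if not data:  # client closed the connection early
            break
        discarded += len(data)
    return discarded
```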

I haven't been following the "100 Continue" discussion, but today I
read all the relevant sections of the spec and I must say I am 
confused.  Here are a few of my questions:

All questions apply to a transaction between a 1.1 origin server
and a 1.1 client.

1) Is it legal for the server never to send "100 Continue" and instead
just close the connection for requests it does not wish to grant?
Since the server can always close the connection whenever it wants,
the real question is whether the client will hang indefinitely waiting
for a "100 Continue" or an error message.  An old version of the spec
had a five-second pause, but that is now gone.
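
[Editorial note: for concreteness, a rough sketch, not from the spec, of what a client-side two-phase exchange might look like. The wait-then-send-anyway fallback and the timeout value are assumptions here, precisely because the current draft no longer specifies a pause.]

```python
import socket

def two_phase_post(host: str, port: int, path: str, body: bytes, wait: float = 3.0) -> bytes:
    """Send headers, wait briefly for "100 Continue" (or an error),
    then send the body.

    Assumption: if no interim response arrives within `wait` seconds,
    the client gives up waiting and sends the body anyway.
    """
    headers = (
        f"POST {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Content-Length: {len(body)}\r\n"
        f"\r\n"
    ).encode("ascii")
    sock = socket.create_connection((host, port))
    sock.sendall(headers)
    sock.settimeout(wait)
    try:
        first = sock.recv(4096)  # hope for "HTTP/1.1 100 Continue"
        if first and b" 100 " not in first.split(b"\r\n", 1)[0]:
            return first         # server sent a final (error) status instead
    except socket.timeout:
        pass                     # no interim response; send the body anyway
    sock.settimeout(None)
    sock.sendall(body)
    return sock.recv(4096)       # final response
```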

2) Is it the intent that *every* client request with a body should
send its headers and then wait for a "100 Continue" or an error?  The
spec does not explicitly say which transactions use this "two-phase"
procedure.  It only says,

  "Upon receiving a method subject to these requirements from an
  HTTP/1.1 (or later) client, an HTTP/1.1 (or later) server MUST either
  respond with 100 (Continue) status and continue to read from the input
  stream, or respond with an error status."

Which transactions are "subject to these requirements"?  Is it any
request with a body?  If so, the spec should say this.

3) Does a client wait for a "100 Continue" only on a retry?  If so,
does the server always send the "100 Continue", or only on retries?
If the latter, how does the server know the request is a retry?

John Franks 	Dept of Math. Northwestern University
Received on Monday, 21 April 1997 09:46:34 UTC
