Re: Proposal: 100-Continue optional under Client control

On Fri, 4 Jul 1997, Koen Holtman wrote:

> David W. Morris:
> >
> [...]
> >
> >I have just reviewed RFC 2068 and find no indication that 100 (Continue)
> >is a hop-by-hop mechanism.
> 
> Maybe we mean something different when we say hop-by-hop.  
> 
> What I meant is that the message transmission requirements and binary
> exponential backoff happen between *client* and *server* (these are
> the words the spec uses everywhere), not between *user agent* and
> *origin server*.
> 
> In a chain of clients relaying a request, it would be up to each
> individual client to decide whether to wait for a 100.


We use the term server when we don't choose to differentiate between
proxies and servers. The proposed Expected: header is not a hop-by-hop
header as described in the RFC, nor is 100 (Continue) a hop-by-hop
mechanism.

For example, section 10.1.1, which describes 100 (Continue), states:
   The server
   MUST send a final response after the request has been completed.
The clear implication to me is:
   - The origin server MUST send the final response
   - Any proxies must forward the final response
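
To make that reading concrete, here is a minimal client-side sketch
(Python, raw sockets, hypothetical host and resource, single recv() calls
and no error handling) of the sequencing I believe the spec describes:
the client sends its headers, waits a bounded time for the interim 100
(Continue), sends the body, and in every case still expects a final
response which originates at the origin server and is forwarded by any
proxies in the chain.

    import socket

    # Hypothetical host, port, resource, and body -- illustration only.
    HOST, PORT = "example.com", 80
    body = b"field=value"

    headers = (
        "PUT /resource HTTP/1.1\r\n"
        "Host: example.com\r\n"
        "Content-Length: {}\r\n"
        "\r\n"
    ).format(len(body)).encode("ascii")

    with socket.create_connection((HOST, PORT)) as sock:
        sock.sendall(headers)          # send the headers, hold the body back
        sock.settimeout(5)             # don't wait forever for the 100 (Continue)
        try:
            interim = sock.recv(4096)  # hope for "HTTP/1.1 100 Continue"
        except socket.timeout:
            interim = b""

        if interim.startswith(b"HTTP/1.1 100"):
            sock.sendall(body)         # got the go-ahead; now send the body
            final = sock.recv(4096)    # the final response must still arrive
        elif interim:
            final = interim            # no 100; the final response came directly
        else:
            sock.sendall(body)         # timed out waiting; send the body anyway
            final = sock.recv(4096)

        print(final.split(b"\r\n", 1)[0].decode())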

Section 13.11 says:
    This does not prevent
    a cache from sending a 100 (Continue) response before the inbound
    server has replied.
This is permission for the proxy to send the 100 (Continue), but to me it
is quite clear that the origin server is the real intended target of the
mechanism. The proxy is not required to send the 100 (Continue).
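
Read that way, the proxy's option looks roughly like the following relay
sketch (Python, hypothetical port and origin name, one connection, no
error handling): the 100 (Continue) sent to the client is optional and
locally generated, while the final response is whatever comes back from
the inbound server.

    import socket

    # Toy single-connection relay, only to show the section 13.11 permission:
    # the proxy MAY send its own 100 (Continue) before the inbound server has
    # replied, but the final response it passes back still comes from the origin.

    LISTEN_PORT = 8080
    ORIGIN = ("origin.example.com", 80)    # hypothetical inbound (origin) server

    with socket.socket() as listener:
        listener.bind(("", LISTEN_PORT))
        listener.listen(1)
        client, _ = listener.accept()
        with client:
            request = client.recv(65536)   # request headers (and body), simplified

            # Permitted, not required: pace the client before the origin answers.
            client.sendall(b"HTTP/1.1 100 Continue\r\n\r\n")

            with socket.create_connection(ORIGIN) as upstream:
                upstream.sendall(request)      # relay toward the origin server
                final = upstream.recv(65536)   # the origin's final response
            client.sendall(final)              # which must be forwarded back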

In much of the recent discussion that resulted in my proposal, it seemed
to me that the contributors were clearly thinking of the 100 (Continue)
mechanism as a pacing control between the client and the origin server.

Hence, I think that if that isn't the intent of the HTTP/1.1 protocol,
we have a bigger issue than has surfaced previously.

Dave Morris

Received on Friday, 4 July 1997 13:14:12 UTC