W3C home > Mailing lists > Public > ietf-http-wg-old@w3.org > May to August 1997

RE: Is 100-Continue hop-by-hop?

From: Yaron Goland <yarong@microsoft.com>
Date: Wed, 9 Jul 1997 17:21:03 -0700
Message-Id: <11352BDEEB92CF119F3F00805F14F48503187B12@RED-44-MSG.dns.microsoft.com>
To: 'Jeffrey Mogul' <mogul@pa.dec.com>, "David W. Morris" <dwm@xpasc.com>
Cc: http-wg@cuckoo.hpl.hp.com
X-Mailing-List: <http-wg@cuckoo.hpl.hp.com> archive/latest/3707

On 100 being hop-by-hop, I would also throw in the following scenario
from DAV land:

A client executes a COPY on a container with a large number of members.
The user agent will want to be able to provide update information on
how the copy is progressing, rather than just sitting there for a few
minutes while the procedure is underway.  100 (Continue) responses are
perfect for this scenario: the server sends a series of them, each
with a header that provides status information, say percent done or a
list of successfully copied resources.

If 100s are treated as hop by hop then the information may not get past
the proxy to the client.
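The progress idea above can be sketched roughly as follows. This is a hypothetical illustration only: the "Copy-Progress" header name is invented here and appears in no spec, and a real server would write these responses to the connection rather than build strings.

```python
# Sketch of a server emitting a series of 100 (Continue) interim
# responses during a long DAV COPY, each carrying progress information.
# The "Copy-Progress" header name is invented for illustration.

def progress_response(percent_done):
    """Format one interim 100 response with a hypothetical progress header."""
    return (
        "HTTP/1.1 100 Continue\r\n"
        f"Copy-Progress: {percent_done}%\r\n"
        "\r\n"
    )

def progress_stream(checkpoints):
    """One interim response per checkpoint while the COPY runs."""
    return [progress_response(p) for p in checkpoints]
```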


> -----Original Message-----
> From:	Jeffrey Mogul [SMTP:mogul@pa.dec.com]
> Sent:	Monday, July 07, 1997 3:35 PM
> To:	David W. Morris
> Cc:	http-wg@cuckoo.hpl.hp.com
> Subject:	Is 100-Continue hop-by-hop?
> Recap: Dave Morris has proposed a new "Expect" request-header
> to declare a client's intention to wait for a 100 (Continue)
> response.
> Koen Holtman remarked that "Expect" therefore was a hop-by-hop
> header, because the 100 (Continue) mechanism was hop-by-hop.
> I initially agreed with Koen.
> Dave replied
>     I have just reviewed RFC 2068 and find no indication that
>     100 (Continue) is a hop-by-hop mechanism.
> Presumably, we are all basically talking about section 8.2,
> "Message Transmission Requirements."
> Dave writes:
>     We use the term server when we don't choose to differentiate
>     between proxies and servers.
> As one of the primary authors of the text in section 8.2, I can
> assure you that this is exactly what I intended, and almost certainly
> what the other authors of this section intended.  The term
> "origin server" does not appear in section 8.2, and I believe
> our original conception was that this was a hop-by-hop mechanism.
> However, section 8.2 is ambiguous as to whether the 100 (Continue)
> mechanism should be end-to-end or hop-by-hop, when a proxy
> is involved.  I'm not sure we ever really thought this through.
> Note that section 8.2 does say:
>    If an HTTP/1.1 client has not seen an HTTP/1.1 or later response
>    from the server, it should assume that the server implements
>    HTTP/1.0 or older and will not use the 100 (Continue) response.
> Since the HTTP version number is most definitely a hop-by-hop
> mechanism (see RFC2145), this strongly implies some sort of hop-by-hop
> behavior for the two-phase mechanism.
> After puzzling over this issue for a while, I've pretty much
> convinced myself that
> 	(1) there are times when the 100 (Continue) mechanism
> 	must be interpreted on each hop, or it simply won't work.
> 	(2) there are times when an end-to-end two-phase
> 	mechanism is useful (maybe even necessary)
> 	(3) "Connection: expect" is unnecessary.
> To elaborate:
> (1) there are times when the 100 (Continue) mechanism must be
> interpreted on each hop, or it simply won't work:
> Consider the case of an HTTP/1.1 user-agent (C1) talking via an
> HTTP/1.1 proxy (P1) to an HTTP/1.0 origin server (S0).  (Here, I use
> "HTTP/1.1" to mean "compliant with our ultimate spec", not necessarily
> "compliant with RFC2068".)
> If the user-agent C1 is expecting an end-to-end 100 (Continue)
> response from S0, it's going to be disappointed.  Therefore, proxy P1
> cannot blindly forward an "Expect: 100-continue" request-header to
> the origin server S0.  It has to figure out whether S0 is going to
> honor "Expect: 100-continue" before it can forward it, or else we'll
> end up in a deadlock (C1 waiting for 100, S0 waiting for the request
> body).
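The decision P1 has to make here can be sketched as a small function. This is a sketch under assumptions (headers as a dict, versions as tuples, neither from RFC 2068); one way out of the deadlock is for the proxy to strip the Expect and acknowledge locally, along the lines of the section 13.11 wording quoted further down.

```python
# Sketch of P1's choice: never blindly forward "Expect: 100-continue"
# toward a server believed to be HTTP/1.0.  Header dict and version
# tuples are assumptions of this sketch, not anything in RFC 2068.

def handle_expect(request_headers, next_hop_version):
    """Return (headers to forward, whether P1 should send 100 itself).

    If S0 is HTTP/1.0 it will never send 100, so forwarding the Expect
    unchanged deadlocks: C1 waits for 100, S0 waits for the body.  One
    escape is for the proxy to strip the header and acknowledge locally.
    """
    headers = dict(request_headers)
    send_local_100 = False
    if (headers.get("Expect", "").lower() == "100-continue"
            and next_hop_version < (1, 1)):
        del headers["Expect"]
        send_local_100 = True
    return headers, send_local_100
```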
> We may also find a situation where an HTTP/1.0 client (C0) is talking
> via an HTTP/1.1 proxy (P1) to an HTTP/1.1 origin server (S1), and (for
> whatever reason) the proxy designer wants to use a two-phase mechanism
> (e.g., bandwidth between P1 and S1 is expensive).  In this case, P1
> will be both generating "Expect: 100-continue" locally, and consuming
> "100 (Continue)" responses locally.
> So we would like Proxy P1 to be involved in the two-phase mechanism;
> it should not blindly forward "Expect: 100-continue".
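The C0 -> P1 -> S1 case above can be sketched the same way. A hypothetical illustration, with the same assumed shapes as before: the proxy both generates the Expect on the upstream link and consumes the resulting 100s locally, since its HTTP/1.0 client would not understand them.

```python
# Sketch of the C0 -> P1 -> S1 case: the proxy itself uses the
# two-phase mechanism on the expensive upstream link, even though its
# HTTP/1.0 client knows nothing about it.  Shapes are assumptions.

def upstream_headers(client_version, client_headers):
    """P1 adds its own Expect when its client is HTTP/1.0."""
    headers = dict(client_headers)
    if client_version < (1, 1):
        headers["Expect"] = "100-continue"  # generated by the proxy
    return headers

def forward_status_to_client(status_line, client_version):
    """True if the status may be passed through; 100s are consumed
    locally for an HTTP/1.0 client, which would not understand them."""
    code = int(status_line.split()[1])
    return not (code == 100 and client_version < (1, 1))
```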
> (2) there are times when an end-to-end two-phase mechanism is useful
> (maybe even necessary):
> Dave writes:
>     In much of the recent discussion which resulted in my proposal,
>     it seemed that the contributors were clearly thinking in terms
>     of the 100 Continue mechanism as a pacing control between the
>     client and the origin server.
> I.e., a client (e.g., on a slow link) might want to be sure that the
> ultimate origin server will accept the request headers before
> it sends the request body.  E.g., if the Authorization on the
> request could fail.
> So we cannot simply say that the intervening proxy always locally
> generates an immediate 100 Continue response to its client, before
> finding out if the origin server will accept the request headers.
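The client side of that pacing can be sketched in a few lines. A hypothetical illustration only: parse the first status line and commit the (possibly large) body only on a 100, so a failure such as the 401 in the Authorization example costs nothing but headers.

```python
# Sketch of the client side of end-to-end pacing: send the body only
# when the origin server has accepted the request headers.  A failure
# such as 401 means the pacing saved the whole upload.

def should_send_body(status_line):
    """True only for an interim 100 (Continue) status line."""
    code = int(status_line.split()[1])
    return code == 100
```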
> Dave points out that section 13.11 says:
> 	This does not prevent a cache from sending a 100 (Continue)
> 	response before the inbound server has replied.
> As written, it clearly implies that this is a hop-by-hop mechanism, or
> how else could a proxy cache send the 100 response before it gets it?
> However, it's quite possible that this is a bug in RFC2068
> (and I almost certainly wrote all of 13.11).  It's not clear
> how useful the two-phase mechanism would be if any proxy could
> arbitrarily short-circuit it.
> (3) "Connection: expect" is unnecessary.
> Suppose that the user-agent client does indeed want end-to-end
> behavior for the two-phase mechanism.  Let's assume that there is a
> way to specify this (e.g., "Expect: 100-end2end", although we may
> find that this is what "Expect: 100-continue" really means).
> Then, we have these possible situations:
> 	(1) the proxy is HTTP/1.0, and understands neither
> 	"Expect" nor 100.  It will blindly forward the Expect,
> 	will probably forward the "100", and might even drop the
> 	connection before the actual response arrives.  Section
> 	8.2 says to retry in this case, without waiting for 100.
> 	"Connection: expect" is irrelevant, since the proxy
> 	wouldn't obey it anyway.
> 	(2) the proxy complies with RFC2068.  If we insist on
> 	"Connection: expect", then it will not forward the
> 	Expect header, but it won't give any immediate error
> 	status.
> 	    (2A) The next-hop server is HTTP/1.1, and sends
> 	    the 100 response; the proxy forwards it, and
> 	    the user-agent sends the request body.  Success.
> 	    (2B) The next-hop server is HTTP/1.0, and will
> 	    wait for the request body.  Deadlock?
> 	If we did not insist on "Connection: expect", then
> 	the proxy would simply forward the Expect header.
> 	Either way, we would get the same 2A and 2B cases.
> 	(3) the proxy is HTTP/1.1, and understands both Expect
> 	and 100.  Although the "Connection: expect" tells the
> 	proxy to strip the Expect from the incoming message,
> 	the Proxy is certainly allowed to add an Expect header
> 	to the message it sends to the next-hop server.
> 	    (3A) the proxy knows that the next hop server
> 	    is HTTP/1.1, so it can honor the client's expectation,
> 	    and it simply sends along the request (after
> 	    restoring the Expect header).  Success.
> 	    (3B) the proxy knows that the next-hop server is
> 	    HTTP/1.0, and so it should respond to its client
> 	    with a failure status (i.e., the expectation cannot
> 	    be met).  Detected failure.
> 	    (3C) the proxy doesn't know the version number of
> 	    the next-hop server.  If it rejects the request,
> 	    then it might be preventing communication with
> 	    a perfectly good HTTP/1.1 server.  But if it
> 	    forwards the request, we might get a deadlock.
> 	If we did not insist on "Connection: expect", then
> 	the proxy would simply forward the Expect header.
> 	Either way, we would get the same 3A, 3B, and 3C cases.
> 	(But the proxy implementation might be simpler.)
> It might seem reasonable to assume that the client should not be
> sending "Expect: 100-end2end" if it doesn't know that the origin
> server understands the two-phase mechanism.  However, because we
> don't have an explicit end-to-end HTTP version number, I can't
> see a formal way of doing this.  In particular, I can't see any
> way for the *proxy* to know if its client knows whether the
> origin server will send 100 or not.
> However, it does seem reasonable for a user-agent to refrain from
> sending "Expect: 100-end2end" (or "Expect: 100-hopbyhop") to
> a next-hop proxy that it knows is HTTP/1.0, since in that case
> it would be unlikely for this to work.  And the client probably
> isn't going to be doing PUT/POST requests via a proxy it's never
> used before (since it probably has already communicated via that
> proxy to get the relevant HTML form, or whatever).
> It also seems reasonable for a client that is expecting a 100
> response from an origin server to use a relatively short timeout
> on its first attempt to wait for such a response, and if the
> timeout expires, just send the request body without waiting.
> Putting this all together,
> 	(R1) All HTTP/1.1 proxies, and all clients that might
> 	use the two-phase mechanism, ought to keep a cache
> 	storing the HTTP version numbers received from the
> 	servers they have contacted recently.
> 	(R2) HTTP/1.1 user-agents that send "Expect: 100-end2end"
> 	should still use a relatively short timeout before going
> 	ahead with the request body, unless they have already
> 	seen a "100" response from the given origin server.
> 	This avoids deadlock, and means that in case 3C, the
> 	proxy *should* forward the request to a server whose
> 	version it doesn't know (which will be a rare situation,
> 	because of R1).
> 	(R3) HTTP/1.1 proxies that receive "Expect: 100-end2end"
> 	should respond with a failure status if the next-hop
> 	server is known to be HTTP/1.0.
> 	(R4) It doesn't seem to make much of a difference
> 	whether we insist on "Connection: expect" or not.
> 	(Unless I've blown the case analysis; would someone
> 	like to present a counterexample?).  Therefore, Expect
> 	should not be in the list of hop-by-hop headers.
> 	(R5) Section 8.2 clearly needs some more reworking.
> 	Section 13.11 probably needs to have an offending
> 	sentence removed.
> -Jeff
> P.S.: Do we really also need an "Expect: 100-hopbyhop"?  I'm not sure.
> However, the main difference is that it would be much simpler to
> implement.  The receiving server (proxy or otherwise) could
> simply send a 100 Continue response whenever it pleased, or it could
> wait for the next-hop server to send the 100.  But I'm not sure
> it would serve any real purpose.
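Recommendations R1 through R3 above fit together as a small sketch. The cache shape and the string return values are assumptions of this illustration, not anything from the message or RFC 2068.

```python
# Sketch tying together R1-R3: a per-server version cache, and a proxy
# that fails fast only when the next hop is known to be HTTP/1.0.

class VersionCache:
    """R1: remember the HTTP version last received from each server."""

    def __init__(self):
        self._versions = {}

    def record(self, host, version):
        self._versions[host] = version

    def lookup(self, host):
        return self._versions.get(host)  # None: unknown (case 3C)


def proxy_decision(cache, host):
    """R2/R3: reject for known-1.0 servers, forward otherwise (even
    when the version is unknown, since the client's short timeout
    breaks any deadlock)."""
    version = cache.lookup(host)
    if version is not None and version < (1, 1):
        return "fail"     # R3: the expectation cannot be met
    return "forward"      # known 1.1, or the unknown case 3C
```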
Received on Wednesday, 9 July 1997 17:22:57 UTC
