- From: Scott Lawrence <lawrence@agranat.com>
- Date: Tue, 10 Jun 1997 16:30:17 -0400
- To: http-wg@cuckoo.hpl.hp.com
I never would have dreamed that this would generate so much heat.
>>>>> "HFN" == Henrik Frystyk Nielsen writes:
HFN> I think, we are coming down on the side saying that a client SHOULD wait
HFN> for a 100 (Continue) code before sending the body but can send the whole
HFN> thing if it believes that the server will react properly.
>>>>> "JF" == John Franks <john@math.nwu.edu> writes:
JF> Currently there are many POSTs and most of them are fairly small. This
JF> could change, but more likely large file submission will be done with
JF> PUT.
The semantics of PUT and POST of a file are quite different; both
will be used, but for different things. One example I know of now is
that the Internet Printing Protocol working group is looking at POST
as the mechanism for submitting multipart operations, one or more
parts of which is a file to be printed.
JF> Requiring a wait for 100 Continue with each POST will likely at
JF> least double the time and bandwidth of POSTS.
You weaken your objection by overstating it: a 100 Continue response
is very small (most other headers, even Date, are not required with
it). No additional transmission is required of the client, so the net
effect on bandwidth is just the size of the 100 Continue response
itself, which can be as little as 16 bytes ("HTTP/1.1 100" CRLF CRLF).
The additional time is normally one round trip, or at worst the
timeout before the client decides to just go ahead anyway.
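To make that cost concrete, here is a rough client-side sketch in
Python; the host, path, form body, and two-second timeout are all made
up for illustration, and it sketches the behavior described above
rather than anything normative from the draft:

  import socket

  BODY = b"name=value"
  HEADERS = (b"POST /form.html HTTP/1.1\r\n"
             b"Host: example.com\r\n"
             b"Content-Type: application/x-www-form-urlencoded\r\n"
             b"Content-Length: " + str(len(BODY)).encode() +
             b"\r\n\r\n")

  s = socket.create_connection(("example.com", 80))
  s.sendall(HEADERS)            # send the headers, hold the body back

  # Wait briefly for an interim response; on timeout, just go ahead.
  # Note that "HTTP/1.1 100" CRLF CRLF is only 16 bytes on the wire.
  s.settimeout(2.0)
  try:
      interim = s.recv(4096)
  except socket.timeout:
      interim = b""             # nothing arrived; proceed anyway

  if interim == b"" or interim.startswith(b"HTTP/1.1 1"):
      s.sendall(BODY)           # extra cost: one round trip or the timeout
      # ... read the final response on the same connection ...
  # A final status in interim (e.g. 401) lets the client skip the body
  # entirely and retry, as in the exchange below.
  s.close()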
JF> Is there really evidence that this is a reasonable price to pay?
JF> Are HTTP/1.0 POSTS so broken that this draconian measure is called
JF> for? I have not heard of any complaints from service providers
JF> that HTTP/1.0 POSTS are a major problem. Are there any such
JF> complaints?
Not from a service provider, but from a server vendor.
I posted some discussion of this a while ago, which Henrik provided
a pointer to, but I'll repeat part of it now.
Our server provides the capability to use different access control
for serving vs. submitting a form. This is handy in that the same
page can be used to display the current state of the server and to
modify it by changing something and submitting it. Since the
submission may require new authentication, it _saves_ bandwidth if
the server can send the '401 Unauthorized' response before the POST
body has been sent.
   Client                                Server
     |                                      |
  >1 |-> GET /form.html HTTP/1.1 ---------->|
     |                                      |
     |<-------------- HTTP/1.1 200 Ok <-----|
     |          body contains form          |
     |                                      |
  >2 |-> POST /form.html HTTP/1.1 --------->|
     |   (no authorization, no body)        |
     |                                      |
     |<----- HTTP/1.1 401 Unauthorized <----|
     |                                      |
     |                                      |
  >3 |-> POST /form.html HTTP/1.1 --------->|
     |   (with authorization)               |
     |                                      |
     |<-------- HTTP/1.1 100 Continue <-----|
     |                                      |
     |-> (form data) ---------------------->|
     |                                      |
     |<-------------- HTTP/1.1 200 Ok <-----|
     |                 ...                  |
Request 1 is sent for a resource which contains a form, but which is
not protected by any realm. The resource is returned, and the
client has the opportunity to note that the server is 1.1.
Request 2 is sent to post the form, but submission of the form is
protected by some realm, so this request is rejected. The server
can determine this before the request body is sent.
Request 3 is the retry of 2 with authorization information; after
the headers are received, the server returns the 100 Continue to
indicate to the client that it is ok to proceed with the request
body.
If the client had sent the form data immediately with Request 2, it
would simply have been discarded by the server - a waste of time and
bandwidth, since the body must be resent, with the authorization
information, as part of Request 3.
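For concreteness, here is a made-up, minimal Python sketch of a server
acting as in the exchange above. This is not our server's code; the
port, realm name, and header handling are purely illustrative. The
point is only that the 401 decision is made from the request headers
alone, before any body has been read:

  import socket

  def handle(conn):
      # Read only up to the end of the request headers (the blank line).
      data = b""
      while b"\r\n\r\n" not in data:
          chunk = conn.recv(4096)
          if not chunk:
              return
          data += chunk
      head, _, rest = data.partition(b"\r\n\r\n")
      headers = {}
      for line in head.decode("latin-1").split("\r\n")[1:]:
          name, _, value = line.partition(":")
          headers[name.strip().lower()] = value.strip()

      # Decide from the headers alone, before any body is read.
      if "authorization" not in headers:
          conn.sendall(b"HTTP/1.1 401 Unauthorized\r\n"
                       b'WWW-Authenticate: Basic realm="config"\r\n'
                       b"Content-Length: 0\r\n\r\n")
          return                    # the body is never read

      # Credentials are present (checking them is elided here); tell
      # the client to go ahead with the body.
      conn.sendall(b"HTTP/1.1 100 Continue\r\n\r\n")
      length = int(headers.get("content-length", "0"))
      body = rest
      while len(body) < length:
          body += conn.recv(4096)
      # ... act on the form data, then send the final response ...
      conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")

  srv = socket.socket()
  srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
  srv.bind(("", 8080))
  srv.listen(1)
  while True:
      conn, _ = srv.accept()
      handle(conn)
      conn.close()

A client behaving like the earlier sketch would see the 401 before it
ever committed the form data to the wire.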
JF> The proposed change seems likely to cause a dramatic degradation of
JF> service. I suspect that it will always be fairly rare for a server to
JF> reject a POST. Do we really have evidence that requiring 100 Continue
JF> for every POST is a good thing?
Dramatic? What is the evidence for that?
--
Scott Lawrence EmWeb Embedded Server <lawrence@agranat.com>
Agranat Systems, Inc. Engineering http://www.agranat.com/