- From: Henrik Frystyk Nielsen <frystyk@w3.org>
- Date: Tue, 10 Jun 1997 14:10:12 -0400
- To: John Franks <john@math.nwu.edu>
- Cc: "David W. Morris" <dwm@xpasc.com>, http-wg@cuckoo.hpl.hp.com, lawrence@agranat.com, rlgray@raleigh.ibm.com
At 12:50 PM 6/10/97 -0500, John Franks wrote:

>> Yes, but unfortunately, HTTP/1.0 is broken and this is the only way to get
>> PUT to work reliably. If you have ever tried to PUT across the Atlantic
>> then you would know what I am talking about.
>
>Could you explain why trans-Atlantic POSTs would be more reliable with
>100 Continue? I honestly don't understand.

A typical HTTP request header is small enough not to force a TCP reset,
which can cause the HTTP response to get lost. As I said in my first mail,
the problem is illustrated in the connection draft, section 8.

The scenario is as follows: an HTTP/1.1 client talking to an HTTP/1.1
server starts pipelining a batch of requests, for example 15, on an open
TCP connection. The server decides that it will not serve more than 5
requests per connection and closes the TCP connection in both directions
after it has successfully served the first five requests. The remaining 10
requests, which the client has already sent, will arrive on a closed port
on the server along with client-generated TCP ACK packets. This "extra"
data causes the server's TCP to issue a reset, which makes the client's
TCP stack pass the last ACK'ed packet to the client application and
discard all other packets. This means that HTTP responses that are either
being received or have already been received successfully but haven't been
ACK'ed will be dropped by the client's TCP. In this situation the client
does not have any means of finding out which HTTP messages were successful
or even why the server closed the connection. The server may have
generated a "Connection: close" header in its final response, but that
header may have been lost due to the reset.

In general, the problem can occur whenever the client sends a lot of data
and the server closes the connection before having read all of it.

>Here is my point. Superficially HTTP/1.0 POSTs seem to work well.
>The proposed change seems likely to cause a dramatic degradation of
>service. I suspect that it will always be fairly rare for a server to
>reject a POST. Do we really have evidence that requiring 100 Continue
>for every POST is a good thing?

I don't believe I said that - the proposed resolution was: a client SHOULD
wait for a 100 (Continue) code before sending the body but can send the
whole thing if it believes that the server will react properly. This
covers exactly the situation of small POST requests that you are referring
to.

I believe that being conservative in the specification - guaranteeing
correct behavior while allowing optimized applications - is better than
the other way round.

Thanks,

Henrik

--
Henrik Frystyk Nielsen, <frystyk@w3.org>
World Wide Web Consortium
http://www.w3.org/People/Frystyk
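
For illustration, here is a minimal, self-contained Python sketch of the
pipelining scenario described above, using a toy local server (this code is
not from the message or the draft; the request count, timing, and the use
of SO_LINGER to force the reset are illustrative assumptions):

    import socket
    import struct
    import threading
    import time

    RESPONSE = b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok"

    def toy_server(listener):
        # Serve only 5 of the pipelined requests, then close abruptly.
        conn, _ = listener.accept()
        conn.recv(4096)                  # read whatever has arrived so far
        for _ in range(5):
            conn.sendall(RESPONSE)
        # SO_LINGER(1, 0) makes close() emit a RST (Linux-style linger
        # struct); unread pipelined requests still arriving at a closed
        # port would trigger one on most stacks anyway.
        conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                        struct.pack("ii", 1, 0))
        conn.close()

    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))
    listener.listen(1)
    threading.Thread(target=toy_server, args=(listener,),
                     daemon=True).start()

    client = socket.create_connection(listener.getsockname())
    request = b"GET / HTTP/1.1\r\nHost: localhost\r\n\r\n"
    client.sendall(request * 15)         # pipeline 15 requests at once
    time.sleep(0.5)                      # let the reset race the responses
    try:
        while True:
            data = client.recv(4096)
            if not data:
                break
            print(data.decode("latin-1"))
    except ConnectionResetError:
        # Depending on the TCP stack, responses already buffered by the
        # client may be discarded here, so the client cannot tell which
        # of the 15 requests actually succeeded.
        print("connection reset by server")

Whether the already-received responses survive the reset varies by TCP
implementation, which is exactly why the client cannot rely on them.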
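
And a sketch of the proposed client behavior - wait briefly for 100
(Continue), but fall back to sending the body anyway - using the Expect:
100-continue header as it was later standardized in HTTP/1.1; the host,
path, body, and timeout are illustrative assumptions, not part of the
proposal:

    import socket

    HOST, PORT, PATH = "example.org", 80, "/upload"   # illustrative only
    BODY = b"x" * 100000                              # a large entity body

    sock = socket.create_connection((HOST, PORT))
    sock.sendall(("PUT %s HTTP/1.1\r\n"
                  "Host: %s\r\n"
                  "Content-Length: %d\r\n"
                  "Expect: 100-continue\r\n"
                  "\r\n" % (PATH, HOST, len(BODY))).encode("ascii"))

    # SHOULD wait for 100 (Continue) before sending the body, but MAY send
    # it anyway if the server is believed to behave properly (here: after
    # a short timeout with no interim response).
    sock.settimeout(3.0)
    try:
        interim = sock.recv(4096)
        if interim.startswith(b"HTTP/1.1 100"):
            sock.sendall(BODY)           # server agreed: send the body
        else:
            # A final status (e.g. 403 or 413): the body is never sent,
            # so no large payload is in flight when the server closes.
            print(interim.decode("latin-1"))
            sock.close()
            raise SystemExit
    except socket.timeout:
        sock.sendall(BODY)               # optimistic fallback
    sock.settimeout(None)
    print(sock.recv(4096).decode("latin-1"))
    sock.close()

Small POSTs can simply send headers and body together, as today; the wait
only matters when the body is large enough for the reset scenario above.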
Received on Tuesday, 10 June 1997 11:13:24 UTC