RE: ISSUE: MUST a client wait for 100 when doing PUT or POST requests?

This example seems to require the client to wait for 100 only if it is
pipelining -- otherwise it won't know which (possibly non-idempotent)
requests have or haven't been processed, which is a correctness problem.
The other scenarios show only that efficiency might suffer if the client
didn't wait for a 100.

Are there any other scenarios where _correctness_ is affected by not
waiting?
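
For concreteness, here is a rough sketch in Python of the conservative
behavior under discussion -- send the request head, then hold the body
until the server answers 100 (Continue).  The host and resource are
placeholders, not a real endpoint:

    import socket

    # Sketch only: example.com and /doc are placeholders.
    body = b"new resource contents"
    head = ("PUT /doc HTTP/1.1\r\n"
            "Host: example.com\r\n"
            "Content-Length: %d\r\n"
            "\r\n" % len(body)).encode("ascii")

    sock = socket.create_connection(("example.com", 80))
    sock.sendall(head)

    # Only commit the (non-idempotent) body once the server says 100.
    first = sock.recv(4096)
    if first.startswith(b"HTTP/1.1 100"):
        sock.sendall(body)
    else:
        # A final status or a close arrived before any body was sent,
        # so the client knows the request was never acted upon.
        print("server answered before the body went out:", first[:64])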

> ----------
> From: 	Henrik Frystyk Nielsen[SMTP:frystyk@w3.org]
> Sent: 	Tuesday, June 10, 1997 11:10 AM
> To: 	John Franks
> Cc: 	David W. Morris; http-wg@cuckoo.hpl.hp.com;
> lawrence@agranat.com; rlgray@raleigh.ibm.com
> Subject: 	Re: ISSUE: MUST a client wait for 100 when doing PUT or POST requests?
> 
> At 12:50 PM 6/10/97 -0500, John Franks wrote:
> 
> >> Yes, but unfortunately, HTTP/1.0 is broken and this is the only way
> >> to get PUT to work reliably. If you have ever tried to PUT across
> >> the Atlantic then you would know what I am talking about.
> >> 
> >
> >Could you explain why trans-Atlantic POSTs would be more reliable
> >with 100 Continue?  I honestly don't understand.
> 
> A typical HTTP request header is small enough not to force a TCP
> reset, which could cause the HTTP response to get lost. As I said in
> my first mail, the problem is illustrated in the connection draft,
> section 8:
> 
>    The scenario is as follows: an HTTP/1.1 client talking to an
>    HTTP/1.1 server starts pipelining a batch of requests, for example
>    15 on an open TCP connection.  The server decides that it will not
>    serve more than 5 requests per connection and closes the TCP
>    connection in both directions after it successfully has served the
>    first five requests.  The remaining 10 requests that are already
>    sent from the client will along with client generated TCP ACK
>    packets arrive on a closed port on the server.  This "extra" data
>    causes the server's TCP to issue a reset which makes the client
>    TCP stack pass the last ACK'ed packet to the client application
>    and discard all other packets.  This means that HTTP responses
>    that are either being received or already have been received
>    successfully but haven't been ACK'ed will be dropped by the client
>    TCP.  In this situation the client does not have any means of
>    finding out which HTTP messages were successful or even why the
>    server closed the connection.  The server may have generated a
> 
> In general, the problem can occur if the client sends a lot of data
> and the server closes the connection before having read all of it.
> 
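To make the failure mode concrete, here is a rough sketch in Python of
the draft's scenario as the client experiences it.  The host is a
placeholder, and the response counting is deliberately crude:

    import socket

    # Pipeline 15 requests on one connection; the placeholder server
    # is assumed to close the connection early.  After the reset, the
    # client knows only how many complete responses it happened to
    # read -- not which of the remaining requests were ever processed.
    reqs = b"".join(b"GET /item%d HTTP/1.1\r\nHost: example.com\r\n\r\n" % i
                    for i in range(15))
    sock = socket.create_connection(("example.com", 80))
    sock.sendall(reqs)          # all 15 requests go out back to back

    seen = b""
    try:
        while True:
            data = sock.recv(4096)
            if not data:
                break
            seen += data
    except ConnectionResetError:
        # The reset may also have discarded responses already in
        # flight, so even this count can understate what was served.
        pass
    # Crude count; real parsing would delimit each message properly.
    print("responses seen:", seen.count(b"HTTP/1.1 "))
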
> >Here is my point.  Superficially, HTTP/1.0 POSTs seem to work well.
> >The proposed change seems likely to cause a dramatic degradation of
> >service.  I suspect that it will always be fairly rare for a server
> >to reject a POST.  Do we really have evidence that requiring 100
> >Continue for every POST is a good thing?
> 
> I don't believe I said that - the proposed resolution was:
> 
>    a client SHOULD wait for a 100 (Continue) code before sending the
>    body but can send the whole thing if it believes that the server
>    will react properly.
> 
> This covers exactly the situation of small POST requests that you are
> referring to. I believe that being conservative in the specification,
> guaranteeing correct behavior while still allowing optimized
> applications, is better than the other way round.
> 
> Thanks,
> 
> Henrik
> --
> Henrik Frystyk Nielsen, <frystyk@w3.org>
> World Wide Web Consortium
> http://www.w3.org/People/Frystyk
> 
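The escape hatch in the proposed resolution ("can send the whole thing
if it believes that the server will react properly") is naturally
implemented as a short wait: listen briefly for the 100, then send the
body anyway.  A sketch, again in Python with a placeholder host and an
arbitrary 2-second timeout; this is roughly the behavior later
standardized for Expect: 100-continue clients:

    import socket

    body = b"field=value"
    head = ("POST /form HTTP/1.1\r\n"
            "Host: example.com\r\n"
            "Content-Length: %d\r\n"
            "\r\n" % len(body)).encode("ascii")

    sock = socket.create_connection(("example.com", 80))
    sock.sendall(head)
    sock.settimeout(2.0)
    try:
        first = sock.recv(4096)     # hope for "HTTP/1.1 100 Continue"
    except socket.timeout:
        first = None                # no interim response; proceed anyway
    if first is None or first.startswith(b"HTTP/1.1 100"):
        sock.sendall(body)          # commit the body
    else:
        print("final response arrived first:", first.split(b"\r\n")[0])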

Received on Tuesday, 10 June 1997 16:52:16 UTC