- From: John Franks <john@math.nwu.edu>
- Date: Tue, 10 Jun 1997 12:50:04 -0500 (CDT)
- To: Henrik Frystyk Nielsen <frystyk@w3.org>
- Cc: "David W. Morris" <dwm@xpasc.com>, http-wg@cuckoo.hpl.hp.com, lawrence@agranat.com, rlgray@raleigh.ibm.com
On Tue, 10 Jun 1997, Henrik Frystyk Nielsen wrote:

> >> I think, we are coming down on the side saying that a client SHOULD wait
> >> for a 100 (Continue) code before sending the body but can send the whole
> >> thing if it believes that the server will react properly.
>
> At 09:58 AM 6/10/97 -0700, David W. Morris wrote:
>
> >The whole notion of insertion of arbitrary delays offends me. The
> >randomness of network latency makes that absurd.

I agree.

On Tue, 10 Jun 1997, Henrik Frystyk Nielsen wrote:

> Yes, but unfortunately, HTTP/1.0 is broken and this is the only way to get
> PUT to work reliably. If you have ever tried to PUT across the Atlantic
> then you would know what I am talking about.

Could you explain why trans-Atlantic POSTs would be more reliable with
100 Continue? I honestly don't understand.

Currently there are many POSTs, and most of them are fairly small. This
could change, but more likely large file submission will be done with
PUT. Requiring a wait for 100 Continue with each POST will likely at
least double the time and bandwidth cost of POSTs. Is there really
evidence that this is a reasonable price to pay? Are HTTP/1.0 POSTs so
broken that this draconian measure is called for? I have not heard of
any complaints from service providers that HTTP/1.0 POSTs are a major
problem. Are there any such complaints?

Here is my point. Superficially, HTTP/1.0 POSTs seem to work well. The
proposed change seems likely to cause a dramatic degradation of
service. I suspect that it will always be fairly rare for a server to
reject a POST. Do we really have evidence that requiring 100 Continue
for every POST is a good thing?

John Franks
Dept of Math. Northwestern University
john@math.nwu.edu
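For readers who want to see the round trip being debated, here is a minimal
sketch of the client side of the expect/continue handshake. It uses the
later HTTP/1.1 "Expect: 100-continue" request-header form for concreteness
(the 1997 drafts were still settling the exact mechanism), and the host,
path, and body below are hypothetical placeholders, not anything from this
thread.

    # Sketch of a client gating a large request body on 100 (Continue).
    # HOST, PATH, and BODY are hypothetical.
    import socket

    HOST, PORT, PATH = "example.com", 80, "/upload"
    BODY = b"x" * 100000  # a large entity body worth gating on 100 (Continue)

    request_head = (
        "PUT %s HTTP/1.1\r\n"
        "Host: %s\r\n"
        "Content-Length: %d\r\n"
        "Expect: 100-continue\r\n"
        "\r\n" % (PATH, HOST, len(BODY))
    ).encode("ascii")

    with socket.create_connection((HOST, PORT), timeout=10) as sock:
        sock.sendall(request_head)      # send headers only; hold the body back

        # This wait is the extra round trip at issue in the thread.  A real
        # client would fall back to sending the body after a timeout, since
        # an HTTP/1.0 server will never send 100 (Continue).
        sock.settimeout(5)
        try:
            interim = sock.recv(4096)
        except socket.timeout:
            interim = b""

        if not interim or interim.startswith(b"HTTP/1.1 100"):
            sock.sendall(BODY)          # server agreed (or stayed silent)
            final = sock.recv(4096)     # final status line and headers
        else:
            final = interim             # server rejected without seeing the body
        print(final.decode("latin-1", "replace"))

The recv() before the body is the round trip in question: for a small POST
the handshake roughly doubles wall-clock time, while for a large PUT it lets
a server reject the request before the client ships the whole body.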
Received on Tuesday, 10 June 1997 10:54:04 UTC