- From: David W. Morris <dwm@shell.portal.com>
- Date: Thu, 7 Dec 1995 22:19:35 -0800 (PST)
- To: http working group <http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com>
On Thu, 7 Dec 1995, Jeffrey Mogul wrote:

> One answer would be "it doesn't matter" (more precisely, "it's
> up to the server implementer"). Take your first example: if the
> client wants to PUT a zillion bytes to a location that requires
> authentication, then does it really matter why it fails? Either
> way, it can't be done.

Well, there is the interesting situation where the server requires authentication for a PUT AND also can't accept 1 MB of data. The way WWW/HTTP authentication works, the client may not know that authentication is required until attempting the PUT and getting the 401 Unauthorized response. HENCE, we have a situation where the client could be presented with two possible errors, one which it can possibly correct and one which it can never correct. In the normal transaction flow, authentication would be verified and then the semantics of the request would be verified. YET, what is the point of a server issuing an unauthorized response when it could/should/might know it is never going to process the transaction anyway?

As an end user, I'd be mightily annoyed if I did what I believed was valid, was delayed for the password prompt in response to the 401 Unauthorized, and possibly by some substantial delay because of the bytes in the pipe the first time, only to then be told that the size of the data to PUT was too big.

From a protocol perspective it needs to be valid to issue the failures in either order, because the size failure may be detected only when a processing 'CGI' program responds, while the server detects the unauthorized situation. It would be good, however, to encourage presentation of fatal (can't be overcome) errors before correctable failures, when possible.

To back up ... as I understand the discussion, the purpose of the two-phase send was to increase the probability that the client has a chance to receive the response. The impression was that some/many TCP implementations may close a connection in such a way that the response is flushed and never seen by the client. The second, lesser concern was that by using a short delay, there was less waste of bandwidth in the failure case.

I like your (Jeffrey Mogul) suggestion for requiring the explicit two-phase request from the client in the quiet-close case. I would further extend your suggestion to get rid of the timeout, or make it quite large, and only allow the client to continue when the response arrives (roughly as in the sketch below my signature).

Dave Morris
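P.S. To make the two-phase exchange I'm arguing for concrete, here is a rough client-side sketch in Python, purely for illustration. The interim go-ahead status, the bare-socket framing, and the status codes used for "too big" are my own assumptions for the sketch, not anything taken from the current draft:

```python
# Illustrative sketch of a two-phase PUT: the client sends only the
# request line and headers, then waits for the server's verdict before
# committing the large body.  No short timeout -- the client proceeds
# only when a response (go-ahead or failure) actually arrives.
import socket

def two_phase_put(host, path, body, credentials=None, port=80):
    # Phase 1: request line and headers only; the body is held back.
    headers = [
        "PUT %s HTTP/1.1" % path,
        "Host: %s" % host,
        "Content-Length: %d" % len(body),
    ]
    if credentials:
        headers.append("Authorization: Basic %s" % credentials)

    with socket.create_connection((host, port)) as sock:
        sock.sendall(("\r\n".join(headers) + "\r\n\r\n").encode("ascii"))

        # Wait (indefinitely, per the suggested extension) for the
        # server's preliminary answer and pull the status code out of it.
        preliminary = sock.recv(4096).decode("iso-8859-1")
        status = int(preliminary.split(None, 2)[1])

        if status == 401:
            return "unauthorized"        # correctable: retry with credentials
        if status >= 400:
            return "fatal %d" % status   # e.g. entity too large: give up now

        # Phase 2: the server gave an interim go-ahead, so send the body
        # and read the final response.
        sock.sendall(body)
        final = sock.recv(4096).decode("iso-8859-1")
        return final.splitlines()[0]
```

The point of the sketch is simply that none of the 1 MB body hits the wire until the server has had a chance to raise either the authentication failure or the fatal size failure.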
Received on Thursday, 7 December 1995 22:24:29 UTC