- From: Willy Tarreau <w@1wt.eu>
- Date: Wed, 24 Apr 2013 08:31:22 +0200
- To: "Adrien W. de Croy" <adrien@qbik.com>
- Cc: Mark Nottingham <mnot@mnot.net>, Amos Jeffries <squid3@treenet.co.nz>, "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>
Hi Adrien,

On Wed, Apr 24, 2013 at 04:39:16AM +0000, Adrien W. de Croy wrote:
> I'm really struggling to see what benefit can be derived by a client in
> knowing whether a server supports 100 continue or not. So to me
> Expects: 100-continue is a complete waste of space. I've never seen one
> so I guess implementors by and large agree.

The first place I saw lots of them (100% of the requests) was between
applications using web services. All the requests were POST and all of them
were using 100-continue. That's how I discovered that it was a non-final
status code and that haproxy didn't handle it properly at the time...

> Regardless of 100 continue being transmitted, the client has to send the
> payload if it wants to reuse the connection. The only early-out options
> involve closing the connection.

... or using chunked encoding.

> There was quite a lot of discussion about this in the past, and my
> understanding was that 100 continue couldn't be used to negotiate
> whether or not the payload would be sent.

But this can be quite useful with a webmail for example, where you don't
want to upload your mail with attached documents only to discover that your
session has expired and that you must upload it again!

> The outcome of this
> discussion was not satisfactory IMO, since the "answer" was for the
> client to send request bodies always chunked, and send a 0 chunk if it
> needed to abort early.

Yes indeed, this is the only reliable way of using it.

> This IMO is unsatisfactory because it does not indicate that the client
> didn't send the payload, and a whole heap of intermediary agents may act
> on that as if it were complete.
>
> So for me therefore there's still a hole in the spec around this -
> chunking doesn't have a way to indicate aborting the body. And there's
> no way to pre-authorize transmission of a request body.

It's not a big problem, because if the server says it rejects the request,
it will just drop the payload, so the payload can safely be transmitted and
then truncated.

> I don't see how a server can return a success status code to a message
> it didn't even receive yet.

It will only base its decision on credentials or anything else found in the
headers (eg: auth, cookies, advertised content-length, ...).

> Returning a 417 due to expectation not met
> is just extra noise and RTT, and the connection needs to be closed
> anyway or the payload sent.

Except it's sometimes hard for the client to stop uploading something it
has already started sending.

> So, what would we really lose if 100-continue were deprecated? and what
> would we gain.

First, it's the only way for the client to send non-idempotent requests over
existing connections without the risk that the connection expires during the
upload, leaving the client unsure whether the server could process them. If
you want to use a connection pool, you have no other choice.

Second, it's true that it's annoying in high-latency networks as it adds an
RTT. I think that clients could have a threshold on the amount of data below
which they don't use it (unless they're reusing an existing connection).

Regards,
Willy
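
P.S.: for anyone who hasn't played with it, here is a rough sketch of what
the client side can look like. It assumes Go's net/http, whose Transport
delays sending the body when the request carries an "Expect: 100-continue"
header and ExpectContinueTimeout is non-zero; the URL and payload size are
made up for illustration, not taken from any real deployment.

    package main

    import (
        "bytes"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Wait up to 1s for the server's interim "100 Continue" (or an
        // early final status such as 401/413/417) before sending the body.
        tr := &http.Transport{
            ExpectContinueTimeout: 1 * time.Second,
        }
        client := &http.Client{Transport: tr}

        // Pretend we have a large payload to upload (hypothetical 10 MB).
        body := bytes.NewReader(make([]byte, 10<<20))
        req, err := http.NewRequest("POST", "http://example.com/upload", body)
        if err != nil {
            panic(err)
        }
        // Ask the server to vet the headers (auth, cookies, advertised
        // content-length, ...) before we commit to transmitting the payload.
        req.Header.Set("Expect", "100-continue")

        resp, err := client.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println("final status:", resp.Status)
    }

If the server rejects the request based on the headers alone, the client has
only spent one RTT plus the headers instead of the whole payload, which is
the point of the mechanism when reusing pooled connections.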
Received on Wednesday, 24 April 2013 06:33:07 UTC