- From: Henrik Nordstrom <henrik@henriknordstrom.net>
- Date: Sat, 18 Jul 2009 22:56:22 +0200
- To: Adrien de Croy <adrien@qbik.com>
- Cc: HTTP Working Group <ietf-http-wg@w3.org>
lör 2009-07-18 klockan 16:58 +1200 skrev Adrien de Croy:

> First para regarding the purpose of the status is misleading. It
> implies one can avoid sending a message body.

Yes, indirectly, via the requirement of monitoring the connection for
errors etc.

> It doesn't go into the grisly details of how one would actually avoid
> sending the message body and the associated problems

Because this is specified elsewhere.

> a) try your POST with a Content-Length: 0 like IE does (which causes
> other problems)

Not in this context.

> b) use chunking for your request bodies and prematurely abort the send
> with a 0 chunk.

c) Closing the connection when it's determined that sending the message
body is undesirable.

> methods or send the whole message. If you're trying to use
> connection-oriented auth (like many millions of people are every day)
> then you're basically hosed

Which is why you should not use connection-oriented auth with requests
having Content-Length. I don't see how that's related to 100 Continue or
Expect. Connection-oriented auth is plainly broken in the
message-oriented context of HTTP.

> Under "Requirements for HTTP/1.1 origin servers"
>
> clause 3 contradicts the MUST requirement in clause 1. e.g.
>
>    o An origin server MAY omit a 100 (Continue) response if it has
>      already received some or all of the request body for the
>      corresponding request.
>
> conflicts with
>
>    o Upon receiving a request which includes an Expect request-header
>      field with the "100-continue" expectation, an origin server MUST
>      either respond with 100 (Continue) status and continue to read
>      from the input stream, or respond with a final status code.

True. But the intention is quite obvious if one reads both.

> the way I read the first clause is that the server must immediately
> reply either with 100 continue or the final status (e.g. an auth
> challenge etc).

Not my reading.

> That's the whole point of 100 continue. However the 3rd clause implies
> if you've already read something it's ok to wait around for the final
> status.

Only if the client has already sent you some part of the request body,
in which case the server knows the client has already abandoned its
expectation of a 100 Continue response.

> Under "Requirements for HTTP/1.1 proxies"
>
>    o Proxies SHOULD maintain a cache recording the HTTP version
>      numbers received from recently-referenced next-hop servers.
>
> this is problematic. I've seen sites that respond with varying HTTP
> versions in the responses (even with same server).

Which is broken servers.

> Presumably due to versions being set in different scripts.

And servers not implementing HTTP for those scripts...

> It's impossible to cache version vs site in these cases.

No, but the outcome won't be the most favourable for those servers,
though it won't break things in a general sense.

> Sure, can cache the rest, but the overhead of keeping a cache for this
> purpose seems overly heavy.

Partially agreed. But in the case of Expect: 100-continue it's just an
optimization to fail the expectation early if it's known beyond
reasonable doubt that the expectation will fail. Additionally, nothing
in the specs says what constitutes a "recently-referenced next-hop";
that is left as an implementation detail.

> I know that the issue of Expects has been thrashed on this list
> already. I understand the theory of why such a mechanism is required,
> but in the case of 100 continue (the only one specified) it's so
> problematic I'd be very surprised if any client ever implements it.
>
> Does anyone know of any (IMO suicidal) client that uses Expects?

There are quite a few clients using it today, and many of those aren't
prepared to deal with expectation failures... (learnt the hard way when
implementing Expect...)

On a side note I have also been thinking about this version cache topic
lately in another context. Will post about that later today.

Regards
Henrik
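[Archive editor's note: the 100-continue handshake discussed in this
thread can be sketched client-side roughly as follows. This is a
minimal illustration against a plain-HTTP origin, not code from the
thread; the function name and timeout are assumed. The client sends
only the headers, waits a bounded time for an interim response, and
transmits the body only on 100 Continue or timeout; any early final
status (e.g. an auth challenge) means the body is withheld.]

```python
import socket

def post_with_expect(host, port, path, body, timeout=3.0):
    """Sketch of a POST using Expect: 100-continue (names assumed)."""
    headers = (
        f"POST {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Content-Length: {len(body)}\r\n"
        "Expect: 100-continue\r\n"
        "Connection: close\r\n"
        "\r\n"
    )
    with socket.create_connection((host, port)) as sock:
        sock.sendall(headers.encode("ascii"))
        sock.settimeout(timeout)
        try:
            interim = sock.recv(4096).decode("iso-8859-1")
        except socket.timeout:
            # The client should not wait indefinitely for 100 Continue;
            # after a reasonable delay, send the body anyway.
            interim = ""
        if interim == "" or interim.startswith("HTTP/1.1 100"):
            sock.sendall(body)
        else:
            # A final status arrived before the body was sent: report
            # its status line and send nothing further.
            return interim.split("\r\n", 1)[0]
        # Read the final response until the server closes the connection.
        sock.settimeout(None)
        final = b""
        while chunk := sock.recv(4096):
            final += chunk
        return final.decode("iso-8859-1").split("\r\n", 1)[0]
```

A server that wants to reject the body early simply answers with the
final status (e.g. 401) in place of the interim 100, which is exactly
the early-failure optimization discussed above.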
Received on Saturday, 18 July 2009 20:57:05 UTC
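[Archive editor's note: the per-next-hop version cache that the quoted
SHOULD clause describes can be quite small. The sketch below is
illustrative only; the class name, TTL policy, and size bound are
assumptions, since the spec leaves "recently-referenced" entirely to
the implementation, as noted above.]

```python
import time

class NextHopVersionCache:
    """Remembers the HTTP version recently seen from each next-hop.

    Entries expire after a TTL; a crude stalest-entry eviction bounds
    memory. Both policies are implementation choices, not spec text.
    """

    def __init__(self, ttl=300.0, max_entries=1024):
        self.ttl = ttl
        self.max_entries = max_entries
        self._entries = {}  # (host, port) -> (version, timestamp)

    def record(self, host, port, version):
        key = (host, port)
        if len(self._entries) >= self.max_entries and key not in self._entries:
            # Evict the entry with the oldest timestamp.
            oldest = min(self._entries, key=lambda k: self._entries[k][1])
            del self._entries[oldest]
        self._entries[key] = (version, time.monotonic())

    def lookup(self, host, port):
        entry = self._entries.get((host, port))
        if entry is None:
            return None
        version, stamp = entry
        if time.monotonic() - stamp > self.ttl:
            del self._entries[(host, port)]
            return None
        return version
```

A proxy would consult `lookup()` before deciding whether to forward an
Expect: 100-continue request optimistically; a miss or expired entry
simply means falling back to the default (spec-mandated) behaviour, so
servers that report inconsistent versions degrade gracefully rather
than break.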