- From: Henrik Nordström <henrik@henriknordstrom.net>
- Date: Fri, 14 May 2010 20:09:36 +0200
- To: Jamie Lokier <jamie@shareable.org>
- Cc: Wenbo Zhu <wenboz@google.com>, ietf-http-wg@w3.org
On Fri, 2010-05-14 at 18:33 +0100, Jamie Lokier wrote:

> It is possible to implement servers where the response is an
> incremental function of the request, and if it worked reliably, that
> would actually be useful.

There are applications operating in this manner, but the HTTP transport
isn't well suited for it, as you have no control over any buffering done
by intermediary servers or even by the way the client's HTTP framework
operates.

> As far as I can tell, it would be within the HTTP specs if a new
> client was deliberately made which supported bidirectional streaming
> POSTs over a single connection, and a new server took advantage of
> that when it knew the client supported it.

Yes, and such clients do exist, even if most choose either to do it over
an SSL connection or to use two parallel requests.

> Then the sticking point would be what proxies do.

Depends on the purpose of the proxy.

> That requirement says nothing about *non-error* codes. What part of
> accepting those while continuing to send the request is broken? Or
> alternatively of not accepting those?

None, other than me reading a bit too quickly.

> Do you think the above idea, of a client supporting it and a server
> making use of it if it knows the particular client supports it, would
> work now? That would halve the number of connections needed for
> comet-style interactions.

It will work in most environments, but you will find networks where
intermediary proxies buffer a considerable amount of data, or which may
even serialize the exchange down to a request->response pattern. HTTP
does not provide any guarantee that full-duplex communication will work.

> I mostly agree with Henrik, but if it's already common practice for
> clients to abort when they see part of a non-error response, then I'm
> in favour of the spec reflecting reality, instead of ignoring it.

Depends on where you draw the boundary for the client. If you include
the end user's behaviour in the client, then most users do not wait for
their browser to finish sending something if they have already got the
response they are expecting.

Most if not all clients, if left alone by their user, will happily
continue sending the request without interruption, provided the server
does not close the connection on them. But some clients or
intermediaries MAY get confused and think they are done when the server
response is completed. This is not something supported by the
specifications, however, and the cases I have seen have been pure bugs,
not intentional behaviour.

Regards
Henrik
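[Editor's note: as a rough illustration of the single-connection approach discussed above, here is a minimal Go sketch of a client that keeps streaming a POST body while reading the response on the same connection. The URL, the payload, and the use of Go's net/http client are assumptions for illustration only; whether the response actually arrives incrementally, as the thread points out, still depends on the server, the client's HTTP stack, and any buffering by intermediaries.]

    // Sketch: bidirectional streaming over a single POST.
    // Placeholder URL and payload; behaviour depends on the path end to end.
    package main

    import (
        "bufio"
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    func main() {
        pr, pw := io.Pipe()

        // Producer: keep writing request body chunks; in a comet-style
        // client these would be user events or messages.
        go func() {
            defer pw.Close()
            for i := 0; i < 5; i++ {
                fmt.Fprintf(pw, "client chunk %d\n", i)
                time.Sleep(time.Second)
            }
        }()

        req, err := http.NewRequest(http.MethodPost, "https://example.invalid/stream", pr)
        if err != nil {
            log.Fatal(err)
        }
        // With an unknown ContentLength the body is sent with chunked
        // transfer encoding, which is what allows it to be open-ended.

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()

        // Consumer: read response lines as they arrive. Whether they show
        // up while the request is still being sent (rather than only after
        // it completes) is exactly what intermediaries may break.
        scanner := bufio.NewScanner(resp.Body)
        for scanner.Scan() {
            fmt.Println("server said:", scanner.Text())
        }
    }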
Received on Friday, 14 May 2010 18:10:42 UTC