- From: Jamie Lokier <jamie@shareable.org>
- Date: Thu, 10 Sep 2009 13:46:18 +0100
- To: Křištof Želechovski <giecrilj@stegny.2a.pl>
- Cc: 'Martin J. Dürst' <duerst@it.aoyama.ac.jp>, uri-review@ietf.org, hybi@ietf.org, uri@w3.org, 'David Booth' <david@dbooth.org>
Křištof Želechovski wrote:
> I think the idea to use Web Sockets on the server is void; the server can
> use TCP/IP at will.

Nice theory. I believe you have correctly described the intentions of the
WebSockets protocol proposers (as I understand them), and that the theory
denies reality. It's wrong.

A server cannot use TCP/IP at will in two scenarios:

1. A server must use WebSockets if it's accessing services which a
   different provider has only made available via WebSockets, because the
   provider intended the service only for web browsers.

   For example, at one time something like Google Maps' image back-end
   would fit this pattern: intended only for one web application, but
   actually used by various third parties because it's useful. Judging by
   the modern trend towards 'mashups', expect this practice to become
   widespread.

2. When a service is provided over WebSockets to support a web browser,
   and a requirement later emerges to provide the same service to other
   programs. Many implementors will take the path of least resistance,
   which is to continue offering the service over WebSockets in the new
   context and require the clients to use generic non-browser WebSockets
   code (see the sketch after this message). That is simpler than
   specifying and implementing a second protocol for the same service.

For an example of where this has happened before, see SOAP. It runs over
HTTP simply to reuse deployed and well-understood code and infrastructure.
In principle it could run over raw TCP/IP or a simple framing protocol,
but that's not done in practice. Expect the same to occur with WebSockets
if it is widely used by web applications, if only out of familiarity and
to avoid duplication.

--
Jamie
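A minimal sketch of the "generic non-browser WebSockets code" mentioned in
point 2, assuming the third-party Python `websockets` library (a modern
library, not part of the protocol drafts discussed in this thread); the
endpoint URL and message payload are hypothetical placeholders, not a real
service.

    # Generic non-browser WebSocket client sketch.
    # Assumes: pip install websockets; "ws://example.org/service" is a
    # hypothetical endpoint exposed only over WebSockets.
    import asyncio
    import websockets

    async def main():
        # Open a WebSocket connection, just as a browser page would.
        async with websockets.connect("ws://example.org/service") as ws:
            # Send a request in whatever application-level format the
            # service expects (JSON, plain text, etc.).
            await ws.send('{"query": "hello"}')
            # Read the service's reply over the same connection.
            reply = await ws.recv()
            print(reply)

    asyncio.run(main())

The point of the sketch is only that a non-browser client needs nothing
beyond a generic WebSocket library to consume such a service, which is why
reusing the existing WebSockets endpoint is the path of least resistance.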
Received on Thursday, 10 September 2009 12:47:10 UTC