- From: Ian Hickson <ian@hixie.ch>
- Date: Fri, 4 Dec 2009 01:47:24 +0000 (UTC)
- To: Sebastian Andersson <bofh69@gmail.com>
- Cc: public-webapps@w3.org
On Wed, 25 Nov 2009, Sebastian Andersson wrote:
>
> I have a few problems with the WebSocket API as it is described.
>
> If the client is sending too much data to the server, I don't want it
> to be disconnected just because some buffer is temporarily full, but
> that is the required semantics of the API. If my application must send
> out a lot of data, I don't want my applications to have to guess the
> networking bandwidth, the browser's buffer quota and the server's
> capacity and throttle my sending. Let me, the developer, be able to
> say what I want to do if the server/network can't swallow my messages
> fast enough.

If you have a finite but large amount of data to send, then just send
it. The client will buffer it for you and send it as fast as possible.
This should not cause the connection to close unless you're sending
_so_ much data that the client is unable to handle it (but then it
probably would have prevented you from allocating the memory in the JS
space too).

If you have an infinite amount of data to send, just send it so that
the bufferedAmount attribute is non-zero but small and not growing.
That results in sending data at the highest possible bandwidth with
nearly the lowest possible latency.
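For example, a sketch of that pattern might look something like this
(produceNextMessage() is just a stand-in for whatever generates the
application's data, and the 1024 byte threshold and 100ms timer are
arbitrary):

    var socket = new WebSocket("ws://example.com/updates");
    socket.onopen = function () {
      setInterval(function () {
        // Only hand the UA more data while its send buffer is nearly
        // empty, so bufferedAmount stays small instead of growing
        // without bound.
        while (socket.bufferedAmount < 1024)
          socket.send(produceNextMessage());
      }, 100);
    };

As long as the check happens before each send() call, the buffer never
gets asked to queue more than it can drain, and the connection doesn't
need to be closed.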
> One way is to let Send return false without closing the WebSocket,
> increase the ready state with an extra state (SendBufferFull?) and an
> extra event handler. Throw an exception if one tries to send a message
> when the ready state is SendBufferFull.

This would result in most (naive) authors writing code that appears to
just randomly drop packets every now and then, which is worse than
closing the connection, IMHO.

> The send function should in that case also send the message later
> (since it would otherwise be hard to implement this functionality
> efficiently on most OSes).

Not sure what you mean.

> There should be a described way of sending options to the protocol
> implementation. I.e. how to enable/disable something like Nagle's
> algorithm for a TCP protocol implementation. Perhaps add an optional
> third argument to the constructor instead of having to encode options
> into the url or protocol strings.

This kind of thing would make sense for a future version, but I think
we should keep the first version relatively simple. Most Web authors
aren't going to have any idea whether they should enable or disable
Nagle's algorithm, and it seems likely that we'd therefore just end up
with half of the Web pages with it enabled and the other half with it
disabled, with no correlation to whether they need it or not.

> The document does not describe what happens when a message is received
> when no event handler has been associated with onmessage. There are at
> least three choices for an implementation:
> A) Throw away the message.
> B) Enqueue the message until an event handler has been added.
> C) Don't read from the socket until an event handler has been added.

Actually the spec defines it as (A) - the event is fired, but no
listeners are there to receive it, so nothing happens.

> Only the C option is acceptable to me and that allows the application
> to implement A and B if that is needed. If my application can not
> process received messages fast enough, I want my application to be
> able to throttle the amount of messages sent and a common way to do
> that is to simply stop receiving data (remove the handler in this
> case) until one is ready to receive more. The client's and server's
> buffers will fill up and the server application can take action on its
> end. Since that can always happen, it doesn't burden the server with
> any extra logic.

This isn't an unreasonable idea, though I am uncomfortable with making
this dependent on whether an event listener is attached. I think a
better solution here would be to provide a feature in a future version
that allows the connection to be "paused", e.g. socket.pause() and
socket.resume().

> Although it is partly outside of the scope of the document, I still
> would like to raise the question about why creating a new protocol and
> not allowing plain TCP?

It would be a security nightmare (e.g. it would mean a hostile Web
site, when visited by a corporate user, could connect to an arbitrary
HTTP server behind the intranet firewall and read its files).

> I understand the need to limit the amount of damage a browser can do
> via malicious javascript, but why not simply use one of the existing
> limits of the current networking capable web technologies?
>
> With Java applets you can connect to TCP ports on the same hostname
> that served the applet (or is it the same IP?).

That allows attacks on shared hosting providers, and prevents
cross-site WebSocket access.

> With flash, you can connect to any server and any port as long as the
> application can first download a policy file from the same IP number.

Flash's security model has had so many security holes reported on it
that I really don't want to try to emulate it. It also requires two
sets of TCP connections per socket, and has some scary race conditions.
It also requires talking to two different ports, which is dubious in
shared hosting situations.

> I'm not familiar with Silverlight, I only know it is possible to
> connect to TCP ports with it.

I'm not sure what their security model is either.

> The "origin" concept is a great way to limit malicious javascript, but
> so are flash policy files. If a policy file must be downloaded from a
> specific port/URL before the application is allowed to connect, and
> the browser caches the result for a while, that would limit DoS
> attacks quite well while at the same time making the WebSocket API
> powerful enough to make use of old protocols.

What's wrong with the way WebSocket does it?

-- 
Ian Hickson               U+1047E                )\._.,--....,'``.    fL
http://ln.hixie.ch/       U+263A                /,   _.. \  _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'
Received on Friday, 4 December 2009 01:47:54 UTC