- From: Ian Hickson <ian@hixie.ch>
- Date: Fri, 27 May 2011 23:02:34 +0000 (UTC)
- To: Adrian Bateman <adrianba@microsoft.com>
- Cc: "Web Applications Working Group WG (public-webapps@w3.org)" <public-webapps@w3.org>
On Fri, 27 May 2011, Adrian Bateman wrote:
> I'm pleased to see the changes in the WebSockets API for binary message
> support. I'm a little confused by this text:
>
>     When a WebSocket object is created, its binaryType IDL attribute must
>     be set to the Blob interface object associated with the same global
>     object as the WebSocket constructor used to create the WebSocket
>     object. On getting, it must return the last value it was set to. On
>     setting, if the new value is either the Blob or ArrayBuffer interface
>     object associated with the same global object as the WebSocket
>     constructor used to create the WebSocket object, then set the IDL
>     attribute to this new value. Otherwise, throw a NOT_SUPPORTED_ERR
>     exception.
>
> I don't entirely follow what this is saying

It means you do this:

    mysocket.binaryType = Blob;

...if you want blobs, and:

    mysocket.binaryType = ArrayBuffer;

...if you want array buffers.

> but we'd prefer

(How do you know what you'd prefer if you don't know what it's saying?)

> the binaryType to be a DOMString in the same fashion that the
> responseType is in XHR2. Is there a reason for this to be an object?
> We'd prefer consistency.

Consistency is good when it makes sense. However, I don't think XHR is a
good parallel here. XHR has all kinds of additional complexities; for
example, it lets you get a string, whereas here string vs. binary is
handled at the protocol level and so can't ever be confused.

However, if we want consistency here anyway, then I'd suggest we change
XHR to use the actual type values just like WebSockets. It would IMHO
lead to much cleaner code.

--
Ian Hickson               U+1047E                )\._.,--....,'``.    fL
http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'
Received on Friday, 27 May 2011 23:02:58 UTC
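
A minimal sketch of the two options Ian describes, following the draft text quoted in the message above. The endpoint URL and the handler body are illustrative and not part of the thread, and the interface-object assignment tracks that 2011 draft rather than any particular shipping implementation:

```js
// Open a socket to a hypothetical binary-capable endpoint.
var mysocket = new WebSocket("wss://example.com/echo");

// Under the quoted draft, binary delivery is selected by assigning the
// interface object itself; any other value throws NOT_SUPPORTED_ERR.
mysocket.binaryType = ArrayBuffer;   // or: mysocket.binaryType = Blob;

mysocket.onmessage = function (event) {
  if (typeof event.data === "string") {
    // Text frames are distinguished at the protocol level, so they
    // always arrive as strings regardless of binaryType.
    console.log("text message:", event.data);
  } else {
    // With binaryType = ArrayBuffer, binary frames arrive as ArrayBuffers.
    console.log("binary message of", event.data.byteLength, "bytes");
  }
};
```

For comparison, the XHR2 precedent Adrian cites spells the same choice as a string, `xhr.responseType = "arraybuffer"`, rather than an interface object.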