- From: Erik Moller <emoller@opera.com>
- Date: Wed, 23 Mar 2011 13:49:14 +0100
Ok, firstly let me just say that I'm thrilled about the peer-to-peer stuff moving forward; great work, Ian. I wish I had a little more time to get stuck into this now and read the specs a bit more thoroughly, but I'll try to reply to all the comments my mails generated. (Amazingly, ö still seems to be causing trouble in 2011.) This is just from my very narrow browser-game perspective, so don't read anything into it about video conferencing... I choose to ignore that.

On 3/18/11 5:45 AM, Ian Hickson wrote:
> On Tue, 4 May 2010, Erik M?ller wrote:
>> I'm an old gamedev recently turned browserdev so this is of particular interest to me, especially as I'm currently working on WebSockets. WebSockets is a nice step towards multiplayer games in browsers and will be even better once binary frames are specced out, but as Mark says (depending on the nature of the game) gamedevs are most likely going to want to make their own UDP-based protocol (in client-server models as well). Have there been any discussions on how this would fit under WebSockets?
>
> There has not, as far as I'm aware.
>
> PeerConnection could be used by a server as well, of course.

I agree it doesn't make sense to try to cram anything more into WebSockets... I think it was just me and Mark F. that were eager to get things moving on the peer-to-peer side, and WebSockets seemed like the only thing that had a bit of momentum back then.

> On Tue, 1 Jun 2010, Erik M?ller wrote:
>> The majority of the on-line games of today use a client/server model over UDP and we should try to give game developers the tools they require to create browser-based games. For many simpler games a TCP-based protocol is exactly what's needed, but for most real-time games a UDP-based protocol is a requirement. Games typically send small updates to their server at 20-30Hz over UDP and, with the help of entity interpolation and if required entity extrapolation, cope well with intermittent packet loss.
>
> Does PeerConnection address this use case to your satisfaction?
>
> Note that currently it does not support binary data, but I've built in an extension mechanism to make this easy to add in the future.

It is looking very promising at least. I won't say yes, because I know there will always be things missing once you start using it in the real world. I guess some extra investigation into whether those additional 20 (?) bytes per packet are really necessary would be good. I'll have to leave that to someone with more expertise in that area though.

> On Wed, 2 Jun 2010, Erik M?ller wrote:
>> No it can't be UDP, it'll have to be something layered on top of UDP. One of the game guys I spoke to last night said "Honestly, I wish we just had real sockets. It always seems like web coding comes down to reinventing a very old wheel in a far less convenient or efficient manner." To some extent I agree with him, but there's the security aspect we have to take into account or we'll see someone hacking the CNN website and injecting a little javascript and we'll have the DDOS attack of the century on our hands.
>
> For the data UDP media stream in PeerConnection I tried to make it as pure UDP as I could, while still being safe and still being extensible.
> The packets are (doubly) obfuscated to prevent cross-protocol attacks, and you can only send data to an end-point that negotiated a key via SDP offer/answer and participated in ICE to select how the packets are routed, but beyond that it's as raw as I could make it. Hopefully it's enough.

It is looking good.

>> The reason I put down "Socket is bound to one address", "Reliable handshake", "Reliable close handshake" and "Sockets open sequentially" was for that exact reason, to try to make it "DOS and tamper safe". The "Sockets open sequentially" means that if you allocate two sockets to the same server the second socket will wait for the first one to complete its handshake before attempting to connect.
>
> I haven't done this, but since the other server has to participate in the ICE processing, and can delay the start of that indefinitely, it seems that we're safe here.

Agreed.

> On Thu, 3 Jun 2010, Erik M?ller wrote:
>> On Wed, 02 Jun 2010 19:48:05 +0200, Philip Taylor wrote:
>>> So they seem to suggest things like:
>>> - many games need a combination of reliable and unreliable-ordered and unreliable-unordered messages.
>>
>> One thing to remember here is that browsers have other means for communication as well. I'm not saying we shouldn't support reliable messages over UDP, but just pointing out the option. I believe for example World of Warcraft uses this strategy and sends reliable traffic over TCP while movement and other real-time data goes over UDP.
>
> That would indeed make sense.
>
>>> - many games need to send large messages (so the libraries do automatic fragmentation).
>>
>> Again, this is probably because games have no other means of communication than the NW-library. I'd think these large reliable messages would mostly be files that need to be transferred asynchronously, for which browsers already have the tried and tested XMLHttpRequest.
>
> Are the large messages always reliable messages?

I can of course only speak from my experience with the games I've worked on, but these large messages have typically been updated content: textures, level data etc. More recently, using a BitTorrent-style system has been popular for distributing updated content. So, yeah, those large messages have always been reliable.

>>> - many games need to efficiently send tiny messages (so the libraries do automatic aggregation).
>>
>> This is probably true for many other use-cases than games, but at least in my experience games typically use a bit-packer or range-coder to build the complete packet that needs to be sent. But again, it's a matter of what level you want to place the interface.
>
> This seems relatively easy to layer on top of the current protocol in the spec, but if we find it commonly used we can also add it explicitly as an extension.

I'd suggest just keeping the API as simple as possible. With JavaScript kicking ass and taking names in terms of performance the last couple of years, it seems less necessary to build things like that into the API. Besides, it seems every NW-engineer has their own favourite bitpacker (there's a rough sketch of the kind of thing I mean a little further down).

>>> Perhaps also:
>>> - Cap or dynamic limit on bandwidth (you don't want a single web page flooding the user's network connection and starving all the TCP connections)
>
> Not really sure what the spec should say about this.
>
>>> - Protection against session hijacking
>>
>> Great
>
> The spec uses an encryption mechanism to prevent this.
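(Before the next item, coming back to the bit-packing point above: this is roughly the kind of helper I have in mind. It's only a sketch; all the names are made up, and the commented-out send call at the end is hypothetical, not anything from the spec.)

    // A minimal bit-packer sketch: packs small unsigned ints back to back so a
    // 20-30Hz state update stays tiny. All names are made up for this example.
    function BitWriter() {
        this.bits = [];                            // one array entry per bit
    }
    BitWriter.prototype.writeUint = function (value, bitCount) {
        for (var i = bitCount - 1; i >= 0; i--)
            this.bits.push((value >> i) & 1);      // most significant bit first
    };
    BitWriter.prototype.toMessage = function () {
        var out = "";
        for (var i = 0; i < this.bits.length; i += 8) {
            var octet = 0;
            for (var j = 0; j < 8; j++)
                octet = (octet << 1) | (this.bits[i + j] || 0);  // zero-pad the last octet
            out += String.fromCharCode(octet);     // one char per octet; base64 it for a text-only channel
        }
        return out;
    };

    // Example: entity id in 10 bits, x/y quantised to 12 bits each, heading in
    // 8 bits -- 42 bits of payload, i.e. 6 bytes once padded.
    var w = new BitWriter();
    w.writeUint(137, 10);    // entity id
    w.writeUint(2048, 12);   // quantised x
    w.writeUint(1536, 12);   // quantised y
    w.writeUint(90, 8);      // heading
    var msg = w.toMessage();
    // peer.send(msg);       // hypothetical send call, not the spec's API

The point is just that a game can do this kind of aggregation itself in a handful of lines of JS, so the API doesn't need to.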
>>> - Protection against an attacker initiating a legitimate socket with a user and then redirecting it (with some kind of IP (un)hijacking) to a service behind the user's firewall (which isn't a problem when using TCP since the service will ignore packets when it hasn't done the TCP handshake; but UDP services might respond to a single packet from the middle of a websocket stream, so every single packet will have to be careful not to be misinterpreted dangerously by unsuspecting services).
>
> The packets are masked so that you couldn't do anything but DOS attacks in this kind of scenario. (And you can do those already with TCP.)
>
> On Thu, 10 Jun 2010, Erik M?ller wrote:
>> As discussed the following features/limitations are suggested:
>> -Same API as WebSockets
>
> I don't see how that would work. I've made them as similar as possible, but I don't think it makes sense to go further.

Agreed.

>> with the possible addition of an attribute that allows the application developer to find the path MTU of a connected socket.
>
> What's the use case?

The use case is simply that intermediaries can have different MTUs and exceeding those may cause them to just unconditionally drop the packets. I haven't verified this recently though; that's just the way it used to be back in the day...

>> -Max allowed send size is 65,507 bytes.
>
> Currently 65470, to handle the various headers used (see the spec).
>
>> -Socket is bound to one remote address at creation and stays connected to that host for the duration of its lifetime.
>
> I've specced it in such a way that ICE could rebind the connection later; is that ok?

I can't remember why I wrote that, but I assume it had something to do with security. It's not my area of expertise though, so feel free to ignore that.

>> -IP Broadcast/Multicast addresses are not valid remote addresses and only a set range of ports are valid.
>
> I've left this up to the ICE layer.
>
>> -Reliable handshake with origin info (Connection timeout will trigger close event.)
>
> Not sure what the handshake should do here. Could you elaborate?
>
> Also there's currently no origin protection for peer-to-peer stuff (there is for the STUN/TURN part; the origin is the long-term credential). We could certainly add something; how should it work? What are the attack scenarios we should consider?

Not entirely sure; I suppose in the special case where one of the peers is the origin server you could do more?

>> -Automatic keep-alives (to detect force close at remote host and keep NAT traversal active)
>
> I've left that up to the ICE layer.
>
>> -Reliable close handshake
>
> This can be done over the signaling layer independent of the UDP channel.
>
>> -Sockets open sequentially (like current DOS protection in WebSockets) or perhaps have a limit of one socket per remote host.
>> -Cap on number of open sockets per host and global user-agent limit.
>
> UDP doesn't really have sockets, so I don't really know how to do this.

What about the ICE layer? Is there anything that needs to be done there to prevent flooding the server with requests? I need to read up on ICE again.

>> Some additional points that were suggested on this list were:
>> -Key exchange and encryption
>> If you do want to have key exchange and encryption you really shouldn't reinvent the wheel but rather use a secure WebSocket connection in addition to the UDP-WebSocket. Adding key exchange and encryption to the UDP-WebSocket is discouraged.
> Not really sure what this means.

Just keep it simple and clean and people can implement what they want in JS atop that.

>> -Packet delivery notification to be a part of the API. Again this is believed to be better left outside the UDP-WebSockets spec and implemented in javascript if the application developer requires it.
>
> Agreed.
>
> On Fri, 11 Jun 2010, Erik M?ller wrote:
>>> I'd recommend doing some real-world testing for max packet size. Back when the original QuakeWorld came out it started by sending a large connect packet (could be ~8K) and a good number of routers would just drop those packets unconditionally. The solution (iirc) was to keep all packet sends below the Ethernet max of 1500 bytes. I haven't verified this lately to see if that's still the case, but it seems real-world functionality should be considered.
>>
>> Absolutely, that's why the path-MTU attribute was suggested. The ~64k limit is an absolute limit though, at which sends can be rejected immediately without even trying.
>
> Could you elaborate on this use case?

Like I said earlier, this might not be an issue today, but you used to be able to see packets over a certain size getting unconditionally dropped if some dodgy router happened to be on the path. If that's still an issue it would be useful to know what the real MTU for the path is. But again... perhaps it's better to keep the API simple and leave that up to users.

>>> If WebSocket supports an encrypted and unencrypted mode, why would the real-time version not support data security and integrity?
>>
>> The reasoning was that if you do need data security and integrity, the secure websocket over TCP uses the same state-of-the-art implementation that browsers already have. Secure connections over UDP would either require a full TCP-over-UDP implementation (to use TLS) or a second implementation that would need to be maintained. That implementation would be either a very complex piece of software or clearly inferior to what users are accustomed to. So what's a good use-case where you want a secure connection over UDP and cannot use a second TLS connection?
>
> Games, if you want to prevent some forms of cheating. I don't necessarily agree that we have to do anything as complex as TLS (or DTLS) though. Encrypting the data stream gets us a long way there; we can add some integrity protection and replay protection reasonably easily too. Since we have a (presumed secure) signaling channel, a lot of the complexity of (e.g.) DTLS is unnecessary.

Possibly yes, but my gut feeling is to keep it simple. Any game that's really serious about security is at least going to have you sign in to a secure login server before kicking any UDP connections off, so whatever encryption and shared secrets they need to make the UDP connection tamper-proof can be negotiated over that connection.
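To make that last point a bit more concrete, here's roughly the shape of what I mean. It's only a sketch: hmacSha256(key, message) is assumed to come from whatever JS crypto library the game already ships (returning hex here), the send call is hypothetical, and none of this is meant to be part of the spec.

    // Sketch: tamper and replay protection on top of an unreliable channel,
    // using a secret negotiated earlier over the secure (TLS WebSocket) login
    // connection. hmacSha256 is an assumed helper, not a browser API.
    var sharedSecret = "secret-from-the-login-server";   // placeholder value
    var sendSeq = 0;
    var highestSeenSeq = -1;

    function sealMessage(payload) {
        var body = (sendSeq++) + "|" + payload;
        return body + "|" + hmacSha256(sharedSecret, body);
    }

    function openMessage(packet) {
        var lastSep = packet.lastIndexOf("|");
        var body = packet.substring(0, lastSep);
        var mac = packet.substring(lastSep + 1);
        if (hmacSha256(sharedSecret, body) !== mac) return null;  // tampered
        var seq = parseInt(body, 10);                             // leading sequence number
        if (seq <= highestSeenSeq) return null;  // replayed or stale (real code would keep a window)
        highestSeenSeq = seq;
        return body.substring(body.indexOf("|") + 1);             // the payload
    }

    // peer.send(sealMessage("pos:12.5,3.1"));   // hypothetical send call

That gives a game tamper and replay protection on the UDP side without the browser having to implement anything as heavy as DTLS.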
Received on Wednesday, 23 March 2011 05:49:14 UTC