- From: Michael Carter <michael.carter@kaazing.com>
- Date: Thu, 19 Jun 2008 15:32:13 -0700
> I think we should have both a pure TCPSocket, and also a ServerSocket
> that keeps the same connection as the original document was downloaded
> from. The ServerSocket will make it very easy for web developers to
> work with, since the ServerSocket object will be available both from
> the server side and the client side while the page is being generated.
> I am posting a separate proposal that describes my idea soon.

I don't see the benefit of making sure that it's the same connection that
the page was "generated" from.

> > Actually, I've already tested this protocol against some typical forward
> > proxy setups and it hasn't caused any problems so far.
>
> Could you test keeping the same connection as the webpage was fetched
> from, open? So that when the server script responds with its HTML code,
> the connection is not closed, but kept alive for two-way communication?

If you establish a Connection: Keep-Alive with the proxy server, it will
leave the connection open to you, but that doesn't mean that it will leave
the connection open to the back-end server, as the Connection header is a
single-hop header.

> This gives the following benefits:
>
> The script on the server decides if the connection should be closed or
> kept open. (Protection against DDOS attacks)

With the proposed spec, the server can close the connection at any point.

> This allows implementing server-side listening to client-side events,
> and vice versa. If this works, then the XMLHttpRequest object could be
> updated to allow two-way communication in exactly the same way.

The previously proposed protocol already allows server-side listening to
client-side events, and vice versa. Whether or not to put that in the
XMLHttpRequest interface is another issue. I think making XHR
bi-directional is a bad idea because it's confusing. Better to use a
brand-new API, like WebSocket.
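To make the single-hop point concrete, here is a rough sketch (illustrative only, not from any proposal) of what an RFC 2616-compliant proxy does with hop-by-hop headers before forwarding a request upstream. Connection and any headers it names apply only to the client-proxy hop, so Keep-Alive negotiated with the proxy says nothing about the proxy-to-origin connection:

```python
# Sketch: a proxy stripping hop-by-hop headers before forwarding.
# The header set and the "drop headers named in Connection" rule come
# from RFC 2616 section 14.10; everything else here is illustrative.

HOP_BY_HOP = {
    "connection", "keep-alive", "proxy-authenticate", "proxy-authorization",
    "te", "trailers", "transfer-encoding", "upgrade",
}

def forwardable_headers(headers):
    """Return the headers a proxy may pass to the next hop.

    Drops the standard hop-by-hop headers, plus any header named
    in the Connection header itself.
    """
    named = set()
    for name, value in headers.items():
        if name.lower() == "connection":
            named.update(tok.strip().lower() for tok in value.split(","))
    return {
        name: value
        for name, value in headers.items()
        if name.lower() not in HOP_BY_HOP and name.lower() not in named
    }

client_headers = {
    "Host": "example.com",
    "Connection": "Keep-Alive",
    "Keep-Alive": "timeout=30",
    "Accept": "text/html",
}

# The proxy-to-origin request no longer carries Connection: Keep-Alive,
# so the client's keep-alive applies only to the client-proxy hop.
upstream = forwardable_headers(client_headers)
```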
> Also, by adding a SessionID header sent from the client (instead of
> storing session ids in cookies), the web server could transparently
> rematch any client with its corresponding server-side process in case
> of disconnect.

Isn't that what cookies are supposed to do? Regardless, it sounds like an
application-level concern that should be layered on top of the protocol.

> >> I'm thinking here that this proposal is basically rewriting the CGI
> >> protocol (web server handing off managed request to custom scripts)
> >> with the ONLY difference being the asynchronous nature of the request.
> >> Perhaps more consideration might be given to how the CGI/HTTP protocols
> >> might be updated to allow async communication.
> >
> > Rewriting the HTTP spec is not feasible and I'm not even convinced it's
> > a good idea. HTTP has always been request/response, so it would make a
> > lot more sense to simply use a new protocol than confuse millions of
> > developers/administrators who thought they understood HTTP.
>
> The HTTP spec has these features already:
>
> 1: Header: Connection: Keep-Alive
> 2: Status: HTTP 101 Switching Protocols
>
> No need to rewrite the HTTP spec at all, probably.

You can't use HTTP 101 Switching Protocols without a Connection: Upgrade
header. I think you'll note that the proposal that started this thread uses
just this combination.

> >> Having said that, I still see a very strong use case for low-level
> >> client-side TCP and UDP. There are ways to manage the security risks
> >> that require further investigation. Even if it must be kept same-domain,
> >> that is better than creating a new protocol that won't work with
> >> existing services. Even if that sounds like a feature - it isn't. There
> >> are better ways to handle access control for non-WebConnection devices
> >> than sending garbage to the port.
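The 101-plus-Upgrade combination mentioned above can be sketched as follows (a hedged illustration, not the proposal's actual handshake): per RFC 2616 section 10.1.2, a 101 Switching Protocols response is only meaningful together with Connection: Upgrade and an Upgrade header naming the new protocol.

```python
# Sketch of the HTTP/1.1 Upgrade handshake pairing
# "101 Switching Protocols" with "Connection: Upgrade".
# The request/response shapes here are illustrative.

def upgrade_request(host, protocol):
    """Build a client request asking to switch to `protocol`."""
    return (
        "GET / HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Upgrade: {protocol}\r\n"
        "Connection: Upgrade\r\n"
        "\r\n"
    )

def accepts_upgrade(response_head):
    """True only if the server agreed to switch protocols.

    Checks for the full combination: 101 status, Connection: Upgrade,
    and an Upgrade header naming the new protocol.
    """
    lines = response_head.split("\r\n")
    status_101 = lines[0].startswith("HTTP/1.1 101")
    headers = {}
    for line in lines[1:]:
        if ": " in line:
            name, value = line.split(": ", 1)
            headers[name.lower()] = value
    return (
        status_101
        and "upgrade" in headers.get("connection", "").lower()
        and bool(headers.get("upgrade"))
    )

response = (
    "HTTP/1.1 101 Switching Protocols\r\n"
    "Upgrade: WebSocket\r\n"
    "Connection: Upgrade\r\n"
    "\r\n"
)
```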
> > If we put the access control in anything but the protocol, it means
> > that we are relying on an external service for security, so it would
> > have to be something that is completely locked down. I don't really see
> > what the mechanism would be. Can you propose a method for doing this so
> > as to allow raw TCP connections without security complications?
>
> TCPConnections are only allowed to the server where the script was
> downloaded from (same as Flash and Java applets). A DNS TXT record can
> create a whitelist of servers whose scripts can connect. Also, the
> TCPConnection should possibly be allowed to connect to local network
> resources, after a security warning - but only if the server has a
> proper HTTPS certificate.

How would a DNS TXT record solve the problem? I could register evil.com,
point it at an arbitrary IP address, and claim that anyone who wants to
can connect.

> >> It's more harmful because an img tag (to my knowledge) cannot be used
> >> to brute-force access, whereas a socket connection could. With the
> >> focus on DDOS, it is important to remember that these sockets will
> >> enable full read/write access to arbitrary services, whereas existing
> >> methods can only write once per connection and generally not do
> >> anything useful with the response.
>
> > What do you mean by brute-force access, and how could the proposed
> > protocol be used to do it? Can you provide an example?
>
> With the security measures I suggest above, there is no need for
> protection against brute-force attacks. Most developers only use one
> server per site, and those that have multiple servers will certainly
> be able to add a TXT record to the DNS.

I don't actually understand which part of the specification you want to
change, aside from doing the access control in a DNS TXT record instead of
in the protocol.

-Michael Carter
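For readers weighing the TXT-record idea, here is a hypothetical sketch of what such a check might look like; the `tcp-allow=` record format is invented for illustration, as nothing concrete was specified in the thread. The comment at the end restates the objection: whoever controls a domain's DNS also controls its TXT records, so the record is only trustworthy if the lookup is tied to the zone of the host actually being protected.

```python
# Hypothetical whitelist check for the DNS TXT record proposal.
# The "tcp-allow=" format is invented for this sketch.

def parse_whitelist(txt_record):
    """Extract the set of script origins allowed to connect."""
    prefix = "tcp-allow="
    if not txt_record.startswith(prefix):
        return set()
    return set(txt_record[len(prefix):].split())

def connection_allowed(script_origin, target_host, txt_record):
    """Same-origin connections are always allowed; cross-origin
    connections require the target's TXT record to list the origin."""
    if script_origin == target_host:
        return True
    return script_origin in parse_whitelist(txt_record)

# The objection from the thread: an attacker who registers evil.com
# controls evil.com's DNS, so they can point evil.com's A record at a
# victim's IP address while publishing a TXT record that allows anyone
# to connect. A TXT lookup keyed on the target *hostname* therefore
# returns attacker-controlled data while the TCP connection goes to
# the victim.
record = "tcp-allow=app.example.com static.example.com"
```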
Received on Thursday, 19 June 2008 15:32:13 UTC