- From: Frode Børli <frode@seria.no>
- Date: Fri, 20 Jun 2008 13:19:52 +0200
>> Rewriting the HTTP spec is not feasible and I'm not even convinced it's a
>> good idea. HTTP has always been request/response so it would make a lot
>> more sense to simply use a new protocol than confuse millions of
>> developers/administrators who thought they understood HTTP.

> As pointed out by others HTTP can perform asynchronously and persistently
> under certain circumstances (i.e. the TLS handshake). Microsoft describe
> the process here: http://msdn.microsoft.com/en-us/library/aa380513(VS.85).aspx

I think this will be a far better solution than opening a second
communication channel to the server (see my other posts).

> Currently CGI has the web server offload a bunch of environment variables
> that the CGI script decodes. What's missing then is a way to pass the whole
> socket off to the script. The FastCGI protocol is closer to the mark.
> Wikipedia says: "Environment information and page requests are sent from the
> web server to the process over a TCP connection (for remote processes) or
> Unix domain sockets (for local processes). Responses are returned from the
> process to the web server over the same connection. The connection may be
> closed at the end of a response, but the web server and the process are left
> standing."

Also, modules linked directly into the server (ISAPI and Apache modules) are
quite common. For CGI (not FastCGI) I assume some inter-process communication
would have to be defined - but I do not think this forum should think about
that. It is inevitable that changes must be made somewhere on the server side
if we want to achieve two-way communication with the same script that
initially generated the page.

> So FastCGI achieves all the main goals for WebSocket (proxy support,
> virtual hosting, SSL support) using existing access control rules and
> configs that ISPs are familiar with. The only thing that is not supported is
> persistent bi-directional communication (at least I have found nothing to
> indicate this). However, based on the description above the only limiting
> factor seems to be an assumption that all links in the chain close the
> connection after the initial server response (making it bi-directional but
> not persistent). It also isn't strictly asynchronous since the client and
> server apparently cannot send/receive simultaneously.

I do not believe that proxies enforce rules about who is able to send data
over the channel, so I do not believe asynchronous communication will be a
problem - even though nothing indicates it is being done today.

> I propose a new protocol called Asynchronous CGI that extends FastCGI to
> support asynchronous and persistent channels rather than the creation of an
> entirely new WebSocket protocol from scratch.

That name implies a server-side protocol, used between the web server and the
web application - so I believe the name should still be something like
WebSocket. On the server side there are multiple alternative interfaces
besides CGI (ISAPI, NSAPI, Apache modules etc.). I believe support for
WebSocket should be an issue between the web browser and the web server, and
we should not consider what happens on the server side. A new working group
would probably have to be created to extend FastCGI, but server-side support
will quickly be developed by those server vendors wanting to support it.
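To make the server-side half of this concrete, here is a minimal sketch of
what such a persistent, two-way channel between the web server and the
application could look like. This is not the FastCGI wire format; the Unix
socket path and the newline-delimited message framing are assumptions of
mine, purely for illustration: a long-running worker answers the initial
request and then keeps the same connection open so that further messages can
flow in both directions.

    # Minimal sketch of a long-running worker that keeps the web server
    # connection open after the initial response. This is NOT the FastCGI
    # wire format: messages are plain newline-delimited text, and the
    # socket path is made up for illustration.
    import os
    import socket

    SOCK_PATH = "/tmp/websocket-worker.sock"   # hypothetical rendezvous point

    def handle(conn):
        reader = conn.makefile("r")
        # The initial "request" handed over by the web server.
        request = reader.readline().strip()
        # Send the initial response, but do NOT close the connection ...
        conn.sendall(("HELLO page generated for: %s\n" % request).encode())
        # ... keep it open instead: every further line can still be answered,
        # and the worker could equally well push data unprompted.
        for line in reader:
            conn.sendall(("ECHO %s" % line).encode())
        conn.close()

    def serve():
        if os.path.exists(SOCK_PATH):
            os.unlink(SOCK_PATH)
        server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        server.bind(SOCK_PATH)
        server.listen(1)
        while True:
            conn, _addr = server.accept()   # one connection per page
            handle(conn)

    if __name__ == "__main__":
        serve()

A real implementation would of course multiplex with select()/poll() or
threads so the worker can push data without first waiting for a message from
the browser; the point is only that nothing on the web-server-to-application
side forces the connection to close after the first response.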
>> If we put the access control in anything but the protocol it means that we
>> are relying on an external service for security, so it would have to be
>> something that is completely locked down. I don't really see what the
>> mechanism would be. Can you propose a method for doing this so as to allow
>> raw tcp connections without security complications?

> I don't understand your point. Existing services use firewalls,
> authentication, host-allow, etc. as appropriate. The only new issue
> TCPConnection or WebConnection introduce is the concept of a
> "non-user-initiated connection". In other words a remote untrusted server
> causing the local machine to make a connection without an explicit user
> action (such as checking mail in Outlook). I believe the proposed DNS
> extension combined with some form of explicit user-initiated privilege
> elevation reduces the two main threats: DDoS and browser-based brute-force
> attacks.

Michael pointed out a problem with my DNS extension proposal: anybody can
register a domain called "evil.com", add the required TXT records and point
the domain at the target server. I think the DNS extension can still be used,
but it has to involve a reverse lookup on the target IP address. The reverse
lookup returns a host name, which must then be queried for TXT records that
specify which hosts are allowed to have scripts connect. (A rough sketch of
such a check follows at the end of this message.) Does anybody have any views
on this?

>> What do you mean by brute-force access, and how could the proposed
>> protocol *snip*

> I have already provided two examples in previous posts but to *snip*

This is not an issue, regardless of protocol - if the reverse DNS suggestion
(above) is used for security - right?

--
Best regards / Med vennlig hilsen
Frode Børli
Seria.no

Mobile: +47 406 16 637
Company: +47 216 90 000
Fax: +47 216 91 000

Think about the environment. Do not print this e-mail unless you really need
to. Tenk miljø. Ikke skriv ut denne e-posten dersom det ikke er nødvendig.
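PS: here is the rough sketch of the reverse-lookup check described above,
purely as an illustration. The TXT record format (a "websocket-allow="
prefix) is my own invention and not specified anywhere, and the TXT query
relies on the third-party dnspython library; only the reverse (PTR) lookup
uses the standard library.

    # Sketch of the reverse-DNS access check. Assumption (not part of any
    # spec): the host name returned by the reverse lookup publishes TXT
    # records such as "websocket-allow=app.example.com" listing the origins
    # whose scripts may open a connection to this IP address.
    import socket

    import dns.exception
    import dns.resolver   # third-party: dnspython >= 2.0 (1.x uses resolver.query)

    def origin_may_connect(target_ip, script_origin):
        # 1. Reverse lookup: which host name does the target IP itself claim?
        try:
            host, _aliases, _addrs = socket.gethostbyaddr(target_ip)
        except (socket.herror, socket.gaierror):
            return False                      # no PTR record -> deny

        # 2. Ask that host name (not the attacker's domain) for TXT records.
        allowed = set()
        try:
            for rdata in dns.resolver.resolve(host, "TXT"):
                txt = b"".join(rdata.strings).decode()
                if txt.startswith("websocket-allow="):
                    allowed.add(txt.split("=", 1)[1].lower())
        except dns.exception.DNSException:
            return False

        # 3. Registering "evil.com" and pointing it at the target no longer
        #    helps, because only the target's own PTR name is consulted.
        return script_origin.lower() in allowed

So origin_may_connect("203.0.113.10", "app.example.com") would only return
True if the PTR name for 203.0.113.10 itself lists app.example.com - the
addresses and names here are documentation examples, of course.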
Received on Friday, 20 June 2008 04:19:52 UTC