[whatwg] TCPConnection feedback

> I think a major problem with raw TCP connections is that they would be
> the nightmare of every administrator. If web pages could use every
> sort of homebrew protocol on all possible ports, how could you still
> sensibly configure a firewall without the danger of accidentally
> disabling grandma Mary Sue's web application?

This already happens. Just yesterday we (an ISP) had a company unable to 
access webmail on port 81 due to an overzealous firewall administrator. 
But how is a web server on port 81 any less safe than one on port 80? 
It isn't the port that matters; it's the applications that may (or may 
not) be using it that need to be controlled. Port-based blocking of 
whole networks is a fairly naive approach today. Consider that the 
main reason for these "nazi firewalls" is two-fold:
1.) to prevent unauthorised/unproductive activities (at schools, 
libraries or workplaces); and
2.) to prevent viruses connecting out.

Port-blocking no longer achieves either goal, since:
1.) even without plugins a "Web 2.0" browser provides any number of 
games, chat sites and other 'time-wasters'; and
2.) free (or compromised) web hosting can provide viruses with update 
and control mechanisms without creating suspicion by using uncommon 
ports; and
3.) proxies exist (commercial and free) to tunnel any type of traffic 
over port 80 (see the CONNECT exchange sketched below).
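
For example, any proxy that implements the standard HTTP CONNECT 
method (intended for TLS) will happily tunnel arbitrary traffic; the 
hostname below is illustrative:

    CONNECT irc.example.net:6667 HTTP/1.1
    Host: irc.example.net:6667

    HTTP/1.1 200 Connection established

    ...arbitrary bytes then flow in both directions...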

On the other hand, port control interferes with legitimate services 
(like running multiple web servers on a single IP, distinguished by 
port). Network admins can do what they want, but calling the policy 
of blocking non-standard ports "sensible" and then basing standards 
on it is another matter. It's pretty obvious that port-based 
firewalling will be made obsolete by protocol sniffing and IP/DNS 
black/whitelists sooner rather than later.
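
To make the collateral damage concrete, here is an illustrative 
Apache-style fragment (names and paths invented) showing a second, 
perfectly legitimate HTTP service bound to port 81 - exactly what the 
blanket port-block broke for our customer above:

    # Illustrative httpd.conf fragment: two plain-HTTP services on
    # one IP, the second on a non-standard port. A blanket port-block
    # kills the second even though it is ordinary HTTP.
    Listen 80
    Listen 81
    <VirtualHost *:81>
        ServerName webmail.example.com
        DocumentRoot /var/www/webmail
    </VirtualHost>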

Your argument misses the point anyway. Using your browser as an IRC 
client is no different to downloading mIRC or using a web-based chat 
site. The genie of running "arbitrary services" from a web client 
escaped the bottle years ago with the introduction of JavaScript and 
plugins. We are looking at "browser as a desktop" rather than 
"browser as a reader", and I don't think that will ever be reversed. 
Since we're on the threshold of the "Web Applications" age, and this 
is the Web Applications Working Group, we should be doing everything 
we can to enable those applications while maintaining security. 
Disarming the browser is a valid goal ONLY once we've exhausted the 
possibility of making it safe.

> Also keep in mind the issue list Ian brought up in the other mail.
> Things like URI-based addressing and virtual hosting would not be
> possible with raw TCP. That would make this feature a lot less
> usable for authors that do not have full access over their server,
> like in shared hosting situations, for example.

I fail to see how virtual hosting will work for this anyway. We're 
not talking about Apache/IIS here; we're talking about custom 
applications, scripts or devices - possibly implemented in firmware 
or "a few lines of perl". Adding vhost control to the protocol is 
just silly since the web server will never see the request, and the 
custom application should be able to use any method it likes to 
differentiate its services. Even URI addressing is silly since, 
again, the application may have no concept of "paths" or "queries". 
It is simply a service running on a port. The only valid use case 
for all this added complexity is proxying, but nobody has yet tested 
whether proxies will handle this (short of enabling encryption, and 
even that is untested).
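
To illustrate the kind of endpoint I mean, the entire service might 
be nothing more than the following (sketched in Node-style JavaScript 
purely for illustration; the "few lines of perl" would do just as 
well):

    // A complete "custom application": an echo service on a port.
    // There is no Host: header, no path and no query string for
    // vhost or URI addressing to map onto - just bytes on a socket.
    var net = require('net');
    net.createServer(function (sock) {
      sock.on('data', function (chunk) {
        sock.write(chunk); // echo each chunk straight back
      });
    }).listen(8081);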

I'm thinking here that this proposal is basically rewriting the CGI 
protocol (a web server handing off a managed request to a custom 
script) with the ONLY difference being the asynchronous nature of 
the request. Perhaps more consideration should be given to how the 
CGI/HTTP protocols might be updated to allow async communication.
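
For comparison, the CGI handoff in question is synchronous and 
one-shot; a minimal sketch (Node-style JavaScript again, purely for 
illustration) of a classic CGI script:

    #!/usr/bin/env node
    // Classic CGI: the web server parses the HTTP request, exports
    // it as environment variables, runs this script once, relays a
    // single response and exits - no persistent, async channel.
    process.stdout.write('Content-Type: text/plain\r\n\r\n');
    process.stdout.write('path: ' + (process.env.PATH_INFO || '/') + '\n');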

Having said that, I still see a very strong use case for low-level 
client-side TCP and UDP. There are ways to manage the security 
risks; they require further investigation. Even if it must be kept 
same-domain, that is better than creating a new protocol that won't 
work with existing services. Even if that sounds like a feature - it 
isn't. There are better ways to handle access control for 
non-WebConnection devices than sending garbage to the port.
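
As a sketch of what a same-domain, low-level client API could look 
like (the constructor and event names here are my own assumptions 
for illustration, not anything from the draft):

    // Hypothetical same-origin TCP client; TCPConnection, onopen,
    // onread and onclose are assumed names, not a real API.
    var conn = new TCPConnection('example.com', 1234); // own host only
    conn.onopen  = function ()     { conn.send('HELLO\r\n'); };
    conn.onread  = function (data) { handleReply(data); };   // app-defined
    conn.onclose = function ()     { scheduleReconnect(); }; // app-defined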

>> > [If a] protocol is decided on, and it is allowed to connect to any IP-address
>> > - then DDOS attacks can still be performed: If one million web
>> > browsers connect to any port on a single server, it does not matter
>> > which protocol the client tries to communicate with. The server will
>> > still have problems.
>
> Couldn't this already be done today, though? You can already
> connect to an arbitrary server on an arbitrary port using forms,
> <img>, <script src=""> and all other references that cannot be
> cross-domain protected for backwards compatibility reasons. The
> whole hotlinking issue is basically the result of that.
> How would WebSocket connections be more harmful than something like
>
> setInterval(function(){
>   var img = new Image();
>   img.src = "http://victim.example.com/" + generateLongRandomString();
> }, 1000);
>
> for example would?

It's more harmful because an img tag (to my knowledge) cannot be 
used to brute-force access, whereas a socket connection could. With 
the focus on DDOS it is important to remember that these sockets 
would enable full read/write access to arbitrary services, whereas 
existing methods can only write once per connection and generally 
cannot do anything useful with the response.
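
To make the asymmetry concrete, here is a hedged sketch (reusing the 
hypothetical TCPConnection names from above) of the adaptive 
read/write loop a raw socket would allow and an img tag cannot:

    // With read access an attacker can react to each response -
    // here, walking a password list against a POP3-style service.
    // An <img> fires one blind request and can never read the reply.
    var guesses = ['admin', 'password', 'letmein'], i = 0;
    var conn = new TCPConnection('victim.example.com', 110);
    conn.onopen = function () {
      conn.send('USER admin\r\nPASS ' + guesses[i++] + '\r\n');
    };
    conn.onread = function (data) {
      if (/^-ERR/.test(data) && i < guesses.length) {
        conn.send('PASS ' + guesses[i++] + '\r\n'); // adapt and retry
      }
    };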

Shannon