[whatwg] TCPConnection feedback

On Fri, Jun 20, 2008 at 1:19 PM, Frode Børli <frode at seria.no> wrote:
> I think this will be a far better solution than opening a second
> communication channel to the server (ref my other posts).

Would that be so much of a problem though? Your web application
already opens multiple connections today, so you cannot be sure that
requests belonging to the same session all arrive on the same HTTP
connection. If there is a proxy in between, I'd imagine you can't even
be sure that requests on the same connection belong to the same user.

The HTTP handshake would most likely send the cookies though, so it
shouldn't be that difficult to associate your persistent connection
with an existing session.
If you don't want to rely on cookies, you could always build your own
session tracking mechanism on top of this protocol using JS; a rough
sketch follows below.
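
For illustration, here's a minimal TypeScript sketch of such
application-level session tracking; the endpoint, storage key and
message format are all made up for the example:

    // Sketch: JS-level session tracking over a persistent connection.
    // ws://example.com/session and the { type, token } envelope are
    // invented for this example.
    const socket = new WebSocket("ws://example.com/session");

    socket.onopen = () => {
      // Reuse a token across page loads instead of relying on cookies.
      // (Math.random is not cryptographically strong; fine for a sketch.)
      let token = localStorage.getItem("sessionToken");
      if (token === null) {
        token = Math.random().toString(36).slice(2);
        localStorage.setItem("sessionToken", token);
      }
      // The first frame identifies the session to the server.
      socket.send(JSON.stringify({ type: "auth", token }));
    };

    socket.onmessage = (event) => {
      // Everything after the auth frame is session-scoped traffic.
      console.log("server:", event.data);
    };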

I think if persistent connections are supported broadly, this could
also be the start of a new design pattern, where you use HTTP only to
serve static resources and do all the dynamic work through a single
persistent connection (see the sketch below). Such applications
wouldn't need any session information outside the connection at all.
Of course it would be up to the web author how he designs his
application. It's just a thought.
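
As a rough illustration of "all dynamic stuff over a single
connection", here's a TypeScript sketch that multiplexes
request/response pairs over one WebSocket; the { id, method, params }
envelope is entirely my own invention:

    // Sketch: multiplex many logical requests over one connection.
    let nextId = 0;
    const pending = new Map<number, (result: unknown) => void>();

    function call(socket: WebSocket, method: string,
                  params: unknown): Promise<unknown> {
      return new Promise((resolve) => {
        const id = nextId++;
        pending.set(id, resolve);
        socket.send(JSON.stringify({ id, method, params }));
      });
    }

    function attach(socket: WebSocket): void {
      socket.onmessage = (event) => {
        // Route each reply back to the caller that issued it.
        const { id, result } = JSON.parse(event.data);
        pending.get(id)?.(result);
        pending.delete(id);
      };
    }

With something like this, the server never needs a session cookie: the
connection itself is the session.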

However, if it's still desired, maybe we could add it as an additional
option in HTML instead of HTTP?
Idea: add an additional HTML element. If it is present, the browser
will not close the connection after it has downloaded the document,
but instead send an OPTIONS <uri> Upgrade: ... request and give the
page's scripts access to a default WebSocket object that represents
this connection.
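
Just as a guess at what that handshake could look like on the wire
(none of this is specified anywhere; the header set below is only a
placeholder):

    OPTIONS /current-page-uri HTTP/1.1
    Host: example.com
    Upgrade: WebSocket
    Connection: Upgrade

The page's scripts would then find the upgraded connection exposed as
the default WebSocket object mentioned above.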

On Fri, Jun 20, 2008 at 10:26 AM, Shannon <shannon at arc.net.au> wrote:
> I don't understand your point. Existing services use firewalls,
> authentication, host-allow, etc as appropriate. The only new issue
> TCPConnection or WebConnection introduces is the concept of a
> "non-user-initiated connection". In other words, a remote untrusted server
> causing the local machine to make a connection without an explicit user
> action (such as checking mail in Outlook). I believe the proposed DNS
> extension combined with some form of explicit user-initiated privilege
> elevation reduces the two main threats: DDoS and browser-based brute-force
> attacks.

On Fri, Jun 20, 2008 at 1:52 PM, Frode Børli <frode at seria.no> wrote:
> I have a proposal for a cross domain security framework that i think
> should be implemented in browsers, java applets, flash applets and
> more.

If we use any kind of verification that happens outside the
connection, we should at least include a hint inside the connection
indicating which host the browser wants to connect to. I think raw TCP
connections would cause major security holes with shared hosts, even
if cross-domain connections require a DNS record.
Consider the following scenario:

Bob and Eve have bought space on a run-of-the-mill XAMPP web hoster.
They have different domains but happen to share the same IP. Now Eve
wants to brute-force the password of Bob's password-protected web
application. So she adds a script to her relatively popular site that
does the following:

1) Open a TCP connection to her own domain on port 80. As far as the
browser is concerned, both origin and IP address match Eve's own site,
so no cross-domain checks are performed.

2) Forge HTTP requests on that connection with Bob's site in the Host
header and brute-force the password (see the sketch below). The
browser can't do anything about this, because it treats the TCP
connection as opaque. The firewall just sees a legitimate HTTP request
on a legitimate port.
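
Such a forged request might look roughly like this (hostnames, path
and credentials invented for the example):

    POST /login HTTP/1.1
    Host: bobs-site.example
    Content-Type: application/x-www-form-urlencoded
    Content-Length: 29

    user=admin&password=guess0001

From the shared host's point of view this is indistinguishable from a
legitimate request to Bob's virtual host.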

If Bob checks his logs, he will see requests from IPs all over the
world. Because Eve has control over the Referer and
Access-Control-Origin headers, there is no trace that leads back to
her. She might even have sent the headers with forged values to shift
suspicion to another site.

Note that the web hoster doesn't need to support WebSocket on the
server side in any way for this to work.
We could strengthen the security requirements so that even same-domain
requests need permission. However, then we'd have roughly the same
hole as soon as the web host updates its services and gives Eve
permission to access "her own" site.

>> Access-Control: allow <*> exclude <evilsite.example.com>
>> Access-Control-Max-Age: 86400
>> Access-Control-Policy-Path: /
>>
>
> I think the idea is good, but I do not like the exact implementation.
> I think the server should be able to see which script is initiating
> the connection (header sent from the client), and then the server can
> respond with an HTTP 401 Access Denied. No need to specify anything
> more. No need to specify which hosts are allowed, since the
> script can decide that based on a number of parameters (like IP
> address etc).

(Sorry for the late reply, I had a few technical problems)

I think the Access-Control-Origin header is exactly what you are
looking for: <http://www.w3.org/TR/access-control/#access-control-origin0>
This would be sent by the client as a part of the "Access Control checks".
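
As a hedged example (hostnames invented), the handshake request would
then carry something like:

    GET /socket HTTP/1.1
    Host: bobs-site.example
    Access-Control-Origin: http://eves-site.example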

The idea of the client-side verification was that the server should
need as little processing per request as possible. In case of a DDoS,
it could more or less "blindly" send back the headers without any need
to first look up the client's origin in a black/whitelist. In extreme
cases, it wouldn't need to parse the client's request at all.
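
To make that concrete, here's a minimal TypeScript/Node sketch of a
server that answers every connection with a fixed, pre-built policy
before parsing anything; the header values are taken from the quoted
example above, the rest is assumed:

    import * as net from "net";

    // Pre-built response: under load the server never has to inspect
    // the request or consult a black/whitelist before answering.
    const policy =
      "HTTP/1.1 200 OK\r\n" +
      "Access-Control: allow <*> exclude <evilsite.example.com>\r\n" +
      "Access-Control-Max-Age: 86400\r\n" +
      "Access-Control-Policy-Path: /\r\n" +
      "\r\n";

    net.createServer((socket) => {
      socket.write(policy); // respond "blindly"
      socket.end();
    }).listen(8080);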

Because both Access-Control-Origin and "Access-Control: allow ..."
would be used, web authors could choose between the two strategies and
decide which one suits their setup best.
