Re: Extension methods & XMLHttpRequest

On Jun 11, 2006, at 1:06 PM, Jamie Lokier wrote:
> Roy T. Fielding wrote:
>> What browsers need to do is obey the specs.  TRACE is not a problem.
>> If the browser sends already-non-secure cookies on a TRACE request,
>> then the response is going to contain those cookies.
>
> Eh?  What makes a cookie already-non-secure, as opposed to secure  
> cookies?

All cookies are non-secure.  Using them for security purposes (like
access control) is just begging for security holes.

> The reasons given for blocking TRACE are to protect against a certain
> type of cross-site scripting attack.  This is the example:
>
> Site A is a wiki/blog which does not adequately sanitise content
> posted to it.  There is no shortage of these.
>
> A user logs on to site A, where a script maliciously posted loads into
> the user's browser and starts doing things it shouldn't.  One of those
> things is to send a TRACE to site A, look at the cookies returned
> (cookies used to login to the site), and post those cookies to another
> page on site A.
>
> The hacker who posted the script can then retrieve the user's cookies,
> and use them to impersonate the user at other times, if the cookies
> are used as login credentials, which is of course common.

Yes, the world is full of people who don't care about security.
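
For concreteness, the quoted scenario boils down to something like the
following sketch (hypothetical code; the posting endpoint is invented,
and it assumes a browser that allows TRACE through XMLHttpRequest and
attaches the site's cookies to it):

    // Hypothetical payload injected into site A's pages.
    const xhr = new XMLHttpRequest();
    xhr.open("TRACE", "/", true);            // same-origin TRACE to site A
    xhr.onreadystatechange = () => {
      if (xhr.readyState !== 4) return;
      // TRACE echoes the request back, including the Cookie header the
      // browser added on its own.
      const echoed = xhr.responseText;
      // Exfiltrate the echoed headers by posting them somewhere the
      // attacker can read them later (endpoint name is made up).
      const post = new XMLHttpRequest();
      post.open("POST", "/post-comment", true);
      post.setRequestHeader("Content-Type",
                            "application/x-www-form-urlencoded");
      post.send("comment=" + encodeURIComponent(echoed));
    };
    xhr.send();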

> Your suggestion seems to be that the browser should allow TRACE but
> not post cookies with the TRACE request.  There are two approaches to
> that:
>
>     1. The browser does not post cookies with _any_ XMLHttpRequest.
>        That's very unhelpful - they are useful.

Sure, they are useful for poorly designed sites that expect to receive
GET and POST requests.  They might even make sense for PUT and DELETE
and a few other methods.  But arbitrary methods whose semantics the
browser has no clue about?  Why would any client software want to
send supposed "security credentials" like cookies on a method without
knowing its semantics?
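
To illustrate the concern (a hypothetical sketch; the method name is
made up): a same-origin script can name any token as the method, and a
browser that applies no method policy will still attach the Cookie
header automatically, handing out "credentials" on a request whose
semantics it cannot judge.

    // Hypothetical request with a method the browser knows nothing about.
    const xhr = new XMLHttpRequest();
    xhr.open("FROBNICATE", "/some/resource", true);  // invented method
    xhr.send();                                      // cookies go along anyway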

>     2. The browser has a special rule for TRACE.

Yes.

> The third approach, which I suspect you have in mind, is that site A
> should properly sanitise whatever is posted to it.  If site A doesn't
> do that well enough, it's site A's own security hole.

Yes, but that wasn't my point.  The point is that TRACE is a valuable
method and must be implemented by servers to have value.  The fact
that one browser sends cookie information on arbitrary requests is
not the fault of TRACE -- any new method might have the effect of
storing the headers on a public site, or perhaps broadcasting them
to an open network.  Fix the browser.

>> CONNECT is a protocol switch.  The only real risks for CONNECT are
>> for proxies that are tricked into connecting to the reserved ports
>> of older protocols.
>
> No, there is another risk which is equally serious.
>
> Example: Site A is a dodgy site which someone visits (let's say by
> accident).  Site B is the user's bank.
>
> The user visits site A, which sends a page containing a script which
> uses XMLHttpRequest to send a CONNECT request.  The request is
> processed by a proxy between the user and site A, and the proxy
> processes it by connecting to port 443 on site A, successfully.
>
> The user (shocked at the pictures) quickly closes their window, then
> later connects to site B.
>
> The browser then sends requests to site B *using the same connection
> to their proxy*.  Persistent HTTP does this.

No it doesn't.  The CONNECT request to a proxy terminates the HTTP
part of the connection -- it is no longer a proxy connection but a
tunnel.  If the browser sends later HTTP requests to that tunnel
then it is sending them *knowingly* to site A and the browser is
obviously broken.  Easy to fix, no need to change the protocol.

> This assumes the browser did not recognise XMLHttpRequest + CONNECT as
> different from other request methods.

That isn't possible -- the syntax of the CONNECT request is different
from that of other requests.  Any client that didn't understand it
would send an invalid message that the proxy would reject.
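
At the wire level the difference is plain.  Here is a rough sketch of
the handshake a client performs against a proxy (the proxy host and
ports are invented); note that the request-target is an authority
(host:port) rather than a path, and that once the proxy answers 2xx the
connection is an opaque tunnel that must never be reused for ordinary
proxied requests:

    import * as net from "net";

    // Sketch of a CONNECT handshake against a hypothetical proxy.
    const sock = net.connect(8080, "proxy.example", () => {
      // Authority-form request-target, unlike the path or absolute-URI
      // forms used by GET, POST, and the rest.
      sock.write(
        "CONNECT www.example.com:443 HTTP/1.1\r\n" +
        "Host: www.example.com:443\r\n\r\n"
      );
    });

    sock.once("data", (reply: Buffer) => {
      if (/^HTTP\/1\.[01] 2\d\d/.test(reply.toString())) {
        // From here on the socket is a tunnel to www.example.com:443,
        // not an HTTP connection to the proxy; later proxied requests
        // must go on a fresh connection.
      }
      sock.end();
    });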

> This means the browser sends subsequent requests for site B to site A.
> Site A can proxy them (silently) to site B, and thus accomplishes a
> stealth phishing attack.  Nothing shows in the URL bar; only sniffing
> the network would reveal the requests are hijacked.
>
> There's a couple of obvious ways to prevent this:
>
>     1. Don't allow CONNECT with XMLHttpRequest.
>
> or
>
>     2. Ensure the browser treats CONNECT differently than other
>        methods with XMLHttpRequest: by marking the connection as
>        non-reusable.

Yes, that is how it was designed by Ari Luotonen.

>> All HTTP methods fall into safe/unsafe categories.  It is absolutely
>> necessary that the browser distinguish between safe requests
>> (hyperlinks) and unsafe requests (a la buttons).  That is a browser
> GUI issue and, for scripting, requires a user ack prior to sending the
>> unsafe request.  By default, all unknown methods should be considered
>> unsafe until the user configures otherwise.  There is no need for the
>> *protocols* to define whitelists when the developers can do so on
>> their own, based on their own applications, and according to their
>> own user needs.
>
> I agree with you, that the browser GUI should be the place where
> ultimately whitelist/blacklist/whatever policy is configured.  This is
> no different than, for example, whether a script is allowed to access
> local files, or do cross-domain requests.
>
> However I think it makes good sense for the XMLHttpRequest
> recommendation to _recommend_ a common policy.

A browser represents its user on the Internet.  Any browser that sends
a method that it does not understand, without explicit consent from
the user, is entirely broken.  That is what the recommendation should
say.  Methods are designed to be extensible and modular, not safe.
A browser can implement new methods by allowing the user to install
extensions that understand those methods.  A browser should never
execute arbitrary, unapproved methods, for the same reasons that a
browser should not execute arbitrary code.
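
In code terms the policy amounts to no more than a check like the
following before anything goes on the wire (a sketch with invented
names; the known-safe set comes from HTTP's own definitions, and
anything else needs the user's prior approval or configuration):

    // Sketch of a browser-side method policy (names are invented).
    const KNOWN_SAFE = new Set(["GET", "HEAD"]);   // safe per HTTP itself

    function maySendWithoutAsking(method: string,
                                  userApproved: Set<string>): boolean {
      const m = method.toUpperCase();
      return KNOWN_SAFE.has(m) || userApproved.has(m);
    }

    // e.g. maySendWithoutAsking("FROBNICATE", new Set()) === false, so the
    // browser asks the user (or refuses) before sending such a request.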

> In those terms, a whitelist makes sense.  So something like this:
>
>     Implementations which conform to this recommendation support at
>     minimum GET and POST requests to the same domain without user
>     intervention.  Cookies, Accept etc. will be sent with these
>     requests as with ordinary page requests.  (Script authors can
>     depend on this).

No, POST requests are inherently unsafe.  They cannot be made without
user intervention -- doing so would violate all the other Web specs.

>     Implementations MAY allow other request methods, MAY allow
>     cross-domain requests, MAY require user intervention to enable any
>     of these things, and MAY apply implementation-defined restrictions
>     on what combinations are allowed.  However, implementations do not
>     have to allow any of these things to conform to this
>     recommendation.

That is a waste of space.  The spec should say why methods exist and
that only known safe methods can be used without user intervention.
(Intervention includes such things as specific configuration prior
to running the application, not just pop-up boxes.)
That is what HTTP and HTML already require.  What it should not do
is list a small set of methods and say implementations MUST (NOT)
implement them -- that is none of your business and simply sets up
the implementers to be fooled by unexpected extensions.

....Roy

Received on Monday, 12 June 2006 05:52:17 UTC