
Re: ACTION-154: policy decision / enforcement points

From: Jon Ferraiolo <jferrai@us.ibm.com>
Date: Fri, 25 Jan 2008 10:07:12 -0800
To: Thomas Roessler <tlr@w3.org>
Cc: public-appformats@w3.org, public-appformats-request@w3.org
Message-ID: <OF4DE02988.3B5C1B0D-ON882573DB.00629870-882573DB.00638968@us.ibm.com>

Thomas,
Great job here!

My opinion is that there are indeed server-based approaches that are better
than the current client-side approach because of their simplicity,
flexibility, and security characteristics.

Of the server-based approaches you outlined below, here are my opinions:

> - Discover whether the server knows of cross-site request
>   authorization mechanisms, through...
>
>   * OPTIONS, [3] or
>   * a metadata file at a well-known location (P3P-like)

This one is OK, but OPTIONS requires server developers to learn how to do
something that's a bit obscure, and it sounds like some developers might
have difficulty getting permission to put a metadata file at the desired
"well-known location" (I am assuming the well-known location would be
something like /crossdomain.xml).
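
To make the discovery idea concrete, here is a rough Python sketch. Note that the file name (/crossdomain.xml, by analogy with Flash) and the one-origin-per-line policy format are purely illustrative; no such format has been agreed on:

```python
# Hypothetical sketch: before issuing a cross-site GET, the client fetches a
# policy file from a well-known location on the target host and checks
# whether the requesting origin is allowed.  File name and format are
# invented for illustration only.

from urllib.parse import urlparse


def policy_url(target_uri):
    """Derive the assumed well-known policy location (/crossdomain.xml,
    borrowed from Flash as a stand-in) for a target URI."""
    parts = urlparse(target_uri)
    return "%s://%s/crossdomain.xml" % (parts.scheme, parts.netloc)


def origin_allowed(policy_text, requesting_origin):
    """Toy policy interpreter: one allowed origin per line, '*' means any.
    A real format would have to be specified by the working group."""
    for line in policy_text.splitlines():
        entry = line.strip()
        if entry == "*" or entry == requesting_origin:
            return True
    return False
```

The point of the sketch is only that the policy file lives entirely on the server, and a legacy server that has no such file simply never opts in.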

> - Design cross-site requests so legacy servers won't do anything
>   interesting when they are hit with them.  Whatever information is
>   required by the target host is then sent along with the cross-site
>   requests.
>
>   Possibilities include:
>
>   * use a strange content-type for POST and for responses, and don't
>     include any "ambient" authentication information; JSONRequest
>     takes this approach; [2]
>   * use new HTTP methods (CSGET, CSPOST, ...)

My favorite of all is the "strange content-type" approach, but we would
need to make sure that existing legacy servers actually check the content
type; if they don't, then we would need not only a "strange content type"
but also "strange content" (i.e., force the content to be something that
will fail against legacy servers).
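
A minimal sketch of what a new-style server's check might look like (the media type application/jsonrequest is the one JSONRequest uses; the body-shape test is my own addition to illustrate the "strange content" point):

```python
# Sketch of the "strange content-type" idea: a cooperating server only
# accepts cross-site POSTs that arrive with the unusual media type, and the
# payload itself is kept deliberately unpalatable to legacy form handlers,
# so servers that ignore Content-Type still fail to parse it.

STRANGE_TYPE = "application/jsonrequest"  # the type JSONRequest uses


def accepts_cross_site_post(content_type, body):
    """Return True only if both the media type and the payload shape are
    the new 'strange' kind; anything form-like is refused."""
    if content_type != STRANGE_TYPE:
        return False
    # A legacy form decoder expects key=value pairs; requiring a
    # JSON-looking body makes the content itself "strange" as well.
    stripped = body.strip()
    return stripped.startswith("{") or stripped.startswith("[")
```

A legacy endpoint expecting application/x-www-form-urlencoded would either reject the type outright or choke on the body, which is exactly the failure mode we want.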

I would expect that the new HTTP method approach would result in years of
argument.

> - Explicitly ask the server for authorization.  Tyler proposed a
>   model like this in [4], using a well-known-location-like design
>   pattern.  Using OPTIONS with a Referer-Root header is another
>   possibility to the same end.

Same comment as for the first bullet: this approach is OK, but OPTIONS
requires server developers to learn how to do something that's a bit
obscure, and it sounds like some developers might have difficulty getting
permission to put a metadata file at the desired "well-known location".
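
For concreteness, here is a rough sketch of the OPTIONS-plus-Referer-Root variant. The Referer-Root header comes from the proposals under discussion, but the response header I use for the server's answer ("Allow-Cross-Site") is invented; nothing of the sort has been agreed on:

```python
# Hypothetical preflight: the client first sends OPTIONS with a
# Referer-Root header naming its own origin, and only proceeds with the
# real cross-site request if the server's answer opts in.  The response
# header name "Allow-Cross-Site" is made up for illustration.

def build_preflight(path, host, referer_root):
    """Assemble the raw OPTIONS request the client would send first."""
    return (
        "OPTIONS %s HTTP/1.1\r\n"
        "Host: %s\r\n"
        "Referer-Root: %s\r\n"
        "\r\n" % (path, host, referer_root)
    )


def server_opts_in(response_headers):
    """A legacy server never emits the (hypothetical) opt-in header, so
    its absence means 'no' -- the safe default."""
    return response_headers.get("Allow-Cross-Site", "").lower() == "true"
```

The attraction is that the policy decision stays entirely server-side; the client only learns a yes/no answer and otherwise sticks to its same-origin behavior.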

Jon

public-appformats-request@w3.org wrote on 01/25/2008 06:19:57 AM:

> Per ACTION-154, I'm supposed to elaborate on possible proposals for a
> "server-side" enforcement point. Much of what is in this message is
> based on earlier material from Mark Nottingham, Tyler Close, and Doug
> Crockford.
>
> (This is to get some more clarity on ISSUE-20, Client and Server
> model.)
>
> There are two basic cases that we are concerned with: GET and not
> GET.  In the case of GET, the goal is to control the information
> flow from the data source to a Web application running from another
> origin.  In the case of other requests, the goal is to put controls
> on the control flow from the web app to a server-side application
> running from another origin, and on the information flow back.
>
>
> For GET, we're assuming that whatever other technology is "hosting"
> the "access-control" mechanism imposes a same-origin-like
> restriction on the data flow, and we assume that this restriction is
> part of the design assumptions that existing Web applications make.
> (In fact, this restriction is a critical part of current defense
> techniques against XSRF.)
>
> For non-GET methods, we're assuming that whatever other technology
> is "hosting" the "access-control" mechanism imposses a same-origin
> like restriction on applications' ability to send non-GET requests
> over the network.  We assume that it's worthwhile to protect
> server-side applications against unexpected cross-origin requests of
> this kind.
>
> In other words, if a server doesn't know about new cross-origin
> authorization mechanisms, then its environment shouldn't be changed
> by whatever mechanism we propose.
>
> Here are some design sketches:
>
> - Discover whether the server knows of cross-site request
>   authorization mechanisms, through...
>
>   * OPTIONS, [3] or
>   * a metadata file at a well-known location (P3P-like)
>
>   If the server is found to support the mechanism, use GET and/or
>   POST with Referer-Root for cross-site requests, and let the server
>   figure out whether to serve data, as Mark had sketched in [1]. For
>   this scheme to work properly with HTTP caches, the server must set
>   an appropriate Vary header on responses to requests that can be
>   cached (GET), and the cache must know how to deal with it.
>
>   In this model, the policy is never shared with the client, and
>   remains a local affair on the server.
>
>   The model does require the server to have a local convention for
>   policy authoring and an engine to interpret these policies.
>
>   Using metadata stored in a well-known location will reduce the
>   per-request overhead.
>
> - Design cross-site requests so legacy servers won't do anything
>   interesting when they are hit with them.  Whatever information is
>   required by the target host is then sent along with the cross-site
>   requests.
>
>   Possibilities include:
>
>   * use a strange content-type for POST and for responses, and don't
>     include any "ambient" authentication information; JSONRequest
>     takes this approach; [2]
>   * use new HTTP methods (CSGET, CSPOST, ...)
>
>   For the server side, same as above.
>
>   In this model, no policy is shared with the client, and there is
>   no overhead in terms of discovering what the server is capable of.
>
> - Explicitly ask the server for authorization.  Tyler proposed a
>   model like this in [4], using a well-known-location-like design
>   pattern.  Using OPTIONS with a Referer-Root header is another
>   possibility to the same end.
>
>   Once more, the policy doesn't need to be shared with the client,
>   and the complexity is isolated to the server side.
>
>
> One point in common to almost all of these models is that there is
> some rudimentary enforcement going on on the client side: The client
> learns about the server's abilities or decisions, and will then
> either stick to its old same-origin policy, or not.
>
> In the "use new HTTP methods" model, that enforcement is replaced by
> the client sending a distinct kind of request.
>
>
> The real distinction (and the decision that this group needs to make
> and document!) between these models and the one that is in the
> current spec is where the policy is *evaluated* - either, that
> happens on the client (and there needs to be an agreed policy
> specification, which is what this document started out being).  Or,
> it happens on the server, in which case policy authoring is a purely
> server-local affair.
>
>
> In this context, it's worth noting (as Hixie pointed out, e.g., in [5])
> that it is possible to deploy the currently spec'ed technique in a
> way that mostly imitates the "server-side" model: just send "allow
> *" (and appropriate Vary headers), and leave the rest to the server.
>
>
> I'd suggest that, as we go forward with this issue, people start
> elaborating on the benefits (and downsides) of the various models,
> compared to what's currently in the spec, if possible in terms of
> the use cases and requirements that we have now.
>
> Also, if you think there are additional use cases and requirements
> that are missing, it's probably worth calling these out, explicitly.
>
>
> 1. http://lists.w3.org/Archives/Public/public-appformats/2008Jan/0118.html
> 2. http://www.json.org/JSONRequest.html
> 3. http://www.w3.org/mid/C7B67062D31B9E459128006BAAD0DC3D10BA65C7@G6W0269.americas.hpqcorp.net
> 4. http://www.w3.org/mid/C7B67062D31B9E459128006BAAD0DC3D10C4E3B3@G6W0269.americas.hpqcorp.net
> 5. http://lists.w3.org/Archives/Public/public-appformats/2008Jan/0186.html
>
> --
> Thomas Roessler, W3C   <tlr@w3.org>
> Per ACTION-154, I'm supposed to elaborate on possible proposals for
> a "server-side" enforcement point. Much of what is in this message
> is based on earlier material from Mark Nottingham, Tyler Close, and
> Doug Crockford.
>
> I'd like to start out with the general assumptions that I'll make:
>
> 1. There is some surrounding spec that currently implements a same
> origin policy with the properties that (a) requests different from
> GET can only be sent to same-origin URIs, and (b) prevents whatever
> capabilities this specification makes available from acting upon
> information that might be retrieved from non-same-origin URIs
> (through GET, presumably).
>
> 2. There are Web applications out there that rely in one way or the
> other on the properties of that same-origin policy.  We consider
> these Web applications to be worth our protection, i.e., existing
> Web Applications should not be exposed to POST (and other non-GET)
> requests from a different origin, unless there is an explicit opt-in
> to that. Also, information retrieved from GET should only be
> communicated to applications from other origins if there's an explicit opt-in.
>
> In other words, we assume that the *current* environment is one in
> which the server side does not know anything about cross-site
> requests, and in which the client side prevents these requests from
> happening.  Policy decisions are fully made on the client side, even
> though that policy is very simple ("no").
>
> If we want to authorize certain requests (and assume that there is a
> design principle to move as much as possible over to the service
> side), then either of two things needs to happen in order to fulfill
> requirement 2:
>
> 1. Cross-site requests are sent in a way that simply causes legacy
> applications to reject the request.  Design choices include strange
> content types (JSONRequest) or different HTTP methods.
>
> 2. The client switches
>
> --
> Thomas Roessler, W3C   <tlr@w3.org>
Received on Friday, 25 January 2008 18:09:15 UTC