
Re: [cors] Review

From: Mark Nottingham <mnot@mnot.net>
Date: Sun, 14 Jun 2009 11:58:31 +1000
Cc: public-webapps@w3.org
Message-Id: <88D4CBAF-35B2-4998-8B1D-A4405C899DB4@mnot.net>
To: Anne van Kesteren <annevk@opera.com>

On 13/06/2009, at 11:08 PM, Anne van Kesteren wrote:

> Hey Mark,
> Thanks a lot for your review, very much appreciated. It's somewhat  
> unfortunate that you raise these substantive issues at such a late  
> stage given that we have shipping implementations at this point. As  
> such I'm not clear whether we can still resolve those in a  
> satisfactory way.

As I said, I have raised substantive issues before,  
and don't believe they were formally addressed (note that I'm using  
Process terminology here). That experience didn't lead me to believe  
that it was worth spending the time to track the specification closely.

> On Fri, 29 May 2009 09:27:46 +0200, Mark Nottingham <mnot@mnot.net>  
> wrote:
>> *** Substantial issues
>> * POST as a "simple" method - POST is listed as a simple method  
>> (i.e.,
>> one not requiring pre-flight) because there are already security  
>> issues
>> that allow an HTML browser to send cross-site POST requests. However,
>> other contexts of use may not have this problem, and future  
>> developments
>> may close that hole. Requiring a pre-flight for POST because it is
>> unsafe is the right thing to do for both of these reasons.
> We decided to do it this way for compatibility with XDomainRequest.  
> POST has further restrictions applied to it though. What exactly do  
> you mean by "other contexts", by the way?
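The distinction being argued over can be sketched concretely. Per the draft, GET, HEAD and POST skip the preflight OPTIONS round trip; Mark's point is that POST, being unsafe, arguably belongs with the methods that trigger one. The helper below is illustrative only (the real spec also constrains Content-Type values for POST):

```python
# "Simple request" test as the draft defines it. POST sits in the
# simple set purely for XDomainRequest compatibility, per Anne.
SIMPLE_METHODS = {"GET", "HEAD", "POST"}
SIMPLE_HEADERS = {"accept", "accept-language", "content-language", "content-type"}

def needs_preflight(method, headers):
    """Return True if a cross-origin request must be preceded by a preflight."""
    if method not in SIMPLE_METHODS:
        return True
    # Any author-set header outside the whitelist also forces a preflight.
    return any(h.lower() not in SIMPLE_HEADERS for h in headers)
```

Moving POST out of `SIMPLE_METHODS` is the one-line change Mark's argument amounts to.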


>> * Field-name verbosity - The defined header field-names are quite  
>> long,
>> and contain misleading (this isn't really access-control  
>> information, at
>> least on requests) and redundant (e.g., "request-method"). Suggest
>> using:  CORS-Allow-Origin, CORS-Maxage, CORS-Allow-Cred,
>> CORS-Allow-Methods, CORS-Allow-Headers, CORS-Method, CORS-Headers.
> It seems unlikely we can change this at this point with several  
> implementations shipping already. Quite unfortunate as your names do  
> seem a lot better.

Are those implementations widely used? Can't they support both for a  
while? I can't imagine that may resources in the wild actually use  
this mechanism yet, since support for it is presumably still only in a  
few UAs.
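Supporting both name sets during a transition, as suggested above, is mechanically trivial. A sketch of a response-header helper doing exactly that; the "CORS-*" names are Mark's proposal, not anything shipped:

```python
# Map from the draft's long header names to the proposed short ones.
LEGACY_TO_PROPOSED = {
    "Access-Control-Allow-Origin": "CORS-Allow-Origin",
    "Access-Control-Max-Age": "CORS-Maxage",
    "Access-Control-Allow-Credentials": "CORS-Allow-Cred",
    "Access-Control-Allow-Methods": "CORS-Allow-Methods",
    "Access-Control-Allow-Headers": "CORS-Allow-Headers",
}

def dual_headers(headers):
    """Emit each legacy CORS header alongside its proposed short form."""
    out = dict(headers)
    for legacy, proposed in LEGACY_TO_PROPOSED.items():
        if legacy in headers:
            out[proposed] = headers[legacy]
    return out
```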

>> * Unnecessary request headers - removing the Access-Control-Request-
>> Method and Access-Control-Request-Headers fields would substantially
>> simplify the design; it would necessitate that the server list all
>> methods and headers that are to be sent cross-origin in the preflight
>> response, but this is not an onerous requirement.
> Content providers wanted the flexibility of not having to list every  
> header in advance. Both so debugging headers and such would not have  
> to be exposed and to reduce the payload.

Which content providers? How much extra payload do you really expect  
this to be?
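The payload question is answerable with arithmetic. Under Mark's counter-proposal the server enumerates everything it allows in the preflight response; a rough estimate of what that enumeration costs in response bytes (the method and header lists are illustrative values, not measurements from any deployment):

```python
def header_bytes(name, values):
    """Wire size of 'Name: v1, v2\r\n' for a comma-separated header."""
    return len(name) + 2 + len(", ".join(values)) + 2

# Extra bytes if the server must list all allowed methods and headers up front.
extra = (
    header_bytes("Access-Control-Allow-Methods", ["GET", "POST", "PUT", "DELETE"])
    + header_bytes("Access-Control-Allow-Headers", ["X-Requested-With", "X-Debug"])
)
```

On these assumed lists the cost is on the order of a hundred bytes per preflight, which is the kind of number the "how much extra payload" question is asking for.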

>> * Preflight cache - As specified, the preflight cache is very complex
>> and hard to understand. Removing the request headers above will help,
>> and would enable a switch from OPTIONS to HEAD for pre-flights, again
>> simplifying the design and allowing the use of a standard HTTP cache,
>> instead of a purpose-built one. Failing that, the material related to
>> the cache desperately needs a rewrite for clarity.
> Could you elaborate on what is not clear? I'm not really sure how to  
> make it better.

Without producing a complete proposal, no.

>> * Request header deletion - The model of giving the server control  
>> over
>> any and all additional request headers tightly couples the origin  
>> to the
>> format of requests. It may be desirable in some cases to add local
>> request headers (e.g., targeted at firewalls or proxies)
>> programmatically, but this would not be possible using this design
>> without coordination with the origin. Instead of a whitelist of  
>> "simple"
>> headers, why not have a blacklist of headers that have to be  
>> explicitly
>> allowed by the server (e.g., Cookie, Authorization)?
> Because blacklists are inherently dangerous?
>> * Response header deletion - Again, deleting all but a pre-defined  
>> list
>> of response headers is too draconian, and seriously limits  
>> extensibility
>> on the Web. Again, why not just a blacklist? What's the attack vector
>> here?
> Implementors did not want a blacklist. The attack vector is the  
> server inadvertently exposing headers it did not want to.

Has this been discussed in depth before? If so, do you have a ref? I  
think it deserves some serious discussion if not.
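The two policies in dispute can be set side by side. The spec exposes only a fixed whitelist of response headers to the caller and drops everything else; Mark asks why a blacklist of sensitive headers would not suffice. Both filters sketched below (the whitelist matches the draft's simple response headers; the blacklist contents are an assumption for illustration):

```python
# The draft's whitelist: only these response headers reach the caller.
SIMPLE_RESPONSE_HEADERS = {"cache-control", "content-language", "content-type",
                           "expires", "last-modified", "pragma"}
# A hypothetical blacklist of headers a server would not want leaked.
SENSITIVE_HEADERS = {"set-cookie", "set-cookie2", "www-authenticate"}

def expose_whitelist(headers):
    """Spec behaviour: drop everything not explicitly whitelisted."""
    return {k: v for k, v in headers.items() if k.lower() in SIMPLE_RESPONSE_HEADERS}

def expose_blacklist(headers):
    """Mark's alternative: drop only known-sensitive headers."""
    return {k: v for k, v in headers.items() if k.lower() not in SENSITIVE_HEADERS}
```

The whitelist silently hides any extension header (the extensibility cost Mark objects to); the blacklist exposes anything not foreseen (the inadvertent-disclosure risk Anne cites).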

>> * Chattiness - The protocol set out here requires a pre-flight request
>> every time a new URL is used; this will force Web sites to tunnel
>> requests for different resources over the same URL for performance/
>> efficiency reasons, and as such is not in tune with the Web
>> architecture. A much more scalable approach would be to define a  
>> "map"
>> of the Web site/origin to define what cross-site requests are allowed
>> where (in the style of robots.txt et al; see also the work being  
>> done on
>> host-meta, XRDS and similar). I made this comment on an older draft a
>> long time ago, and have still not received a satisfactory response.
> See crossdomain.xml. It is a security nightmare. Especially when a  
> single origin is being used for several APIs.

Waving your hands and saying "security" is not a substantial response.
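The "map" idea, in the style of robots.txt or host-meta, means one policy fetch per origin instead of a preflight per URL. The format below is entirely hypothetical; nothing like it was specified. It only illustrates the single-fetch, many-URLs trade-off — and, implicitly, Anne's worry: one shared policy document now governs every API on the origin.

```python
# Hypothetical origin-wide cross-origin policy, keyed by path prefix.
POLICY = {
    "/api/": {"origins": ["https://example.com"], "methods": ["GET", "POST"]},
    "/public/": {"origins": ["*"], "methods": ["GET"]},
}

def allowed(path, origin, method):
    """Check one cross-origin request against the origin-wide map."""
    for prefix, rule in POLICY.items():
        if path.startswith(prefix):
            return (origin in rule["origins"] or "*" in rule["origins"]) \
                and method in rule["methods"]
    return False  # no matching rule: deny
```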

>> * Procedural definition - This specification is defined as a set of
>> procedural instructions for implementations. The advice that "User
>> agents MAY employ any algorithm to implement this specification, so  
>> long
>> as the end result is indistinguishable from the result that would be
>> obtained by the specification's algorithms" is at best fallacious
>> (besides leaving out servers); it sidesteps the question of what's
>> meaningful in determining what is "indistinguishable."  
>> Specifications in
>> this style unnecessarily constrain implementations, are more  
>> difficult
>> to understand (e.g., it's difficult to understand the operation of a
>> protocol mechanism such as a header without stepping through every
>> single algorithm in the spec), and often preclude their reuse for
>> unforeseen purposes.
> Added servers. It's not clear to me how to rewrite the specification  
> in a way that does not leave gaps. If you can find another editor  
> who can do that for us that'd be ok I suppose.

I don't think saying (roughly) "that's the best we can do with limited  
resources" is a substantial response either, but I have a feeling  
it'll be accepted nevertheless :-/

>> * Introduction - "This specification is a building block for other
>> specifications, so-called hosting specifications."  This is an
>> unfortunate term. How about "Cross-Origin Application Specification",
>> "API Specification" or similar? (here and elsewhere)
> Can you explain why it is an unfortunate term? (Things like "host  
> language" seem to be used elsewhere within the W3C. I thought this  
> would be a fine extension.)

Right, I'm just not sure the term translates well when another  
specification refers to this one. "host" is already overused anyway.

>> * Conformance Criteria - "A conformant server is one that..." --> "A
>> conformant resource is one that..."
> I haven't done this yet. Does it still make sense to talk about a  
> server processing model if we do this?

Probably "resource processing model..."

>> * Syntax - all of these headers need to be registered with IANA; see
>> RFC3864. Note that publication as a W3C Rec is enough, but the
>> registration template needs to be in the document.
> They are provisionally registered already. Where is it stated that  
> the template needs to be inside the document?

Sorry, that isn't a formal requirement -- just convention.

>> * Generic Cross-Origin Request Algorithms - for clarity, can this be
>> split up into separate subsections?
> I added spacing instead. Does this work?

My personal preference would be subsections, to make sure they're  
clearly delineated.

Mark Nottingham     http://www.mnot.net/
Received on Sunday, 14 June 2009 01:59:09 UTC
