Re: Improving CORS security

Thanks for all the feedback!


>> - Enabling static trust of multiple origins by supporting a
>> space-separated list of origins
>
> I would support syntax for this. Having to programmatically generate the
> correct header has always seemed error prone and leads to blind echoing
> which is less secure than just using "*".


Great! I don't have any strong views on what the syntax should be (spaces
vs commas), but I think this one suggestion is the clearest win as it
strongly pushes people towards using a hard-coded whitelist.
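
For illustration, assuming the space-separated variant for the sake of the
example (the hostnames here are made up), a site could then ship a static,
hard-coded response like:

    Access-Control-Allow-Origin: https://app.example.com https://admin.example.com
    Access-Control-Allow-Credentials: true

rather than echoing the Origin header back at runtime.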


>> - Enabling static trust of all subdomains by supporting the use of partial
>> wildcards like https://*.example.com
>
> I have more trepidation here that people will leave themselves too wide
> open, but I cautiously support this as a practical necessity given the
> first. Would such sites allow credentials, or would they be forced to be
> credential-less like '*'?


I think this would need to support credentials for anyone to use it. I
agree that trusting all subdomains isn't really a great idea, but it's a
common use case, and if you enforced a rule like '* must be followed by .'
you could help out the many sites making Zomato's mistake of trusting
literally everything that ends in zomato.com, including notzomato.com.
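
To make that failure mode concrete, here is a rough sketch in Python
(hypothetical code, not anyone's real implementation) of the broken suffix
check next to what a built-in "* must be followed by ." rule would
effectively mean:

    from urllib.parse import urlparse

    def origin_trusted_naive(origin):
        # Broken: "https://notzomato.com" also ends in "zomato.com"
        return origin.endswith("zomato.com")

    def origin_trusted(origin):
        # Roughly what a built-in "https://*.zomato.com" rule would enforce:
        # the '*' must be followed by '.', so only genuine subdomains match
        parsed = urlparse(origin)
        return (parsed.scheme == "https"
                and parsed.hostname is not None
                and parsed.hostname.endswith(".zomato.com"))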


>> Websites accessed over HTTPS can use CORS to grant credentialed access to
>> HTTP origins, which partially nullifies their use of HTTPS. Perhaps
>> browsers' mixed content protection should block such requests, or at least
>> disable allow-credentials for HTTP->HTTPS requests.
>
> I disagree. From the document and user point of view it's not at all mixed
> content.

Maybe mixed content was a poor choice of terminology. I think this
suggestion might have been misunderstood slightly. I'm suggesting that an
application that responds with ACAC: true and ACAO: <some HTTP origin> should
have the ACAC flag ignored. I don't see how this would make upgrading
sites to HTTPS harder, since as Anne said the standard approach is to
upgrade CDNs first and the application afterward, and it's only
applications that care about allowing credentials.
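
In other words, under this suggestion a response from an HTTPS application
along these lines (hypothetical headers, with the HTTP origin standing in
for whatever the server reflects):

    Access-Control-Allow-Origin: http://cdn.example.com
    Access-Control-Allow-Credentials: true

would be treated by the browser as if the allow-credentials header were
absent: uncredentialed requests from that origin would still work, but
credentialed ones would fail the CORS check.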


> If the HTTPS server doesn't want to give up its data to HTTP origins it
> can quite simply not respond with the CORS headers that enable it.

Well obviously, but anyone who wants to trust all their subdomains has to
do dynamic generation based on the Origin header, and virtually none of
them bother to check the protocol being supplied, Google included. You
could just as easily say "HTTPS sites that don't want to wreck their
security shouldn't import scripts over HTTP" but browsers are happy to step
in and block that.
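
As a sketch of what checking the protocol looks like when you do have to
reflect the Origin header dynamically (again hypothetical Python, with the
trusted suffix as a stand-in for whatever rule the site actually wants):

    from urllib.parse import urlparse

    TRUSTED_SUFFIX = ".example.com"  # hypothetical trusted-subdomain rule

    def cors_headers(origin):
        """Build CORS response headers for a dynamically reflected origin,
        refusing credentialed access to plain-HTTP origins."""
        if origin is None:
            return {}
        parsed = urlparse(origin)
        if parsed.scheme != "https":
            # A network attacker can impersonate any http:// origin,
            # so never reflect it with allow-credentials
            return {}
        if parsed.hostname is None or not parsed.hostname.endswith(TRUSTED_SUFFIX):
            return {}
        return {
            "Access-Control-Allow-Origin": origin,
            "Access-Control-Allow-Credentials": "true",
            "Vary": "Origin",
        }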

At present, the browser UI on HTTPS pages that allow credentialed access
from HTTP origins is grossly misleading. The browser indicates that the
page is secured against network attackers, when in fact such an attacker
can trivially gain access to everything the user can see.

Cheers,

James Kettle

On 10 May 2017 at 18:30, Daniel Veditz <dveditz@mozilla.com> wrote:

> On Tue, May 9, 2017 at 8:41 AM, James Kettle <james.kettle@portswigger.net
> > wrote:
>
>> - Enabling static trust of multiple origins by supporting a
>> space-separated list of origins
>>
>
> I would support syntax for this. Having to programmatically generate the
> correct header has always seemed error prone and leads to blind echoing
> which is less secure than just using "*".
>
> - Enabling static trust of all subdomains by supporting the use of partial
>> wildcards like https://*.example.com
>>
>
> I have more trepidation here that people will leave themselves too wide
> open, but I cautiously support this as a practical necessity given the
> first. Would such sites allow credentials, or would they be forced to be
> credential-less like '*' ?
>
> Websites accessed over HTTPS can use CORS to grant credentialed access to
>> HTTP origins, which partially nullifies their use of HTTPS. Perhaps
>> browsers' mixed content protection should block such requests, or at least
>> disable allow-credentials for HTTP->HTTPS requests.
>>
>
> I disagree. From the document and user point of view it's not at all mixed
> content. If the HTTPS server doesn't want to give up its data to HTTP
> origins it can quite simply not respond with the CORS headers that enable
> it. The existing mixed-content blocking behavior leans hard on 3rd-party
> services to implement https because as such they can serve anyone, while
> with http they cannot serve to https documents.
>
> -
> Dan Veditz
>
>
