
Re: Opting in to cookies - proposal

From: Jonas Sicking <jonas@sicking.cc>
Date: Thu, 19 Jun 2008 15:41:54 -0700
Message-ID: <485AE0B2.9000300@sicking.cc>
To: Thomas Roessler <tlr@w3.org>, Jonas Sicking <jonas@sicking.cc>, Web Applications Working Group <public-webapps@w3.org>

Thomas Roessler wrote:
> On 2008-06-16 12:00:29 -0700, Jonas Sicking wrote:
> 
>>> As I said in [1], I think this is pointless.
> 
>>> - Requests without cookies can be sent by the attacker anyway (but
>>>   from a different IP address); there is little to no point in
>>>   having a separate mechanism to authorize these requests coming
>>>   from the client.
> 
>> We unfortunately do have to authorize requests even without sending
>> cookies. This is to protect content behind firewalls. This is
>> unfortunate but I don't see a way around this.
> 
> So, following your argument, why is the authorization mechanism
> that's specified good enough for sites that use IP addresses (or,
> more generally, firewalls) for authorization decisions, but not good
> enough for cookies?

I'm not sure there is such a thing as "good enough", but rather "better" 
or "worse". It's about reducing risk, not eliminating it.

So partly I think it's the best we can do. For data that is protected 
only by a firewall there is no way we can detect that it is private, and 
so no way we can apply extra protection to it.

The other reason is that I think intranet sites are less likely to be 
used in mashups in the first place, so they are less likely to use the 
Access-Control spec at all.

>>> - Any arguments that apply to the authorization mechanism as
>>>   specified now (e.g., that people don't understand what they
>>>   are doing, and will therefore write bad policies) will
>>>   likewise apply to an authorization mechanism that is specific
>>>   to requests with ambient authorization. (Wait, that's where
>>>   we started out with this!)
> 
>> Yes, but that should be a smaller set of people since the people
>> that only want to share public data won't have to worry. So the
>> result should be fewer exploitable sites.
> 
> eh?  People who want to share public data will have to worry about
> writing a policy, and any confusion that's caused by additional
> complexity is going to extend to them.

So it feels like we're talking about different kinds of complexity here. 
Yes, I agree that the spec gets trickier with the additional syntax we 
need to add, but I think the complexity of ensuring that your site is 
not leaking data goes down by more.

To put it another way: I think the number of additional sites that 
become exploitable due to the added complexity of the spec is far 
smaller than the number of sites that are "saved" by not receiving 
cookies.

Additionally, we can try to reduce the spec's complexity by changing the 
syntax. One thing I was thinking of was something like

Access-Control: allow-with-credentials <www.foo.com>

rather than the separate header... Other suggestions welcome.
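To make the comparison concrete, here is a small sketch of how a server might emit the two variants. The `allow-with-credentials` token is only the syntax proposed above in this thread, and the helper name and origin are illustrative, not from any shipped spec:

```python
# Sketch only: "allow-with-credentials" is the syntax proposed in this
# thread, not a finalized spec; the helper and origin are made up.
def add_access_control_headers(headers, allow_credentials=False):
    """Append an Access-Control header to a list of (name, value) pairs.

    With allow_credentials=True, the listed origin would also receive
    responses to requests that carried cookies or other ambient
    credentials; otherwise only credential-less sharing is opted into.
    """
    if allow_credentials:
        headers.append(("Access-Control", "allow-with-credentials <www.foo.com>"))
    else:
        headers.append(("Access-Control", "allow <*>"))
    return headers
```

The appeal of folding the opt-in into the existing header, rather than a separate one, is that a site author sees one policy line instead of two that must be kept consistent.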

> As far as the public data use case is concerned, I'd also note that
> that is precisely the use case in which a proxying approach is least
> problematic, since there are no credentials that would be leaked.
> The value of a browser-based cross-domain approach mostly manifests
> itself for *private* data.

I think a direct connection to the third-party server is very valuable 
even for public data. First of all, it significantly lowers the cost of 
building a mashup. All you need to do is create a few static HTML and JS 
files, put them on your server, and make sure you have the bandwidth to 
serve those static files.

If you have to proxy all the mashed-up data, you additionally have to 
set up CGI scripts that take all incoming requests and make proxied 
outgoing requests. This greatly increases the resources required, both 
in bandwidth and in server capacity.

On top of that you get much higher latency since all data is sent over 
two HTTP connections rather than one.
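A minimal sketch of what that server-side proxy looks like, assuming a `?url=` style endpoint (the function names and the `ALLOWED_UPSTREAMS` list are illustrative; a real proxy must restrict destinations or it becomes an open relay):

```python
# Sketch of the proxying a mashup author needs when direct cross-site
# requests are unavailable. Every client request triggers a *second*
# HTTP request from this server to the third party: double bandwidth,
# double latency. Names here are illustrative, not from the spec.
from urllib.parse import urlsplit
from urllib.request import urlopen

ALLOWED_UPSTREAMS = {"data.example.org"}  # never proxy arbitrary hosts

def build_upstream_url(requested_url):
    """Validate the third-party URL a ?url= query points at."""
    host = urlsplit(requested_url).hostname
    if host not in ALLOWED_UPSTREAMS:
        raise ValueError("refusing to proxy to %r" % host)
    return requested_url

def proxy(requested_url):
    """Fetch the upstream resource and relay its body (hop 2 of 2)."""
    with urlopen(build_upstream_url(requested_url)) as resp:
        return resp.read()
```

With Access-Control the browser talks to the third party directly, and none of this code, bandwidth, or extra hop is needed.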

> So, I'm not convinced that having a separate mechanism to opt in to
> cookies (and other credentials) is really a useful choice here.
> 
>>> So we're now having two levels of authorizations -- some things can
>>> be done from the PI, and some can only be done from headers?
> 
>> Yes. The PI pretty much only makes sense for static files anyway,
>> which usually contain public data.
> 
> The original use case (from the voice browser world) was
> specifically about private data that were accessed based on some
> kind of ambient authorization.

Couldn't they use headers then?

The use case that makes sense to me for the PI is things like XBL 
bindings and XSLT stylesheets.
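For reference, the then-current Access-Control draft expressed the policy for XML resources as a processing instruction, so a stylesheet shared for cross-site use would look roughly like this (the `allow="*"` value and the empty stylesheet body are illustrative):

```xml
<?xml version="1.0"?>
<!-- Processing instruction from the then-current Access-Control draft;
     this form grants read access to any requesting origin. -->
<?access-control allow="*"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- templates elided -->
</xsl:stylesheet>
```

Since such files are typically static and public, the PI fits them well, whereas per-request decisions about credentialed data are more naturally made in headers.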

/ Jonas
Received on Thursday, 19 June 2008 22:42:00 GMT
