- From: SULLIVAN, BRYAN L <bs3131@att.com>
- Date: Fri, 4 May 2012 18:22:12 +0000
- To: "ifette@google.com" <ifette@google.com>, Rigo Wenning <rigo@w3.org>
- CC: "public-tracking@w3.org" <public-tracking@w3.org>, Nicholas Doty <npdoty@w3.org>
- Message-ID: <59A39E87EA9F964A836299497B686C350FEA55C4@WABOTH9MSGUSR8D.ITServices.sbc.com>
I agree that a file at a well-known location should not be assumed to be in the critical path of access to a website, i.e. something that must be retrieved and parsed before interacting with the site. I view this proposal only as I believe it was originally suggested: a way for users who want to know more about a site's policy to access that information, along with any personalized information about the site's awareness of their choices and its compliance with them. I don't know whether we even need a specific schema for the file (there has been pushback on that in the group), but that could be future work.

Thanks,
Bryan Sullivan

From: Ian Fette (イアンフェッティ) [mailto:ifette@google.com]
Sent: Friday, May 04, 2012 11:13 AM
To: Rigo Wenning
Cc: public-tracking@w3.org; Nicholas Doty
Subject: Re: ACTION-172: Write up more detailed list of use cases for origin/origin exceptions

I think a policy is stored at a well-known location for those who care to fetch it and read up on a site's policy. I do not imagine that it's something a browser should ever fetch as a (blocking) part of browsing to a website. If we're imagining that a browser fetches a policy file on every navigation, that would be a failure IMO.

As for "Ok, this is slower, but this is unknown territory for both": it is not slower in the case where only * exists. You either have a site-wide exception or you don't, and this is trivial for the browser to message to the site.

I maintain that your suggestion is quite a bit more complex. Web-wide exception handling is already a breeze; that's not the problem. The problem is complex UI around which third parties are granted exceptions tied to a particular site, and telling a site the status of the exceptions for third parties on that site.

-Ian

On Fri, May 4, 2012 at 10:51 AM, Rigo Wenning <rigo@w3.org> wrote:
On Friday 04 May 2012 09:53:10 Ian Fette wrote:
> In your example, it doesn't mean *, it means P1, P2, P3, which has all the
> drawbacks, e.g. I (as a browser) can't tell a website whether it has
> exceptions for the sites it cares about before it starts delivering
> content, without using polling and introducing 1xRTT. I have no idea that
> P1, P2, P3 actually correspond to * because the next time the user hits
> the site, as you say, there could be a new 4th party P4. So, the only
> thing I can tell the site is "There exist some third parties on your site
> that have exceptions", which is not entirely useless for the site, but
> probably doesn't suffice for what I perceive to be the common case.

You have exactly the same issue for "same_party". Why aren't you complaining there? A well-known location (WKL) always means +1xRTT; that's why I want the communication in the header. Vincent already suggested listing those third parties in the WKL file.

Your issue is a protocol issue. If a site wants to block unless tracking is allowed, it will send a corresponding response header on the GET request carrying DNT:1. Ok, this is slower, but this is unknown territory for both. Once it is known territory, things can be very fast. If we do not have that response header, we should create one.

I think in another email you rightly raise the question of where user preferences are stored. I don't want to stand in the way of innovative services, but I would only use a DNT tool that stores its preferences locally (in the browser). You know that the site has added P4 when you've parsed the page and have no preference for P4 in your store.

If "*" means "I as a browser do not care at all, because it's * anyway", I see that it can become simple. Either you open the floodgates or you don't. But does this also override web-wide exceptions?

My suggestion is just a bit more complex but makes web-wide exception handling a breeze: you just monitor where the requests are going and match against your preferences database. If a request goes to an unknown site, you need to take action. This allows for very quick fetching of known things and less quick fetching of unknown things.
And this is exactly how you would behave in real life on unknown terrain: prudent and slower in still-unexplored areas, fast in already-explored areas. Note that I'm not a programmer!

Rigo
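[The browser-side flow Rigo describes — keep exception preferences in a local store, match each third-party request against it, and take a slower action (such as fetching the site's WKL file) only for unknown parties — could be sketched roughly as follows. The origins, decision labels, and the `/.well-known/dnt-policy` path are illustrative assumptions by the editor, not anything defined by the working group:]

```python
# Rough sketch of the matching loop described in the thread above.
# All names here (preference labels, example origins, the
# "/.well-known/dnt-policy" path) are hypothetical illustrations,
# not part of any spec.

WELL_KNOWN_PATH = "/.well-known/dnt-policy"  # hypothetical location


def classify(origin: str, prefs: dict) -> str:
    """Return what the local preference store already knows about an origin."""
    if prefs.get("*") == "allow":  # site-wide "*" exception: floodgates open
        return "allow"
    return prefs.get(origin, "unknown")  # known party (P1, P2, ...) or a new P4


def handle_request(origin: str, prefs: dict) -> str:
    """Fast path for known origins; slower path (+1 RTT) for unknown ones."""
    decision = classify(origin, prefs)
    if decision == "unknown":
        # Unknown territory: the browser would take action here, e.g.
        # fetch the site's policy file at the well-known location.
        return "fetch-policy:" + origin + WELL_KNOWN_PATH
    return decision


prefs = {"p1.example": "allow", "p2.example": "deny"}
```

[Under a site-wide exception (`prefs = {"*": "allow"}`) every party short-circuits to the fast path, which matches Ian's point that the `*`-only case is not slower; only a genuinely new party like P4 triggers the slower lookup.]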
Received on Friday, 4 May 2012 18:23:25 UTC