Re: wording for the privacy section

On Wed, 12 Nov 2008, Alissa Cooper wrote:
> 
> The point of having privacy-protective defaults is that even when sites 
> use them in place of more granular preferences, they still provide more 
> information about the user's preferences than having no rules at all.

Defaults tell you _nothing_ about the user.

And your argument above is predicated on the sites using the defaults at 
all, which they won't, as I covered in my earlier e-mail. We have ample 
experience with this. Ignoring it is not productive.


> How does the existence of a default imply that users can't set their own 
> privacy settings?

They can, but they'll either be ignored, because sites will find that most 
users don't change them even when their preferences don't match the 
defaults, and so will offer site-specific preferences as part of the site 
itself [1]; or, they'll cause bugs on sites, because sites will 
unintentionally rely on the original defaults, and when the values change 
their code will act using untested codepaths [2].


[1] example: sites offer themes and allow users to pick colours and 
fonts within the site itself, instead of using the preferences the user 
has already set in the browser.

[2] example: change your colour preferences to white-on-black. Half the 
Web will turn all-white.
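To put footnote [2] in API terms rather than CSS terms: the failure mode is code whose non-default branch nobody ever runs. Everything in this sketch is invented for illustration; `retransmissionAllowed` is not a field in any real API.

```javascript
// Hypothetical sketch of relying on a default: if the shipped default
// for the (invented) retransmissionAllowed flag is true, the else
// branch below is an untested codepath until a browser or a user
// changes the default -- at which point the site misbehaves.
function handlePosition(position, sendToAdServer, showOptOutNotice) {
  if (position.retransmissionAllowed !== false) {
    // The only branch exercised during development and testing.
    return sendToAdServer(position);
  }
  // Rarely or never tested: runs only for non-default preferences.
  return showOptOutNotice();
}
```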


> > I'd have no problem with Apple sharing my location with Google, but I 
> > wouldn't want it sharing my location with the site run by a political 
> > candidate.
> 
> This is the point of having a ruleset reference, so you can express this 
> sort of granular preference.

With all due respect, if you seriously expect users to express these 
preferences explicitly, then you need to do more user testing of UI.

I assure you that most browser vendors will never provide that level of 
user interface.


> So if a user like you who is fine with disclosure to Google but not to a 
> politician has to rely only on a privacy policy that talks about 
> third-party disclosure in vague terms in order to decide whether to 
> disclose his location to the site, how will he do it?

The same way I decide whether to trust a site with my credit card details.


> > > rulesetReference is a URI.
> > 
> > How is this URL supposed to be picked? I mean, how would the user's 
> > preferences be uploaded somewhere that a script could access? How is 
> > the script supposed to access this URL? How will the data at this URL 
> > be protected from other users?
> 
> One example would be if my network provider allows me to set privacy 
> rules around my location information that it makes available on the 
> network.

So this API now requires browser vendors to work with ISPs world wide?

That's not really going to work. We have enough trouble getting ISPs to 
fix their DNS servers to not be trivially spoofable, let alone getting 
them to add new features like this that most of their users will never 
even know exist (nor care about).


> > > Here's a very simple example of what might exist at a 
> > > rulesetReference URI:
> > > 
> > > <?xml version="1.0" encoding="UTF-8"?>
> > > <ruleset xmlns="urn:ietf:params:xml:ns:common-policy">
> > >     <rule id="f3g44r1">
> > >         <conditions>
> > >             <identity>
> > >                 <many>
> > >                     <except domain="adserver1.com"/>
> > >                     <except domain="adserver2.com"/>
> > >                 </many>
> > >             </identity>
> > >             <validity>
> > >                 <from>2009-08-15T10:20:00.000-05:00</from>
> > >                 <until>2009-09-15T10:20:00.000-05:00</until>
> > >             </validity>
> > >         </conditions>
> > >         <actions/>
> > >         <transformations/>
> > >     </rule>
> > > </ruleset>
> > 
> > With all due respect, authors aren't going to make head or tails of 
> > this data. In fact, having Web authors _attempt_ to use data in a 
> > format this complicated will likely turn them off dealing with privacy 
> > issues for years, leaving them with the impression that privacy is 
> > hard and complicated and not worth the effort.
> 
> On the one hand, you claim that the majority of developers out there 
> already do a great job of protecting privacy and they care a lot about 
> it.

Right. They know that they should protect their users' credit card 
details, etc.


> On the other hand, you claim that none of them will be willing to think 
> about following a simple set of user privacy preferences, and that the 
> mere thought of it will cause them to renounce their concern for privacy 
> altogether. Which is it?

Your definition of "simple" is at wide variance with mine.

My two positions here aren't mutually exclusive. Authors by and large care 
about their users' wishes, but that doesn't mean they will have the 
slightest interest in dealing with such ridiculously complicated XML 
documents as the above.

I stand by my comment that exposing authors to the above will actually 
*reduce* the level to which they care about privacy. If the goal is to 
make authors care about privacy, we should absolutely not expose them to 
this kind of heinous markup.


> Plus, if developers are looking for a middle ground, they can pretty 
> easily use the defaults.

How does this differ from ignoring this API altogether?

Surely having sites use the defaults when the user has set a preference 
that contradicts the defaults is just going to make users think that the 
feature is broken?

You can't have it both ways.

Either the feature helps the user, and the users and authors both use it, 
or the feature doesn't help the user, and they both ignore it. We can't 
help the user if only the user or only the author uses it.


> The script would dereference the rulesetReference, parse the privacy 
> rules there, and apply them just the same way the rules above are 
> applied. What the script code would actually look like largely depends 
> on what the rules say and the scope of the rules that the script is 
> willing to enforce.

Could you provide a working example of actual JavaScript (not pseudo-code) 
that would handle a typical range of rules that we could expect the user 
to set? This is, after all, what you are expecting authors to do.
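For scale, even a drastically simplified sketch is not small. Everything below is invented for illustration: it handles only the single ruleset shape in the example above (with a naive regex rather than a real XML parser), and the rule semantics are my own guess, not anything from a Geopriv library or spec.

```javascript
// Hypothetical sketch only: parses just the one ruleset shape shown
// earlier and invents its own interpretation of the rules.

function parseRuleset(xml) {
  // Domains listed in <except domain="..."/> elements.
  const excluded = [...xml.matchAll(/<except domain="([^"]+)"\/>/g)]
    .map(m => m[1]);
  // Optional validity window.
  const from = (xml.match(/<from>([^<]+)<\/from>/) || [])[1];
  const until = (xml.match(/<until>([^<]+)<\/until>/) || [])[1];
  return { excluded, from, until };
}

// Invented interpretation: retransmission to `domain` is allowed only
// inside the validity window, and only if the domain is not excluded.
function mayRetransmit(rules, domain, now = new Date()) {
  if (rules.from && now < new Date(rules.from)) return false;
  if (rules.until && now > new Date(rules.until)) return false;
  return !rules.excluded.includes(domain);
}
```

And even this toy version ignores multiple <rule> elements, <actions>, <transformations>, error handling, and fetching the URI in the first place.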


> > On Nov 5, 2008, at 1:01 AM, Ian Hickson wrote:
> > > If the user allows retransmission of his location, the pizza place 
> > > sends it to the ad server.
> > 
> > With all due respect, I think this underestimates the power of greed. 
> > If browsers default to "false" for this value, which seems advisable 
> > if we were to have a default, then sites will just ignore the setting 
> > and send the user's location out anyway, possibly with a site-level 
> > opt-out.
> 
> See my next email to Angel for a discussion on why privacy rules are 
> helpful even when they are ignored.
> 
> As a side note, I think it would take a pretty gutsy site to acknowledge 
> its purposeful disregard of the user's preference by offering an 
> opt-out.

This is exactly what's going on now for cookies:

   http://www.doubleclick.com/privacy/opting_out.aspx

Why would it be different for geographic data?


> But if it did, the presence of the opt-out would be a perfect basis for 
> users to make the case that the site is not respecting users' explicit 
> preferences.

The vast majority of people don't seem to really care in the case of 
DoubleClick cookies; why would they suddenly care in the case of 
geographic data? Consider that with the ad cookie issue, we have had 
literally a decade of privacy-conscious users screaming about this. It's 
not like there's a big conspiracy to hide this information from users. Yet 
the vast majority of users still don't actually care.


> > Who are these non-malicious developers who would ignore privacy 
> > normally but would _not_ ignore it if we included this feature? I 
> > would be very surprised if there were any significant number of such 
> > people.
> 
> Let's pick two easy examples (although I think there are many). Facebook 
> is a company that has put a lot of work and thought into designing its 
> services in a privacy-protective way. And yet their first implementation 
> of the Beacon incurred such tremendous backlash that they eventually 
> made significant changes to its design so that it was more in line with 
> users' preferences and expectations. What if the purchase data used in 
> the Beacon system had originally been accompanied by a rule that 
> expressed "don't post this to my profile without asking" or something 
> similar? If this had been the case, I would venture to guess that 
> Facebook would not have trudged ahead with revealing purchase 
> information without first obtaining consent. Instead, they guessed or 
> assumed people's preferences and found out, quite publicly, how wrong 
> they were. The intent was not malicious.

I see no reason to believe that the suggestion of including Geopriv rules 
along with the geographic data would prevent an analogous situation. If 
Facebook implements a mechanism to "show my friends my location", and the 
rules somehow say "don't show my location on my profile", then Facebook is 
going to ignore the rules, on the (usually correct!) assumption that the 
rules don't reflect the user's actual preference.


> The AOL search logs release in 2006 is another example. Had those search 
> logs been accompanied by rules that said something along the lines of, 
> "don't post my search logs publicly," or even "ask me before releasing 
> my logs to researchers," maybe AOL would have thought twice about doing 
> those things. AOL developers spend a lot of time thinking about privacy, 
> and I don't think the search logs release was motivated by malicious 
> intent. But if AOL had received> a clear statement of user preferences 
> ahead of time, I find it hard to believe that they would have gone ahead 
> with the release anyway.

Not at all. AOL went way out of its way to "anonymise" the data. They 
thought they _had_ thought about privacy and thought that they had done 
everything to protect their users. They were wrong, but Geopriv-like 
information wouldn't have made them any more aware of this.


Both of these examples are ridiculous for a further reason: they imply that 
Geopriv in actual use would include such fine-grained rules, which is 
simply not going to happen, because (a) user agents wouldn't be able to 
produce UI that actually exposes such rules, (b) the rules aren't going to 
be used by the site since the files describing them are so insanely 
complex, and (c) the users wouldn't change the rules even if the previous 
two statements weren't a problem.


> John and I (and many others) do not believe that the consumer privacy 
> experience on the Web thus far has been a good one.

I think that this is something you'd have to convince browser vendors of 
_before_ attempting to convince them to change APIs and UI they expose. 

What you're doing now is trying to convince people to build a bridge out 
of iron instead of wood, before the builders have been convinced that the 
river needs a bridge in the first place.


> Study after study has shown that privacy policies are too long and 
> complicated for people to understand.

Why do you think that the UI that Geopriv would necessitate would be any 
better?


> When people see the words "Privacy Policy" at the bottom of a web page, 
> they falsely believe that their privacy is being protected.

Why do you think that exposing UI claiming to represent user preferences 
on privacy would not be at least as misleading?


> [...] we can move the ball forward by making user privacy preferences 
> explicit and putting the onus on web developers to respect those 
> preferences or face potential consequences.

I do not think Geopriv would work to do this. Authors would, rightly, be 
able to point out that the exposed preferences don't reflect the user's 
preferences. Exposing the rules XML files would IMHO _hurt_ the cause of 
improving the protection of privacy on the Web, as discussed above.


> I'm wondering what the value is of having this API be a W3C standard. 
> The approach of the UA developers seems to be to apply the W3C 
> imprimatur to a minimal set of functionalities that are already in 
> existence across the browsers. Couldn't the UAs just get together and 
> agree on the interface they want and publish it, without going through 
> the W3C process?

The UAs getting together and agreeing on an interface and publishing it 
_is_ the W3C process.


> Why can't the standards process support the progress of ideas to the 
> point where they are in wide use, rather than being limited to waiting 
> until they're already accepted and then rubber-stamping them?

For two reasons: first, because designing APIs and languages in a vacuum 
results in very poor APIs and languages as compared to basing APIs and 
languages on implementation experience. Second, because designing specs 
without buy-in from the vendors just results in those specs being ignored.

-- 
Ian Hickson               U+1047E                )\._.,--....,'``.    fL
http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'

Received on Friday, 14 November 2008 00:24:22 UTC