Re: Intended usage notification


For what it's worth, we implemented this in Gears. The reasoning was
similar to what you described. Our experience with this is that it
does provide a modest benefit for good sites that can add more context
to the permission dialog. The downside is that this feature allows bad
sites to add text that distracts the user's attention from the
actual URL of the site that's asking for permission. Bad sites can add
their own URL to the snippet (in ways that are hard to detect), making
matters even more confusing to users. So, in retrospect, I am not
sure this really improved anything. In any case, as an API
implementer, I would certainly not like to be forced to implement
something like this, hence I don't think this should be in the spec.


On Thu, Mar 26, 2009 at 10:17 PM, Thomson, Martin
<> wrote:
> Trust is not a binary operation on all aspects.
> The thought process goes thus:
> - I trust this site not to lie.
> - This site just asked me if I wanted to be advertised at based on my location: reject.
> - This site just asked me if I wanted to display a map of my vicinity: allow.
> What the current arrangement does is force users to have a reasonably good conceptual model of what is going on in the web page in order to make an informed decision when the prompt is offered.  I don't believe that an average user is capable of building a useful model.
> The current model leads users to think: ``the last time I clicked "reject" the site didn't work.''  This has the effect of training users to blindly click accept.
> I'm merely suggesting a low-cost improvement to this training problem.
> Cheers,
> Martin
>> -----Original Message-----
>> From: Ian Hickson []
>> Sent: Thursday, 26 March 2009 3:06 PM
>> To: Thomson, Martin
>> Cc: Greg Bolsinga; Doug Turner;
>> Subject: RE: Intended usage notification
>> On Thu, 26 Mar 2009, Thomson, Martin wrote:
>> >
>> > This is not intended to be binding, so liars will be free to do that.
>> Then what's the point?
>> The good sites aren't the ones that are going to be a privacy risk for
>> users. The ones that are the problem are the malicious sites that are
>> going to, I dunno, sell the location of rich people using their site to
>> organised thieves. And those are the very sites who will lie.
>> In other words, there are two kinds of sites, and two kinds of prompts:
>>                     Prompts that are honest    Prompts that are lies
>>    Sites that are   The prompt doesn't         Won't happen, since
>>   trustworthy and   matter, since the user     the sites are honest
>> won't do anything   won't be screwed           (by definition)
>> bad with the data   either way
>>   Sites that want   Won't happen, since        The prompt doesn't
>>     to abuse your   the sites are dishonest    matter, since it is
>>     location data   (by definition)            a lie
>> > This establishes a common expectation from users.
>> That's the problem. It leads users to believe a prompt that can just as
>> easily be a lie.
>> It would be the equivalent of teaching users to give their credit
>> cards to random strangers based purely on the excuse the strangers
>> give, instead of training users to look for other clues, such as
>> the reputation of the site, to make their decision.
>> --
>> Ian Hickson               U+1047E                )\._.,--....,'``.    fL
>>                           U+263A                /,   _.. \   _\  ;`._ ,.
>> Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'

Received on Thursday, 26 March 2009 22:35:41 UTC