Re: cross-site tracking and what it means

I think I'm missing it. Where in that discussion is a consensus on 
baking legal precautions into the standard?

On 1/20/12 5:15 PM, Jonathan Mayer wrote:
> On Jan 20, 2012, at 4:08 PM, David Wainberg wrote:
>> On 1/20/12 1:19 PM, Jonathan Mayer wrote:
>>>>> There was consensus at Santa Clara that the outsourcing exception 
>>>>> requires both legal and technical precautions.  There was not 
>>>>> consensus about what those precautions are against, either 1) 
>>>>> collecting data that could be correlated across first parties, or 
>>>>> 2) commingling data across first parties.  David's draft text 
>>>>> proposes the former rule (which de facto encapsulates the latter 
>>>>> rule), and I am in support.
>>>> I do not recall consensus on this. Wasn't there dissent regarding 
>>>> the feasibility of such precautions? My view is that legal 
>>>> requirements in this technical standard are not workable.
>>> See the minutes from the first day of Santa Clara.  Shane's proposal 
>>> was a MUST on legal precautions and a SHOULD on technical 
>>> precautions.  My proposal was a MUST on legal precautions and a MUST 
>>> on technical precautions - including origin-scoped data.  The group 
>>> compromised with a MUST on legal precautions and a MUST on technical 
>>> precautions, with a non-normative suggestion of origin-scoped data. 
>>>  It was a great example of consensus-building through compromise.  I 
>>> hope Brussels will follow suit.
>> Can you point me to this in the minutes? I don't recall a consensus 
>> on baking legal precautions into the standard.
> Discussion of ISSUE-73.
>>>>> I believe there is consensus on how the most common widget use 
>>>>> cases should turn out.  There appeared to be consensus on the list 
>>>>> and in calls to apply a user expectations test to borderline 
>>>>> cases, but that consensus may no longer exist.
>>>> No, I don't think we had consensus that a "user expectations" test 
>>>> should be used. A user expectations standard is absolutely 
>>>> unworkable for companies trying to implement. From my conversations 
>>>> with others in industry, the one thing almost everyone says they 
>>>> want out of DNT is clarity. The ambiguity of a user expectations 
>>>> test would, in my view, be a disaster.
>>> My text on widgets in late October included an objective user 
>>> expectations component.  Tom's subsequent text included a subjective 
>>> user intent component.  In the lengthy discussions of both texts - 
>>> in calls, on the list, and in person - I cannot recall any objection 
>>> to the reliance on user expectations.  That said, I recognize that 
>>> many in the group are now uncomfortable with a user expectations 
>>> approach, and that's why I noted that to the extent there was 
>>> consensus earlier, it "may no longer exist."
>>> As for a user expectations test being "absolutely unworkable" and 
>>> lacking "clarity" - I thoroughly disagree, and despite protracted 
>>> grousing from many in the group, I have yet to see use cases to the 
>>> contrary.  I'm sure it'll make for lively discussion in Brussels.
>> It's one thing to use our notions of user expectations to guide our 
>> development of the spec. That's fine. However, it's something else to 
>> bake it into the spec as a standard that must be met. And regardless 
>> of any consensus in this group, my point as someone who actually may 
>> have to implement this at a company is that it's way too vague.

Received on Friday, 20 January 2012 22:56:47 UTC