Re: technical issues with multiple first parties

On Mar 17, 2013, at 5:58 PM, Edward W. Felten wrote:

> On the last call, I expressed technical reservations about the proposal to allow multiple first parties on a page.  Peter asked me to elaborate on my concerns in an email to the group.

I don't want to get too far into a theoretical discussion, but I
am confused by what you mean by first-party. If we talk about
"first-party" in terms of the user's intent to interact with a
given service, then we cannot allow or disallow multiple
legal entities to be the recipients of such a user's intent.
What role do we have in overriding the user's intent?

Even if we talk about "first-party" in a purely technical sense
as the recipients of data for the set of IP addresses pointed to
by the set of domains within the cookie scope of the primary page
(the basis for browser decisions regarding so-called
"third-party cookies"), there is no suggestion within the Internet
protocols that such data recipients are all controlled by a single party.
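
To make that concrete, here is a rough sketch (in JavaScript, names
mine) of the kind of domain comparison behind so-called third-party
cookie decisions.  Note that it compares domains, not parties, and
that real browsers consult the Public Suffix List rather than the
naive last-two-labels rule used here.

    // Naive registrable-domain extraction: keep the last two labels.
    // Real browsers use the Public Suffix List (e.g., for "co.uk").
    function registrableDomain(host) {
      const labels = host.split('.');
      return labels.slice(-2).join('.');
    }

    // A request is treated as "third-party" when its host falls
    // outside the cookie scope of the top-level page's host.
    function isThirdParty(requestHost, pageHost) {
      return registrableDomain(requestHost) !== registrableDomain(pageHost);
    }

    isThirdParty('ads.example.net', 'www.example.com');  // true
    isThirdParty('img.example.com', 'www.example.com');  // false

Nothing in that computation says anything about who controls the
receiving servers.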

A page with multiple first parties just means that the data received
via the interactions on that domain might be copied to both parties
and separately controlled by those parties.  It does not reduce each
recipient's adherence to our protocol.  For the EU folks, this is
equivalent to thinking we can allow or disallow joint controllers.
We can't.  Given the category exists, we have to provide a way to
communicate when that category applies and to identify who is in
the set of controllers for the sake of transparency.  Hence, we
now have a list in the TSR for that purpose.
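
For example, a tracking status representation along these lines
(member names as in the editors' draft; the values are invented for
illustration) can name every controller of the data collected at
the site:

    {
      "tracking": "1",
      "controller": [
        "https://www.example.com/privacy",
        "https://partner.example.net/privacy"
      ],
      "same-party": ["example.com", "partner-cdn.example.net"]
    }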

> The core issue is that we would be invalidating some basic technical assumptions that we have been making since very early in the process.   My concern is that those assumptions are "baked in" to the system's design so deeply that undoing them would cause technical problems to pop up.

I believe we've been designing to this case since day 1, though
we found a few bugs along the way.  There may be more.

> One example of an assumption we would be undoing is the assumption that the User Agent (UA) knows who the first party is before it sends an HTTP request.  The exception system says the UA is supposed to send DNT:0 when the user has granted an exception for the first party.  This works fine when the identity of the first party is evident from the URI, because the UA always knows the URI before sending a request.

To be clear, we could not possibly design such a thing.  There is no
mechanism on the Internet to determine party scope, and none within a
browser to determine user intent. The exception mechanism depends
entirely on domain matching, which is not the same thing as party
matching.  Hence, we are matching domains -- ownership is not relevant.
We are relying on the companies that request an exception to be doing
so without deceptive practices, which is a safe assumption given
that the penalty for being deceptive is far greater than simply
not implementing the protocol at all.  DNT is not a security protocol.
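
As an illustrative sketch (the storage shape is hypothetical, not
anything the spec defines), the per-request decision is nothing more
than a cookie-style domain match:

    // Domains for which the user has granted an exception,
    // recorded at the time of consent (hypothetical storage).
    const grantedExceptions = ['examplesite.com', 'news.example.org'];

    // Match the request host against each granted domain: exact
    // match or a subdomain of it.  Ownership never enters into it.
    function dntHeaderFor(requestHost) {
      for (const domain of grantedExceptions) {
        if (requestHost === domain || requestHost.endsWith('.' + domain)) {
          return 'DNT: 0';  // user granted an exception for this domain
        }
      }
      return 'DNT: 1';      // the general preference applies
    }

    dntHeaderFor('www.examplesite.com');  // "DNT: 0"
    dntHeaderFor('tracker.example.net');  // "DNT: 1"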

> Suppose the user clicks a link to http://www.examplesite.com, and the user has previously granted an exception for examplesite.com.  Should the browser send DNT:0 with the request?   If examplesite.com is the only first party, then DNT:0 should be sent.  But if there might be an additional first party, then the UA shouldn't send DNT:0 because it doesn't know who the additional first party might be (and therefore can't know whether the user has granted an exception to the additional first party).   The only way for the UA to figure out whether there is an additional first party is to load the Tracking Status Resource (TSR) from a well-known URI on examplesite.com, and look in the TSR to see if there is another first party, before it can access the URI that the user actually wants.

Presumably, if the UA cared about such things, it would make that
TSR check at the time the exception to send DNT:0 is granted, and
then monitor the TSR for subsequent changes.
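
A sketch of what such a UA might do at grant time -- the well-known
path is the one defined in TPE; everything else here is hypothetical:

    // Fetch the site-wide tracking status resource for a domain.
    async function fetchTSR(domain) {
      const response = await fetch(`https://${domain}/.well-known/dnt/`);
      return response.json();
    }

    // When the exception is granted, record the declared controllers
    // so the UA can notice if that set changes later.
    async function recordControllersAtGrant(domain, store) {
      const tsr = await fetchTSR(domain);
      store.set(domain, tsr.controller || []);
    }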

The exception protocol itself seems clear: if the domain matches one
of the granted exceptions, then send DNT:0.  I don't see why this
has anything to do with who owns the domain or how many entities
might receive data at that domain.

> Because any page *might* have an additional first party, this would appear to require the UA to pre-load the TSR before accessing any URI for which it would otherwise be willing to send DNT:0.  This makes access to sites with exceptions much slower.  (Note that caching the TSR would have limited value here because examplesite.com would have to use a page-specific TSR for the page that has an additional first party, in order to convey the first-party information specific to that page.)

A site that has page-specific TSRs is going to be apparent from the
first access to the site-wide TSR.  Caching has exactly the value
that it is capable of providing: any TSR can be cached, so the only
concern is how many different TSRs have to be retrieved.  The answer
to that is hopefully no more than the number of distinct statuses
the server might have, and exactly as many as the UA wants to review.
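
Put differently, a UA that chooses to retrieve TSRs can memoize them
per URI and pay roughly one fetch per distinct status.  A trivial
sketch (ignoring expiry; normal HTTP caching would handle that):

    // One cache entry per distinct TSR URI, so a site with a single
    // site-wide TSR costs a single retrieval per session.
    const tsrCache = new Map();

    async function cachedTSR(uri) {
      if (!tsrCache.has(uri)) {
        const response = await fetch(uri);
        tsrCache.set(uri, await response.json());
      }
      return tsrCache.get(uri);
    }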

To be clear, there are NO KNOWN PRODUCTS that have any intention of
ever retrieving the TSR.  We have exactly one developer in the WG
who has an interest in potentially using it for visibility tools.
Tracking protection does not depend on it.  Our protocols do not
depend on the user having any awareness whatsoever of the domains,
owners, or number of parties on a page.  The only thing that matters
to anyone who is not a regulator (or browsing with the intent to
sue broken implementations) is that the site indicates compliance
with the protocol. The user will be interacting with exception dialogs
that probably won't even mention domain names.

Yes, performing additional checks for the sake of active verification
of tracking status, before every access to a Web subpage resource,
would slow down the browser.  Since there are precisely zero such
browsers in existence, that seems to be a reasonable trade-off.
When such browsers are created, then the cost will be incurred to the
extent that the user wishes to verify, which is exactly where the
cost should be incurred.

> The need for the UA to load the TSR in order to behave correctly undoes another significant early design decision, which is that loading the TSR would always be optional, in the sense that a UA could comply with the standard even if it never loaded a TSR.   Loading the TSR lets the UA implement useful features, but whether and when to do so has been up to the UA developer.  (This is important for resource-constrained UAs such as some mobile browsers.   It also provides valuable engineering flexibility even for UAs that want to use the TSR, because it lets the UA developers make a case-by-case decision about the cost vs. benefit of accessing the TSR in each specific instance.)

Again, the exceptions are recorded as a list of domain matching rules.
They do not have any correspondence to how many first parties there
might be on a given page.  Hence, the UA doesn't need to load the TSR
(though it might want to when allocating exceptions) and the protocol
behaves correctly without such knowledge.
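
Restated as data (a hypothetical shape, not the spec's storage
model): a site-specific grant is just a pair of domain strings, and
evaluating it never consults party membership:

    // Hypothetical record of site-specific exceptions: the top-level
    // site the user was visiting, and the target domain granted.
    const exceptions = [
      { site: 'examplesite.com', target: 'examplesite.com' },
      { site: 'examplesite.com', target: 'cdn.example.net' }
    ];

    function hasException(pageDomain, requestDomain) {
      return exceptions.some(e =>
        e.site === pageDomain && e.target === requestDomain);
    }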

I think a reasonable concern would be: is a first party allowed to
request an exception for a multiple-first-party domain while the
user is visiting a domain that is not controlled by that same set
of parties?  I would say "no".  As a requirement, I would state that
a given site MUST NOT request exceptions for any other site whose
set of data controllers does not exactly match its own, which is
both fair and verifiable by comparing the TSRs.
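
Verifiable meaning, e.g., that a UA or auditor could fetch both
sites' TSRs and compare their controller lists as sets (assuming a
controller member in the TSR, as above):

    // True when two tracking status resources declare exactly the
    // same set of controllers, regardless of order.
    function sameControllers(tsrA, tsrB) {
      const a = new Set(tsrA.controller || []);
      const b = new Set(tsrB.controller || []);
      return a.size === b.size && [...a].every(c => b.has(c));
    }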

> I haven't done a comprehensive review of how adding extra first parties affects the implementability of the standard, but I fear that a more detailed review would discover more problems.

I would like to encourage more detailed reviews regarding this or
any other aspects of the protocols.  My concern is that there were
a bunch of premature design decisions made early on based on the
original schedule and quite a bit of misunderstanding of how the
Web works.

For example, the dependency on javascript APIs to store and
communicate declarative statements is "odd".  The API we started
with was javascript because its only purpose was to communicate
the DNT status to scripts via the DOM.  I also have to wonder why
we don't just make "same-party" a link relation (and the default when
the eTLD+1 of target matches origin) and let pages add it to the
mark-up directly when they want to expand the same-party scope.
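
A sketch of what that could look like -- note that "same-party" is
not a registered link relation; this is only what I am suggesting:

    // The page would declare its expanded same-party scope in the
    // mark-up directly:
    //
    //   <link rel="same-party" href="https://cdn.example.net/">
    //   <link rel="same-party" href="https://partner.example.com/">
    //
    // A UA or script could then discover the declared scope without
    // any javascript API for storing declarative state:
    function samePartyScope(doc) {
      return Array.from(doc.querySelectorAll('link[rel="same-party"]'))
                  .map(link => link.href);
    }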

> (Of course, these issues are not a problem in the case we have long discussed in which a third-party element on a page acquires first-party status when the user interacts with it.)

Right, that would be the next intentional request.  And we should
be clear that no element on the page ever "acquires first-party status".
The elements on the page are entirely different resources from the
target resource that is requested after a user intentionally
interacts with a widget, just like the target resource of a hypertext
link is entirely different from the word(s) selected on the page.


Cheers,

Roy T. Fielding                     <http://roy.gbiv.com/>
Senior Principal Scientist, Adobe   <https://www.adobe.com/>

Received on Monday, 18 March 2013 23:43:57 UTC