
RE: issue 23

From: Mike O'Neill <michael.oneill@baycloud.com>
Date: Sat, 1 Apr 2017 22:20:34 +0100
To: "'Roy T. Fielding'" <fielding@gbiv.com>
Cc: <public-tracking@w3.org>
Message-ID: <05e201d2ab2d$cd80e420$6882ac60$@baycloud.com>
My answers inline, Roy.

> -----Original Message-----
> From: Roy T. Fielding [mailto:fielding@gbiv.com]
> Sent: 01 April 2017 20:04
> To: Mike O'Neill <michael.oneill@baycloud.com>
> Cc: public-tracking@w3.org
> Subject: Re: issue 23
> No. Absolutely not.
> You are making an assumption that user agents will be receiving
> the TSR on a regular basis.  In almost all cases, that will not be true.
> The TSR has no role in the consent dialog. The TSR isn't needed to protect
> users.  Normal users don't need any response for tracking protection.
> They just send DNT:1 and deal with consent dialogs when needed.

User agents have to fetch the TSR to see what the tracking status is for a
resource. They also have to fetch it to check whether an origin respects DNT
and has a TSR, and is therefore allowed to call the API. They may also
examine the same-party array "to inform or enable different behaviour that
are claimed to be same-party". All of this is in the existing TPE.
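To make those checks concrete, here is a minimal sketch in Python. The TSR body is hypothetical, and the helper names are my own; the "tracking" and "same-party" member names are taken from the TPE draft. It shows how a user agent might examine a Tracking Status Representation once fetched from the well-known URI /.well-known/dnt/:

```python
import json

# Hypothetical TSR body, as a user agent might receive it after a GET
# on the well-known Tracking Status Resource (/.well-known/dnt/).
tsr_body = """{
  "tracking": "N",
  "same-party": ["example.com", "cdn.example.com"]
}"""

def declares_tracking_status(tsr_json):
    """An origin that serves a parseable TSR with a "tracking" member
    can be treated as one that implements the TPE (and so, in the view
    argued here, as one allowed to call the consent API)."""
    try:
        tsr = json.loads(tsr_json)
    except ValueError:
        return False
    return isinstance(tsr, dict) and "tracking" in tsr

def is_claimed_same_party(tsr_json, origin):
    """Check whether an embedded origin appears in the same-party array,
    i.e. is claimed by the first party to be under its control."""
    tsr = json.loads(tsr_json)
    return origin in tsr.get("same-party", [])
```

A user agent applying the logic above to a third-party subresource could, for example, decline to relax cookie blocking for an origin that neither serves a TSR nor appears in the embedding site's same-party array.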

The requirements in EU law will mean that more applications involve changing
user agent behaviour contingent on server declarations.

> What the TSR supplies is information for regulators and for those who,
> for whatever reason, have tracking visualization going on in an active
> extension. In other words, folks using a special extension that actively
> interferes
> in the normal day-to-day interaction with web sites for the express
> purpose of either reporting on site tracking, creating a whitelist, or
> non-blacklist filtering of third-party requests.

User agents (either browser plus extensions, or browsers alone) are
increasingly taking on the role of privacy guardian for users - third-party
cookie blocking, tracking protection lists (MS and FF flavours),
Content-Security-Policy, embedded enforcement, HSTS, adblockers, privacy
extensions like Privacy Badger - the list goes on and on. Many of these have
to make assumptions about dangers based on flimsy evidence (e.g. does this
subresource place cookies?), but others require the declarative involvement
of servers, e.g. content security.

This will not diminish; in fact, it will become legally required. It is of
course true that user agents do not currently bother to acquire TSRs, P3P
policies and the like, but this will change.

> TSR requests are unlikely to be a common case even if DNT is 100%
> deployed and actively checked by every user. We have carefully designed
> the protocol to enable caching of those responses for much longer
> than a single session.
> Server push was not designed to enhance caching!  What it does is reduce
> latency for non-cached responses by anticipating some requests for which
> a response is unlikely to be in the user agent cache.  For example,
> sites that use generation-based identifiers can trigger a push when
> the client requests a page without using a conditional GET matching
> the current generation.  Also, server push enables user-specific
> responses (notifications) when a user logs-into an authenticated site.

I just mentioned the caching section as a convenient place to bring up
Server Push; it is there to reduce latency, because user agents might want
to modify their behaviour for any subresource.

The Origin Policy API calls for it for exactly the same reason - it is an
origin-wide resource (a manifest) that could affect how any other resource
is processed by the user agent.

I agree the extra round trip is unfortunate, but that is why I suggested
having an option to return it in a header or in content. You argued against
that, and now we only have dynamic access. Server Push is a feature that can
help mitigate that.
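As a sketch of what this involves on the wire (stdlib-only Python, building the frame by hand purely for illustration; a real server would use an h2 implementation rather than constructing frames itself), a PUSH_PROMISE frame per RFC7540 section 6.6 is just the 9-octet frame header, the promised stream identifier, and an HPACK-encoded header block for the synthesized GET of the TSR:

```python
import struct

PUSH_PROMISE = 0x5   # frame type, RFC7540 section 6.6
END_HEADERS  = 0x4   # flag: the header block fragment is complete

def build_push_promise(stream_id, promised_stream_id, header_block):
    """Build an unpadded PUSH_PROMISE frame on an existing stream.

    header_block is an HPACK-encoded fragment for the request the server
    promises to answer, e.g. a GET for /.well-known/dnt/.
    """
    # Payload: 4-octet promised stream id (reserved bit clear) + fragment.
    payload = struct.pack(">I", promised_stream_id & 0x7FFFFFFF) + header_block
    # 9-octet frame header: 24-bit length, 8-bit type, 8-bit flags,
    # 31-bit stream identifier.
    header = struct.pack(">I", len(payload))[1:]
    header += struct.pack(">BBI", PUSH_PROMISE, END_HEADERS,
                          stream_id & 0x7FFFFFFF)
    return header + payload

# b"\x82" is the HPACK indexed representation of ":method: GET"
# (static table entry 2); a real push would encode the full request,
# including :path /.well-known/dnt/.
frame = build_push_promise(1, 2, b"\x82")
```

On receiving such a frame on stream 1, the user agent knows a fresh TSR will arrive on stream 2 and need not issue its own request, which is exactly the extra round trip at issue.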

> Server push would be a complete disaster if protocols were allowed
> to require the sending of bits that the user agent does not need.
> We'd rip it out of h2 if that ever becomes the case.

Origin Policy has a request header for this. We already have DNT, so that
will do. Maybe we would have to define an extension, but I do not think so.

> ....Roy
> > On Apr 1, 2017, at 1:30 AM, Mike O'Neill <michael.oneill@baycloud.com>
> wrote:
> >
> > This is something I realised when thinking about issue 23.
> >
> > The new version of HTTP, HTTP/2 (RFC7540), has a feature called "Server
> > Push" https://tools.ietf.org/html/rfc7540#section-8.2
> >
> > This lets servers pre-emptively send other resources when responding to
> > a request.
> >
> > "This can be useful when the server knows the client will need to have
> > those responses available in order to fully process the response to the
> > original request".
> >
> > Since the user agent may adjust behaviour based on the current Tracking
> > Status Resource we should recommend that servers use Server Push.
> >
> > The following text could be added at the end of 6.4.4 Caching:
> >
> > To ensure that user agents always have the most recent Tracking Status
> > Resource in their cache, servers SHOULD use the Server Push mechanism
> > defined in [RFC7540] whenever a state-changing request may have changed
> > the tracking status.
> >
> > Send a PUSH_PROMISE frame with a minimal request for the TSR, aligning
> > with the request generated in 3.4.2 Process response for Origin Policy.
> > Begin delivering the response to the PUSH_PROMISE request.
Received on Saturday, 1 April 2017 21:21:37 UTC
