
Re: TPE Handling Out-of-Band Consent (including ISSUE-152)

From: Ronan Heffernan <ronansan@gmail.com>
Date: Mon, 25 Mar 2013 20:14:25 -0400
Message-ID: <CAHyiW9Jtggi1XmokPUA2mY6xNkf_4PwdCun2R0NR4A-nJto+jA@mail.gmail.com>
To: Dan Auerbach <dan@eff.org>
Cc: public-tracking@w3.org
> what's the methodology for diversifying the panel?

I am only a software engineer.  We have a couple of different departments
that come up with experiments, sub-panels, surveys, etc., to answer
questions like that.

> It seems to me that this presents a perfect opportunity for measuring how
> honoring DNT:1 might affect your panel studies,

You're right; that does sound like research that could be useful (but only
if it yields somewhat consistent results for decently large populations).
However, the way to conduct that research and measure the
population-effects of DNT:1 would be to collect the DNT-status of all of
our panelists and then compare the behavior of the DNT:1 users against the
other types of users.  We would probably also have to *try* to divide the
DNT:1 population into "turned-on without knowledge or consent" and
"turned-on deliberately", to see if these two populations exhibited
consistent behavior.  This measurement would not be done by honoring DNT,
but by relying on our OOBC to collect DNT-status along with the rest of the
information.  If the scaling factors are to be used for anything important,
the research would also need to be repeated over time (every two years?) as
devices, User Agents, websites, "tracking" technologies, and cultures
change, to see how the scaling factors change.
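The comparison described above could be sketched roughly as follows. This is a toy illustration only: the field names, DNT categories, and metric are hypothetical, and a real panel pipeline would be far more involved.

```python
from statistics import mean

# Hypothetical panelist records; "dnt" would be collected via OOBC
# alongside the rest of the measurement data (illustrative fields only).
panelists = [
    {"id": 1, "dnt": "off",        "pages_per_day": 42},
    {"id": 2, "dnt": "deliberate", "pages_per_day": 31},
    {"id": 3, "dnt": "default_on", "pages_per_day": 40},
    {"id": 4, "dnt": "off",        "pages_per_day": 38},
    {"id": 5, "dnt": "deliberate", "pages_per_day": 29},
]

def behavior_by_dnt_group(records):
    """Group panelists by DNT status and average a behavioral metric."""
    groups = {}
    for r in records:
        groups.setdefault(r["dnt"], []).append(r["pages_per_day"])
    return {status: mean(vals) for status, vals in groups.items()}

averages = behavior_by_dnt_group(panelists)

# A scaling factor could then relate each DNT group to a baseline group,
# which is roughly what repeating the study over time would re-estimate.
baseline = averages["off"]
scaling = {status: avg / baseline for status, avg in averages.items()}
```

Repeating the study every couple of years, as suggested above, would amount to re-running this grouping on fresh panel data and watching whether the scaling factors drift.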

> If the bias is predictable, you could potentially correct for it in the
> future and gleefully ignore DNT:1 users even when you have OOBC, and still
> get the same results to a high degree of accuracy.

Anything is possible, though I wouldn't bet on that.  This is especially
true because panel-based research (compared to "census" research) can be
used to analyze smaller subsets, by demographics, psychographics, geography
(sometimes down to city or DMA level), etc.  Even if you have a decent
general rule, when you start sub-setting populations the rules are less
trustworthy, and since the panel size is smaller, errors are much more
pronounced.
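The sub-setting point can be made concrete with the usual standard-error formula: for a fixed population variance, the error of a panel estimate grows as 1/sqrt(n), so a correction rule tuned on the full panel gets much noisier on a city-sized slice. A back-of-the-envelope sketch, with assumed numbers, not a description of any actual panel methodology:

```python
import math

def standard_error(sigma, n):
    """Standard error of the mean for a simple random sample of size n."""
    return sigma / math.sqrt(n)

sigma = 10.0        # assumed population standard deviation
full_panel = 10_000
city_subset = 100   # e.g. a single city or DMA slice of the panel

se_full = standard_error(sigma, full_panel)   # 0.1
se_city = standard_error(sigma, city_subset)  # 1.0

# Same sigma, 100x fewer panelists: the error is 10x larger, so any
# DNT-bias correction that looked "predictable" at panel scale is far
# less trustworthy at the subset level.
```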

Fortunately, none of those decisions are the burden of software engineers.
:-)

--ronan


On Mon, Mar 25, 2013 at 7:32 PM, Dan Auerbach <dan@eff.org> wrote:

>  Yes, I think it does take evidence. A few questions that spring to mind
> which seem quite relevant are: what's the methodology for diversifying the
> panel? And how precise is the segmentation? How do you measure panel bias?
> Right now you are receiving DNT:1 headers but presumably not honoring them,
> given that we haven't yet set a standard. It seems to me that this
> presents a perfect opportunity for measuring how honoring DNT:1 might
> affect your panel studies, both for users who have given OOBC, as well as
> for those who haven't and for whom you are obliged to honor DNT. Have you
> conducted experiments along these lines, or do you plan to? If the bias is
> predictable, you could potentially correct for it in the future and
> gleefully ignore DNT:1 users even when you have OOBC, and still get the
> same results to a high degree of accuracy. It seems worth at least looking
> into this possibility, right?
>
> As I've said, I think folks in the group -- myself included -- wouldn't
> want to ask you to do something impossible or needlessly tie your hands.
> But a little bit of hard data from knowledgeable software engineers in
> industry like you goes a long way to help educate the group as to your
> needs, and to move the conversation forward.
>
> Dan
>
>
Received on Tuesday, 26 March 2013 00:15:50 UTC
