Re: Questions on FloC result & experimental methodology

Hi Deepak,

Thank you for your openness and transparency.

We raised this issue 45 days ago
(https://github.com/google/ads-privacy/issues/34). You answered after last
Tuesday's meeting by sending this link, which has been available for a
while:
https://github.com/google/ads-privacy/blob/master/proposals/FLoC/Floc-live-experiments.md.


I actually quoted that very link in the email I sent to this group a month
ago ("discussion on FLoC performance?"), because I found it insufficient
for understanding where the 95% figure came from and what exactly it
represents. We can surely write down an extensive list of things that
would be of interest, but I have to say that we still don't have a proper
analysis to build it from: nothing that could be peer reviewed, nothing
that would allow someone to reproduce the experiment, nothing that even
describes the actual experiment.

That's what we all need here:
*an analytical paper describing the experiment, its scope, and its
results, in enough detail to allow reproducibility.*
It would certainly help everyone understand how representative this number
is for the advertising ecosystem (does it cover 1% of use cases? 10%?). It
could also serve as a reference for scholars, marketers, and journalists
who, as per your comment, have misrepresented the test and its results.

Best
Arnaud


Received on Friday, 19 March 2021 16:59:12 UTC