- From: <bugzilla@jessica.w3.org>
- Date: Wed, 12 Oct 2011 18:12:19 +0000
- To: public-html-bugzilla@w3.org
http://www.w3.org/Bugs/Public/show_bug.cgi?id=12399

--- Comment #19 from Max Kanat-Alexander <mkanat@google.com> 2011-10-12 18:12:12 UTC ---

To be clear, one of the major things we want to do with the performance data is validate or invalidate experiments.

For example, say we want to turn on a new feature for 1% of all our users and see whether, in the aggregate, it affects their framerate. We would do this by sampling the framerate on the client side (or simply watching the number of dropped frames). We would then send that information back to our servers, where it would be aggregated and tagged as part of this experiment. Finally, we would compare it to our control numbers (the aggregate averages from the other 99%) and check for statistically significant deviation.

When we do this analysis, we want to know very specifically what is causing the changes in the numbers. Are we causing more time to be spent downloading? Are we causing more dropped frames?

It's true that things happening on an individual user's machine may bias the data from that machine, but once we aggregate, a large enough sample size should make that sort of noise insignificant.
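As an illustration of the client-side half of this workflow, here is a minimal sketch of sampling framerate and beaconing it back tagged with an experiment ID. It uses present-day browser APIs (requestAnimationFrame and navigator.sendBeacon, which postdate parts of this 2011 discussion), and the endpoint /perf and the experiment label exp-123 are hypothetical, not anything from the bug.

```ts
// Sketch only: count frames over a fixed window and report average FPS,
// tagged with a (hypothetical) experiment ID so the server can bucket it.
const EXPERIMENT_ID = "exp-123"; // hypothetical experiment tag
const SAMPLE_MS = 10_000;        // sample over a 10-second window

let frames = 0;
const start = performance.now();

function onFrame(now: DOMHighResTimeStamp): void {
  frames++;
  if (now - start < SAMPLE_MS) {
    requestAnimationFrame(onFrame);
  } else {
    // Average frames per second over the sampled window.
    const fps = frames / ((now - start) / 1000);
    // Fire-and-forget upload; aggregation happens server-side.
    navigator.sendBeacon(
      "/perf",
      JSON.stringify({ experiment: EXPERIMENT_ID, fps })
    );
  }
}

requestAnimationFrame(onFrame);
```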
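On the server side, "significant deviation" could be checked with something as simple as a two-sample z-test on the aggregated mean FPS of the 1% experiment group versus the 99% control. This is one standard choice, not necessarily the analysis the comment has in mind, and the sample numbers below are made up.

```ts
// Sketch of a two-sample z-test on aggregated framerate statistics.
interface Sample { mean: number; variance: number; n: number; }

function zScore(experiment: Sample, control: Sample): number {
  // Standard error of the difference of two independent means.
  const se = Math.sqrt(
    experiment.variance / experiment.n + control.variance / control.n
  );
  return (experiment.mean - control.mean) / se;
}

// |z| > 1.96 corresponds to a 5% two-sided significance level; a z-test
// is defensible here because both groups would be very large.
const z = zScore(
  { mean: 57.2, variance: 16.0, n: 120_000 },     // hypothetical 1% group
  { mean: 58.1, variance: 15.5, n: 11_800_000 }   // hypothetical 99% control
);
console.log(`z = ${z.toFixed(2)}, significant: ${Math.abs(z) > 1.96}`);
```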
Received on Wednesday, 12 October 2011 18:12:22 UTC