- From: <bugzilla@jessica.w3.org>
- Date: Tue, 25 Oct 2011 06:04:32 +0000
- To: public-html-bugzilla@w3.org
http://www.w3.org/Bugs/Public/show_bug.cgi?id=12399

--- Comment #21 from Silvia Pfeiffer <silviapfeiffer1@gmail.com> 2011-10-25 06:04:30 UTC ---

We need to be able to measure all three: the network performance, the decoding pipeline, and the rendering pipeline. Each of these bears different information and results in different consequences/actions.

From a network POV we can only deal with bytes. The decoder gets bytes as input. It's not really possible to count how many frames go into the decoder, because the framing is done as part of decoding IIUC, so counting the decoded bytes lets us know how much the decoder dropped. The decoder can then tell the number of frames it outputs. The renderer deals only with frames.

The proposed metrics in the wiki cover measuring the performance of all three of these steps. Jitter is an aggregate metric that is better calculated from the more detailed information that the other metrics provide, so we should not use Jitter. But the other metrics in the wiki make sense to me and seem sufficiently independent of each other.
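To make the three-stage split concrete, here is a minimal TypeScript sketch. The counter names (bytesReceived, bytesDecoded, framesDecoded, framesPresented) and the snapshot shape are hypothetical stand-ins for the kind of metrics the wiki proposes, not any shipped browser API; the jitter helper illustrates the point above, deriving jitter from detailed per-frame data instead of exposing it as its own metric.

```typescript
// Hypothetical per-stage counters, modelled on the three-way split
// discussed above (network / decoder / renderer). These names are
// illustrative only; they are not a shipped browser API.
interface PlaybackMetrics {
  bytesReceived: number;   // network: bytes delivered to the decoder
  bytesDecoded: number;    // decoder: input bytes actually consumed
  framesDecoded: number;   // decoder: frames produced as output
  framesPresented: number; // renderer: frames actually painted
}

// Derive per-stage indicators from two snapshots taken
// intervalSeconds apart.
function stageReport(a: PlaybackMetrics, b: PlaybackMetrics, intervalSeconds: number) {
  const deliveredBytes = b.bytesReceived - a.bytesReceived;
  const decodedFrames = b.framesDecoded - a.framesDecoded;
  const presentedFrames = b.framesPresented - a.framesPresented;
  return {
    // Network stage: raw throughput in bytes per second.
    networkBytesPerSecond: deliveredBytes / intervalSeconds,
    // Decoder stage: share of delivered bytes actually decoded;
    // a shortfall indicates how much the decoder dropped.
    decodedByteRatio: (b.bytesDecoded - a.bytesDecoded) / Math.max(1, deliveredBytes),
    // Renderer stage: share of decoded frames that reached the screen.
    presentedFrameRatio: presentedFrames / Math.max(1, decodedFrames),
    achievedFps: presentedFrames / intervalSeconds,
  };
}

// Jitter derived from detailed data, as argued above: here, the
// standard deviation of inter-frame presentation intervals, computed
// from hypothetical per-frame timestamps in milliseconds.
function presentationJitterMs(timestampsMs: number[]): number {
  if (timestampsMs.length < 2) return 0;
  const intervals = timestampsMs.slice(1).map((t, i) => t - timestampsMs[i]);
  const mean = intervals.reduce((s, x) => s + x, 0) / intervals.length;
  const variance =
    intervals.reduce((s, x) => s + (x - mean) ** 2, 0) / intervals.length;
  return Math.sqrt(variance);
}
```

Because each stage reports in its own native unit (bytes for the network and decoder input, frames for decoder output and the renderer), the per-stage ratios stay independent of each other, which matches the independence property claimed for the wiki metrics.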
Received on Tuesday, 25 October 2011 06:04:34 UTC