[Bug 23169] reconsider the jitter video quality metrics again

https://www.w3.org/Bugs/Public/show_bug.cgi?id=23169

--- Comment #4 from Mark Watson <watsonm@netflix.com> ---
Aaron,

What you describe assumes an implementation that drops all late frames except the
first. That's one possible implementation. My understanding is that there are
other implementations in which there can be a run of late frames.

Specifically, I believe there are implementations where frames are accompanied
through the pipeline not by their absolute rendering time but by the
inter-frame interval. In such an implementation there can be an accumulating
misalignment between the correct and actual rendering times. I believe that in the
implementation in question such an accumulation is detected after some short
time - possibly multiple frames - and accounted for by eventually dropping
frames.
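To make the scenario concrete, here is a minimal sketch (not from any real implementation; all names are hypothetical) of a pipeline that paces frames by a fixed inter-frame interval while each frame actually renders slightly late. The per-frame error accumulates until it exceeds one frame interval, at which point the pipeline drops a frame to resynchronise:

```javascript
// Hypothetical model: frames are paced by intervalMs, but each frame
// presents perFrameLatenessMs later than intended. The misalignment
// accumulates; once it reaches a full frame interval, dropping one
// frame recovers one interval of drift.
function simulatePipeline(frameCount, intervalMs, perFrameLatenessMs) {
  let drift = 0;           // accumulated misalignment in ms
  let droppedFrames = 0;   // frames dropped to resynchronise
  const driftHistory = [];
  for (let i = 0; i < frameCount; i++) {
    drift += perFrameLatenessMs; // each frame renders a bit later
    if (drift >= intervalMs) {   // detected: a full interval behind
      drift -= intervalMs;       // one dropped frame recovers one interval
      droppedFrames++;
    }
    driftHistory.push(drift);
  }
  return { droppedFrames, driftHistory };
}
```

For example, at a 40 ms interval (25 fps) with 5 ms of lateness per frame, a run of frames is late before each corrective drop, which is the pattern described above.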

The totalFrameDelay was intended to enable detection of this condition by the
application before or in concert with dropped frames.

At first glance, it seems that a count of late frames would also suffice for
the same purpose. The count does not distinguish between a frame that is a
little late and a frame that is very late. Conversely, totalFrameDelay
does not distinguish between a number of frames that are each slightly late and
a single frame that is very late. I assume we never expect an individual
frame to be very late (tens of frame intervals), so neither limitation is a
problem, and we could choose based on implementation complexity / complexity of
definition. The latter criterion favors the late frame count.
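The trade-off between the two metrics can be sketched as follows (a hypothetical illustration; lateFrameCount is an assumed name, only totalFrameDelay appears in the proposal under discussion):

```javascript
// Compute both candidate metrics over per-frame delays in ms (0 = on time).
function computeMetrics(frameDelaysMs) {
  // Sum of all delays: sensitive to magnitude, blind to distribution.
  const totalFrameDelay = frameDelaysMs.reduce((sum, d) => sum + d, 0);
  // Count of late frames: sensitive to distribution, blind to magnitude.
  const lateFrameCount = frameDelaysMs.filter(d => d > 0).length;
  return { totalFrameDelay, lateFrameCount };
}
```

Ten frames each 5 ms late and one frame 50 ms late yield the same totalFrameDelay (50 ms) but very different late frame counts (10 vs. 1), which is exactly the distinction discussed above.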

I will also check with our implementors.

-- 
You are receiving this mail because:
You are the QA Contact for the bug.

Received on Tuesday, 8 October 2013 15:30:06 UTC