[Bug 23169] reconsider the jitter video quality metrics again

https://www.w3.org/Bugs/Public/show_bug.cgi?id=23169

Aaron Colwell <acolwell@google.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|NEW                         |ASSIGNED
           Assignee|adrianba@microsoft.com      |acolwell@google.com

--- Comment #5 from Aaron Colwell <acolwell@google.com> ---
(In reply to Mark Watson from comment #4)
> Aaron,
> 
> What you describe assumes an implementation which drops late frames except
> the first. That's one possible implementation. What I understand is that
> there are other implementations where there could be a run of late frames.

True. In this case, I'd expect the lateVideoFrames counter to be incremented
for each frame that was late.
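To make that counting semantics concrete, here's a minimal TypeScript
sketch, assuming a hypothetical renderer that compares each frame's target
presentation time to when it actually hit the screen (lateVideoFrames here
is the proposed metric under discussion, not a shipped attribute):

  interface FrameTiming {
    presentationTime: number; // seconds: when the frame should display
    displayTime: number;      // seconds: when the frame actually displayed
  }

  // Every frame rendered past its target time bumps the counter, so a run
  // of late frames increments it once per frame.
  function countLateFrames(frames: FrameTiming[],
                           toleranceSec = 0.001): number {
    let lateVideoFrames = 0;
    for (const f of frames) {
      if (f.displayTime - f.presentationTime > toleranceSec) {
        lateVideoFrames++;
      }
    }
    return lateVideoFrames;
  }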

> 
> Specifically, I believe there are implementations where frames are
> accompanied through the pipeline not by their absolute rendering time but by
> the inter-frame interval. In such an implementation there can be an
> accumulating mis-alignment between the correct and actual rendering time. I
> believe in the implementation in question such an accumulation is detected
> after some short time - possibly multiple frames - and accounted for by
> eventually dropping frames.
> 
> The totalFrameDelay was intended to enable detection of this condition by
> the application before or in concert with dropped frames.

It seems like the effectiveness of this metric depends on how deep that
pipeline is. Is there a case where incrementing lateVideoFrames won't cause
droppedVideoFrames to increment by at least one? It seems like as soon as the
media engine determines that a bunch of frames are late, it would start
dropping frames to "catch up".
- What is the scenario where late frames are tolerated for a while without
triggering frame dropping?
- How often do you expect the web application to poll these stats to detect
this condition?
- How long do you expect the delay between the metrics reporting late frames
and the media engine starting to drop frames to be?

I'm concerned that the window in which an application could act on "lateness"
is too small to be worth worrying about.
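For what it's worth, here's a sketch of the polling I have in mind, assuming
a hypothetical snapshot shape that exposes the proposed lateVideoFrames
counter next to droppedVideoFrames (field names illustrative, not a shipped
API). The window in question is the interval where lateDelta > 0 but
droppedDelta == 0:

  interface PlaybackQualitySnapshot {
    droppedVideoFrames: number;
    lateVideoFrames: number; // proposed metric, hypothetical field name
  }

  // Poll the stats on a timer and flag the narrow window where frames are
  // arriving late but the engine has not yet started dropping to catch up.
  function watchLateness(
    getSnapshot: () => PlaybackQualitySnapshot,
    onLateWithoutDrops: () => void,
    intervalMs = 1000
  ): () => void {
    let prev = getSnapshot();
    const id = setInterval(() => {
      const cur = getSnapshot();
      const lateDelta = cur.lateVideoFrames - prev.lateVideoFrames;
      const droppedDelta = cur.droppedVideoFrames - prev.droppedVideoFrames;
      if (lateDelta > 0 && droppedDelta === 0) {
        onLateWithoutDrops();
      }
      prev = cur;
    }, intervalMs);
    return () => clearInterval(id); // caller stops polling with this
  }

If the engine starts dropping within one polling interval, lateDelta and
droppedDelta move together and the window never shows up, which is exactly
my concern.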

> 
> At a first look, it seems like a count of late frames would also suffice for
> the same purpose. The count does not distinguish between a frame that is a
> little bit late and a frame that is a lot late.

Presumably "a lot late" should trigger a ton of dropped frames so the media
engine can catch up. This should look catastrophic to the web app and, I
would hope, trigger a downshift.

> Conversely, the
> totalFrameDelay does not distinguish between a number of frames that are
> each slightly late and a single frame which is very late. I assume we do not
> ever expect an individual frame to be very late (like 10s of frame
> intervals), so neither of these is a problem and we could choose based on
> implementation complexity / complexity of definition. The latter favors the
> late frame count.

I'm just trying to sort out whether the application really needs to know the
time delta or not. The actual time doesn't seem to matter because there is
nothing the application can do about it. Counts, at least, give the
application a signal it can use to compute the percentage of late and dropped
frames and treat those as a measure of quality. Counts are also robust across
frame rate changes: if you deal in time, a change in frame rate may affect
the acceptable "lateness" threshold that one uses for adaptation.
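As a sketch of that count-based signal (the thresholds are made-up numbers,
purely for illustration):

  interface FrameCounts {
    totalVideoFrames: number;
    droppedVideoFrames: number;
    lateVideoFrames: number; // proposed metric, hypothetical field name
  }

  // Ratios of late/dropped frames to total frames normalize across frame
  // rates: 5% late means the same thing at 24fps and at 60fps, whereas a
  // fixed time-delay threshold would not.
  function shouldDownshift(
    c: FrameCounts,
    maxDroppedRatio = 0.02, // assumed thresholds, tune per application
    maxLateRatio = 0.05
  ): boolean {
    if (c.totalVideoFrames === 0) return false;
    return c.droppedVideoFrames / c.totalVideoFrames > maxDroppedRatio
        || c.lateVideoFrames / c.totalVideoFrames > maxLateRatio;
  }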

> 
> I will also check with our implementors.
