[Bug 23169] reconsider the jitter video quality metrics again

https://www.w3.org/Bugs/Public/show_bug.cgi?id=23169

--- Comment #11 from Aaron Colwell <acolwell@google.com> ---
(In reply to Jerry Smith from comment #10)
> It’s true that frame delays would be quantized to the display refresh rate;
> however, total delay can still provide more information than counting late
> frames, assuming late frames are counted once per frame whether they are
> late a single refresh cycle or multiple.  That should mean that once the
> video stream is one frame late, every frame would be counted as late, the
> TotalLateFrame metric would expand and client JS would presumably respond by
> lowering the video quality.  If just one refresh cycle late, that may not be
> appropriate.
> 

If one frame misses its display deadline, I wouldn't expect that to imply that
all future frames will miss their display deadlines too. Only under some sort
of sustained load would I expect that to happen. In that case it might be a good
thing for the application to start thinking about downshifting, because there is
load present that is preventing the UA from hitting its deadlines.


> TotalFrameDelay in this instance would accurately communicate that frames
> were running a specific time interval late, and JS would be allowed to make
> its own determination on whether the delay is perceptible to users.  If,
> however, the stream moved to multiple refresh cycles delayed, this would
> show as a larger value in TotalFrameDelay, but not in TotalLateFrames.  

I have concerns about leaving this up to the application to sort out. If the
delay goes beyond 100ms or so then it is definitely perceptible. Why defer to
the application here? Also, if frames are this late, why shouldn't the UA just
start dropping frames in an attempt to re-establish A/V sync? This should be a
minor & temporary blip in the reported counts if nothing serious is happening.

> 
> If this example is accurate, it would suggest that TotalLateFrames may more
> aggressively trigger quality changes, but perhaps not desirable ones; and
> TotalFrameDelay communicates more information that would allow tuning of the
> response to slight, moderate or large delays in the video stream.  The
> analog nature of the time data makes it a more desirable feedback signal in
> what is essentially a closed loop system.

I think the application should only react if there is persistent lateness
and/or dropped frames. I agree that responding to one-off lateness would
definitely result in instability.
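
For what it's worth, here is a rough sketch of the kind of persistence check I
have in mind, written against getVideoPlaybackQuality() and its
droppedVideoFrames counter. The one-second polling interval, the "three
strikes" threshold, and the downshift() hook are all invented for the example,
not anything from the spec or this thread:

  var video = document.querySelector('video');
  var lastDropped = 0;
  var strikes = 0;

  setInterval(function () {
    if (!video) return;
    var q = video.getVideoPlaybackQuality();
    var newlyDropped = q.droppedVideoFrames - lastDropped;
    lastDropped = q.droppedVideoFrames;

    // Only count an interval against the stream if frames were actually
    // dropped in it; an interval with no drops resets the count, so a
    // one-off late frame never triggers a quality change on its own.
    strikes = newlyDropped > 0 ? strikes + 1 : 0;

    if (strikes >= 3) {
      downshift(); // hypothetical hook: switch to a lower-bitrate stream
      strikes = 0;
    }
  }, 1000);

  function downshift() {
    // placeholder for the application's rate-adaptation logic
  }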

I do have concerns, though, that the totalFrameDelay signal will have different
characteristics across UA implementations. I believe that will make it difficult
to write adaptation algorithms that are not UA-specific. I think using counts
might make this a little better, but different drop characteristics might lead
to the same problem.
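
One thing that might soften that UA-to-UA variation a little is for the
application to look at drops as a fraction of total decoded frames rather than
as absolute counts. Again, only a sketch, and the ~2% threshold is made up:

  function dropRatio(video) {
    var q = video.getVideoPlaybackQuality();
    return q.totalVideoFrames > 0
        ? q.droppedVideoFrames / q.totalVideoFrames
        : 0;
  }

  // e.g. treat a session where dropRatio() stays above ~0.02 as a candidate
  // for downshifting.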


In the absence of anyone else supporting my alternate solution, and since no
other solution has been proposed, I'm happy to concede and just resolve this as
WONTFIX. The current text was already something I could live with, so if the
consensus is to leave things as is, I'm fine with that.


Received on Tuesday, 5 November 2013 02:32:54 UTC