[Bug 23169] reconsider the jitter video quality metrics again

https://www.w3.org/Bugs/Public/show_bug.cgi?id=23169

--- Comment #7 from Aaron Colwell <acolwell@google.com> ---
(In reply to Jerry Smith from comment #6)
> The overall rationale is that there needs to be something that measures
> frame lateness, and that a dropped frame count alone is not enough.

Why does the late frame counter not satisfy this? Is there a specific reason
you need to answer the "how late" question?

> 
> We believe frame delay communicates more information than late frames.  We
> expect that the totalFrameDelay metric would be monitored at some interval. 
> The usage of totalFrameDelay would then be:
> 
> 1)  Not changing value – good quality playback

This is equivalent to the late & dropped frame counters not incrementing.

> 2)  Uniformly increasing value – consistent A/V sync (video is behind) but
> no further improvements or degradations (no jitter)

Why does the application care about this case? Isn't it up to the UA to make
sure that the audio & video are rendered with proper A/V sync? What is the web
application supposed to do about it, if the UA isn't doing this properly? This
seems like a bug in the MSE implementation and not something that the web
application should need to worry about.

> 3)  Non-uniformly increasing value – most likely worsening playback (video
> falling further behind) or jitter caused by the application trying to
> compensate by reducing resolution, etc. (i.e. improving playback)

I feel like late and/or dropped frame counters capture this as well, just in
a slightly different way. I'd expect the late & dropped frame counters to
increment non-uniformly in this situation too (see the sketch below).

> 
> So, the totalFrameDelay attribute on its own can provide useful information.

I agree, but it isn't clear to me that exposing such detailed timing
information is really necessary. What if we just had an enum that indicated
which of those three states the UA believes it is in (sketched below)? That
seems like a much clearer way to convey the quality of experience to the web
application than exposing the totalFrameDelay metric.
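To illustrate, something along these lines; every name here is hypothetical
and exists nowhere in the spec:

    // Hypothetical three-state quality indicator, invented purely to
    // illustrate the counter-proposal above.
    enum PlaybackQualityState {
      Good = "good",              // 1) no new delay accumulating
      ConstantDelay = "delayed",  // 2) steady, uniform A/V offset
      Degrading = "degrading",    // 3) delay growing non-uniformly
    }
    // e.g. exposed as a UA-computed attribute such as
    // video.playbackQualityState, instead of raw timing data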

> 
> It's possible for a system to go into frame-dropping mode and still be in #1
> above since the frames that aren't dropped are still on-time. That state
> would be detected by the droppedVideoFrames attribute.

I think this would be equivalent to the late counter not incrementing while
the dropped counter increments (see the sketch below). The application could
then decide whether this was an acceptable experience.
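In code, that check would look something like this; lateVideoFrames is the
hypothetical counter discussed in this thread, while droppedVideoFrames is
the existing attribute:

    // Detects "dropping frames but the rendered ones are still on time":
    // dropped counter rising while the (hypothetical) late counter is flat.
    function droppingButOnTime(
      q: VideoPlaybackQuality,
      prev: { late: number; dropped: number }
    ): boolean {
      const late = (q as any).lateVideoFrames ?? 0; // hypothetical attribute
      return q.droppedVideoFrames > prev.dropped && late === prev.late;
    }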

Ideally I'd like to get away from this time-based metric, because I believe
it may be difficult to get consistent measurements across browsers. I think
different measurement precisions, and differences in the various delays in
each browser's media engine, will make this metric unreliable or encourage
browser-specific interpretation. If that happens, I think we've failed.
