[Bug 23169] New: reconsider the jitter video quality metrics again

From: <bugzilla@jessica.w3.org>
Date: Thu, 05 Sep 2013 18:24:26 +0000
To: public-html-media@w3.org
Message-ID: <bug-23169-5436@http.www.w3.org/Bugs/Public/>

            Bug ID: 23169
           Summary: reconsider the jitter video quality metrics again
    Classification: Unclassified
           Product: HTML WG
           Version: unspecified
          Hardware: PC
                OS: All
            Status: NEW
          Severity: normal
          Priority: P2
         Component: Media Source Extensions
          Assignee: adrianba@microsoft.com
          Reporter: singer@apple.com
        QA Contact: public-html-bugzilla@w3.org
                CC: mike@w3.org, public-html-media@w3.org


We are concerned about the new definition of the displayed frame delay, and the
use of this value to accumulate a jitter value in totalFrameDelay.

Displayed Frame Delay
The delay, to the nearest microsecond, between a frame's presentation time and
the actual time it was displayed. This delay is always greater than or equal to
zero since frames must never be displayed before their presentation time.
Non-zero delays are a sign of playback jitter and possible loss of A/V sync.

totalFrameDelay
The sum of all displayed frame delays for all displayed frames (i.e., frames
included in the totalVideoFrames count, but not in the droppedVideoFrames
count).

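To make the quoted definitions concrete, here is a minimal sketch of how the two values relate, assuming hypothetical per-frame timestamps in milliseconds (the interface name `FrameTiming` and both field names are invented for illustration; this is not the spec's normative algorithm):

```typescript
// Hypothetical per-frame timing record (milliseconds).
interface FrameTiming {
  presentationTimeMs: number; // when the frame was scheduled to appear
  displayTimeMs: number;      // when it actually appeared (never earlier)
}

// Displayed frame delay, per the quoted definition: actual minus scheduled,
// always >= 0 since frames are never shown before their presentation time.
function displayedFrameDelay(f: FrameTiming): number {
  return f.displayTimeMs - f.presentationTimeMs;
}

// totalFrameDelay: the sum of delays over all displayed (not dropped) frames.
function totalFrameDelay(frames: FrameTiming[]): number {
  return frames.reduce((acc, f) => acc + displayedFrameDelay(f), 0);
}
```

Note that the sum alone discards the per-frame distribution, which is the crux of concern 4 below.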
Here are our concerns:

1.  The use of microseconds may be misleading.  It implies a precision that is
rarely (if ever) achievable: not every implementation can time 'to the nearest
microsecond', and sometimes the measurement has to be made 'before the photons
emerge from the display', at a point in the pipeline beyond which the rest of
the pipeline is not completely jitter-free.

2.  In any case, frames are actually displayed at the refresh times of the
display; display times are therefore quantized to refresh boundaries.  So, if
I were slightly late in pushing a frame down the display pipeline, but it hit
the same refresh as if I had been on time, there is no perceptible effect at
all.

3.  Thus, ideally, we'd ask for the measurement system to be aware of which
display refresh the frame hit, and all results would be quantized to the
refresh rate. However, in some (many?) circumstances, though the average or
expected pipeline delay is known or can be estimated, the provision of frames
for display is not tightly linked to the display refresh, i.e. at the place of
measurement, we don't know when the refreshes happen.

4.  There is a big difference in jitter between presenting 2000 frames all 5ms
late (consistently), and presenting 50 of them 200ms late and the rest on
time, though for both we'd report 10,000ms of totalFrameDelay.  The 5ms delay
may not matter at all (see above), whereas 200ms is noticeable (lipsync will
probably be perceptibly off).  There is nothing in the accumulation of values,
today, that takes into account *variation*, which is really the heart of what
jitter is about.
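The two scenarios in point 4 can be checked numerically; a standard deviation (used here purely as one illustrative measure of variation, not as a proposal) separates the cases that the plain sum conflates:

```typescript
// Hypothetical per-frame delays (ms) for two sessions with the
// identical totalFrameDelay of 10,000 ms.
const consistent: number[] = new Array(2000).fill(5);  // 2000 frames, each 5 ms late
const bursty: number[] = [
  ...new Array(50).fill(200),                          // 50 frames 200 ms late
  ...new Array(1950).fill(0),                          // the rest on time
];

function sum(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0);
}

// Population standard deviation: captures the variation the sum hides.
function stdDev(xs: number[]): number {
  const mean = sum(xs) / xs.length;
  return Math.sqrt(sum(xs.map(x => (x - mean) ** 2)) / xs.length);
}

console.log(sum(consistent), sum(bursty)); // 10000 10000 -- indistinguishable
console.log(stdDev(consistent));           // 0 -- perfectly consistent
console.log(stdDev(bursty));               // ~31.2 -- severe jitter
```

Both sessions report the same accumulated delay, yet only the second has the large variation that makes jitter perceptible.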

I don't have a proposal right now for something better, but felt it was worth
surfacing these concerns.  Do others have similar, or other, concerns about
these measurements?  Or indeed, suggestions for something that might alleviate
these or other concerns (and hence, be better)?

I guess a big question is:  what are the expected *uses* of these two values?

Received on Thursday, 5 September 2013 18:24:27 UTC
