- From: <bugzilla@jessica.w3.org>
- Date: Sat, 29 Oct 2011 09:55:59 +0000
- To: public-html-bugzilla@w3.org
http://www.w3.org/Bugs/Public/show_bug.cgi?id=12399

--- Comment #26 from Silvia Pfeiffer <silviapfeiffer1@gmail.com> 2011-10-29 09:55:57 UTC ---

(In reply to comment #22)
> How does one measure the number of bytes going into the demuxer in a way that
> makes sense cross-browser? The WebM demuxer Opera uses is a bit particular in
> that it reads overlapping blocks from the input, not just consecutive blocks.

The metrics here are really not about comparing browsers with each other. They
are about measuring the quality of service that the user sees in video, and
about allowing the video publisher to determine where quality problems
originate: the network, the browser, or the device (i.e. is the computer
overloaded). Having the metrics available allows the video publisher to take
measures to counteract poor video quality and fix it, e.g. get a better
network service (a better CDN), file browser bugs, or change the resource
being delivered (smaller resolution, lower bitrate, etc.).

> If one just counts the bytes going in, that would exceed the size of the
> entire file after a single playthrough due to the overlap.

The bytesDecoded measure is about what bytes have come out of the decoding
pipeline, not what has gone in. If you are referring to bytesReceived, those
bytes are not measured at the point where they are fed to the demuxer, but
right after they have been received from the network, so they should not be
counted twice.

I envisage bytesDecoded being polled frequently so as to provide a bitrate
measure. E.g. at 1 sec of video playback we have bytesDecoded=8K, at 2 sec we
have bytesDecoded=12K, and at 3 sec we still have bytesDecoded=12K. Assuming
bytesReceived has been growing continuously over this time, this tells us that
something is hanging in the decoding pipeline (see the sketch below).

> Another issue is that the demuxer is involved in QoS, skipping forward in the
> input if there have been dropped frames in order to catch up. This would
> influence the measurement of both incoming and outgoing bytes.

Yes it would, but because you have the droppedFrames metric, you can determine
that this has happened and that your bytes arrived too late.

> Even trying to measure something like the number of expected frames is a bit
> hard, because when the demuxer skips forward to catch up it can't know how
> many frames it just skipped, unless it spends time trying to figure that out
> and thereby falls further behind.
>
> Saying how many frames were decoded is not a problem, but anything upstream
> of that in the pipeline seems a bit dodgy as long as one has some kind of QoS
> in the demuxer.

So you're saying that it's not possible to measure droppedFrames? WebKit is
doing it. However, we can discuss whether we should replace the
{decodedFrames, droppedFrames, presentedFrames} set with {decodedFrames,
presentedFrames, paintedFrames}, as Mozilla has implemented.

> It perhaps shouldn't be surprising that it's hard for JavaScript to adapt the
> quality when the decoding pipeline is trying to do the same thing...

The decoding pipeline is trying to do the best with the data it has been
given. JavaScript can replace that data with something that the decoding
pipeline can handle more easily. I don't see a conflict at all.

-- 
Configure bugmail: http://www.w3.org/Bugs/Public/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are the QA contact for the bug.
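To make the polling idea above concrete, here is a minimal JavaScript sketch.
It assumes a video element exposing bytesDecoded, bytesReceived and
droppedFrames under exactly those names, as proposed in this bug; these are
proposal names under discussion, not a shipped browser API, so the snippet is
illustrative only.

    // Hypothetical polling loop based on the metrics proposed in bug 12399.
    // bytesDecoded, bytesReceived and droppedFrames are the proposed names,
    // not attributes any browser actually ships.
    var video = document.querySelector('video');
    var lastDecoded = 0, lastReceived = 0;

    setInterval(function () {
      var decodedPerSec  = video.bytesDecoded  - lastDecoded;   // bytes/s out of the decoder
      var receivedPerSec = video.bytesReceived - lastReceived;  // bytes/s off the network
      lastDecoded  = video.bytesDecoded;
      lastReceived = video.bytesReceived;

      if (!video.paused && receivedPerSec > 0 && decodedPerSec === 0) {
        // Data keeps arriving but nothing comes out of the decoding pipeline:
        // the problem is in the browser or device, not the network.
        // droppedFrames indicates whether frames are being discarded to catch up.
        console.log('decoder stalled; droppedFrames=' + video.droppedFrames);
        // A publisher could react here, e.g. by switching video.src to a
        // lower-resolution or lower-bitrate version of the resource.
      }
    }, 1000);

Polled once per second, the deltas give the effective decode and receive
bitrates described in the comment (8K, 12K, 12K), and a growing bytesReceived
combined with a flat bytesDecoded is what signals a pipeline stall rather than
a network problem.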
Received on Saturday, 29 October 2011 09:56:01 UTC