[Bug 28379] [MSE] should buffering model be an option?

https://www.w3.org/Bugs/Public/show_bug.cgi?id=28379

--- Comment #2 from billconan@gmail.com ---
(In reply to Aaron Colwell from comment #1)
> This sounds like a quality of implementation issue to me. If the web
> application has allowed the media element to run out of data, why would it
> expect playback not to stop? How long of an underflow should the UA endure
> before actually stopping playback?
> 
> You are right that MSE is biased more towards providing smooth playback
> instead of low latency. The main reasons for that come from the ways the
> SourceBuffers can be manipulated and because of constraints imposed by media
> decode pipelines. It seems to me that if you are interested in low latency
> non-mutable presentations you should be looking more towards WebRTC instead
> of MSE.

The player stops for far longer than the actual network delay; a small
hiccup can trigger a pause of several seconds. I never said I expected
playback not to stop. The problem is how long it stops for.

I can easily reproduce this issue in Chrome. For example, I create an MP4
stream whose metadata declares 60 fps, but instead of generating video
frames at 60 frames per second, I generate them at 55 fps.

The remote desktop use case wants no buffering at all, so the video
player should simply play at 55 fps if 60 is not achievable. In reality,
the player instead pauses for 2 to 3 seconds to rebuffer.
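
Concretely, the repro looks roughly like this (the endpoint, the codec
string, and the one-fragment-per-message framing below are placeholders,
not my exact test code):

  // Minimal sketch of the repro. The server is assumed to push fMP4:
  // an init segment first, then one fragment per frame, with fragments
  // timestamped at 60 fps but produced at only ~55 fps.
  const video = document.querySelector('video') as HTMLVideoElement;
  const mediaSource = new MediaSource();
  video.src = URL.createObjectURL(mediaSource);

  mediaSource.addEventListener('sourceopen', () => {
    const sb = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E"');
    const queue: ArrayBuffer[] = [];
    const pump = () => {
      if (!sb.updating && queue.length > 0) {
        sb.appendBuffer(queue.shift()!);
      }
    };
    sb.addEventListener('updateend', pump);

    const ws = new WebSocket('ws://example.com/stream'); // hypothetical server
    ws.binaryType = 'arraybuffer';
    ws.onmessage = (e) => {
      // Timestamps say 60 fps but data arrives at ~55 fps, so the
      // buffered range drains ~9% faster than it fills and underflows.
      queue.push(e.data as ArrayBuffer);
      pump();
    };
  });

  // When currentTime reaches the end of the buffered range the element
  // stalls. In practice the stall lasts seconds, not the ~17 ms deficit
  // that one late frame actually represents.
  video.addEventListener('waiting', () => console.log('underflow stall'));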

To be honest, I don't understand the implementation difficulty of a buffering
model option. 
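
To clarify what I mean by an option: today applications end up
hand-rolling something like the following to claw back latency after a
stall. This is a sketch only; the lag threshold and polling interval are
made-up numbers.

  // Stall-recovery logic a "low latency" buffering option could replace:
  // if playback has fallen behind the newest appended data, drop the
  // backlog and jump close to the live edge instead of replaying it late.
  function chaseLiveEdge(video: HTMLVideoElement, maxLagSec = 0.5): void {
    setInterval(() => {
      const b = video.buffered;
      if (b.length === 0) return;
      const edge = b.end(b.length - 1); // end of the newest buffered range
      const lag = edge - video.currentTime;
      if (lag > maxLagSec) {
        video.currentTime = edge - 0.1; // stay slightly behind the edge
      }
    }, 250);
  }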


The MSE spec says it defines a splicing and buffering model that
facilitates use cases like adaptive streaming, ad insertion,
time-shifting, and video editing. If I were writing this standard, I
would make all of these use cases options and let the programmer choose,
because multi-mission often means failure, like the F-35 jet fighter:
http://sploid.gizmodo.com/the-designer-of-the-f-16-explains-why-the-f-35-is-such-1591828468


Speaking of WebRTC, I think it is a mess.

The real world is very hierarchical, I think. Small fundamental building
blocks form more complex ones, and the more complex ones form even
larger building blocks: strings make particles, particles make chemicals,
chemicals make cells, cells make creatures. This is how the universe works.

But web standards never respect this philosophy of building things.
Before there is a decent way of simply decoding a video frame in a
webpage, or of streaming video from one IP to another, there is already
WebRTC, a huge monster of a building block. It is as if all you need is
some extra ketchup, but you are told ketchup only comes with fries. Why
is UDP hole punching needed if the architecture is just servers streaming
to clients? No wonder Twitch uses Flash.

My experience with WebRTC is even worse. The latency is far higher than
MSE + WebSocket, which is hard to believe given that it runs over UDP.
There is no way to control the video quality, and no way to tell it to
favor low fps and high bitrate...

-- 
You are receiving this mail because:
You are the QA Contact for the bug.

Received on Wednesday, 1 April 2015 02:15:18 UTC