[whatwg] Limiting the amount of downloaded but not watched video

2011-01-17 23:32 EEST: Silvia Pfeiffer:
> On Mon, Jan 17, 2011 at 10:15 PM, Chris Pearce <chris at pearce.org.nz> wrote:
>> Perhaps we should only honour the downloadBufferTarget (or whatever measure
>> we use) when the media is in readyState HAVE_ENOUGH_DATA, i.e. if we're
>> downloading at a rate greater than what we require to playback in real time?
> 
> Hmm... it's certainly a necessary condition, but is it sufficient?
> 
> Probably if we ever end up in a buffering state (i.e.
> networkState=NETWORK_LOADING and readyState=HAVE_CURRENT_DATA or less)
> then we should increase the downloadBufferTarget or completely drop
> it, since we weren't able to get data from the network fast enough to
> continue feeding the decoding buffer. Even if after that we return to
> readyState=HAVE_ENOUGH_DATA, it's probably just a matter of time
> before we again have to go into buffering state.
> 
> Maybe it's more correct to say that we honour the downloadBufferTarget
> only when the readyState is *always* HAVE_ENOUGH_DATA during playback?

I think that downloadBufferTarget (seconds to prebuffer) should not be
specifiable by the content author. A sensible behavior would be:
1. Set downloadBufferTarget to UA defined default (e.g. 5 seconds)
2. In case of buffer underrun, double the downloadBufferTarget and store
this as the new default for the site (e.g. domain name)

This way the UA would (slowly?) converge to the correct
downloadBufferTarget for any site on any given network connection. If
the full length of the video clip is known, then downloadBufferTarget
should probably be a multiplier of the full clip length rather than a
static time period in seconds. This is because a 5 second buffer could
be enough for a 20 second clip while a 2 minute buffer could be
required for a one hour video: in both cases the actually available
network bandwidth falls short of the required bandwidth by some ratio,
rather than by a fixed delay that a static time period buffer would
cover. The default buffer should be pretty small, to keep the startup
delay short for high bandwidth users and to reduce the wasted server
bandwidth in case the user does not watch the video clip to the end.
The buffer doubling is effectively a binary search for the correct
buffer size, and storing per site is required because bandwidth to
different services can vary a lot.
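
A minimal sketch of rules 1 and 2, written as TypeScript-ish pseudocode
for the UA side (the names bufferTargets, downloadBufferTargetFor and
onBufferUnderrun are made up for illustration, not a proposal for any
real API):

    // Rule 1: UA-defined default prebuffer in seconds.
    const DEFAULT_TARGET_SECONDS = 5;

    // Per-site stored targets, keyed by domain name.
    const bufferTargets = new Map<string, number>();

    function downloadBufferTargetFor(site: string): number {
      return bufferTargets.get(site) ?? DEFAULT_TARGET_SECONDS;
    }

    // Rule 2: on buffer underrun, double the target and keep it as the
    // new default for this site (a binary search upwards for a size
    // that the available bandwidth can sustain).
    function onBufferUnderrun(site: string): void {
      bufferTargets.set(site, downloadBufferTargetFor(site) * 2);
    }

When the clip duration is known, the stored quantity could equally be a
fraction of the clip length instead of a number of seconds, as argued
above; the doubling and storing logic stays the same.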

The above logic could be extended with another rule:

3. In case of successful video clip playback (no buffer underrun during
a full playback of a video clip), multiply downloadBufferTarget by 0.95
and store this as the new default for this site.

This would cause occasional buffer underruns in the long run (roughly a
5% chance of a visible underrun for a random video clip), but any
underrun would immediately double the buffer again. This additional
rule allows the downloadBufferTarget to decrease if the network
bandwidth improves. It could also make sense to save this 0.95
multiplier per site and tune it towards 1.0 in the long run for any
given site.
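
As a sketch on top of the same hypothetical store (again, the names are
illustrative only):

    // Rule 3: a full playback with no underrun shrinks the target by
    // 5% and stores it, so the value can also adapt downwards when
    // bandwidth improves. The 0.95 factor itself could be stored per
    // site and tuned towards 1.0 over time.
    function onCleanPlaybackEnded(site: string): void {
      bufferTargets.set(site, downloadBufferTargetFor(site) * 0.95);
    }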

PS. It could make sense to save these preferences per {site, connection
method} tuple in case one often switches between e.g. a 100 Mbps LAN
connection and a 3G mobile data connection. The two cases should
converge to different downloadBufferTarget values for any given site
(e.g. youtube).
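
Sketching that keying, assuming the UA can classify the active
connection somehow (the connectionType argument is a made-up
placeholder, not an existing API):

    // Key the stored target by {site, connection method} so e.g. LAN
    // and 3G converge to separate values for the same service.
    function targetKey(site: string, connectionType: string): string {
      return site + "|" + connectionType;
    }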

-- 
Mikko

Received on Tuesday, 18 January 2011 02:46:23 UTC