- From: Kristof Zelechovski <giecrilj@stegny.2a.pl>
- Date: Thu, 16 Oct 2008 19:53:09 +0200
Allowing client-side fragments reduces the number of connections, but allowing server-side partial content is likely to reduce the volume of data. Those features are antagonists; both are needed, and it is hard to tell when each should be used. The case of a hyperlinked transcript would be better served by an analogue of IMAGEMAP AREA for cue ranges; this has been brought forward previously. The case of instant cue ranges on demand is an edge case.
IMHO,
Chris

-----Original Message-----
From: whatwg-bounces@lists.whatwg.org [mailto:whatwg-bounces at lists.whatwg.org] On Behalf Of Dr. Markus Walther
Sent: Thursday, October 16, 2008 6:25 PM
To: Eric Carlson
Cc: whatwg group; Chris Double
Subject: Re: [whatwg] video tag : loop for ever

Eric Carlson wrote:
>
> On Oct 15, 2008, at 8:31 PM, Chris Double wrote:
>
>> On Thu, Oct 16, 2008 at 4:07 PM, Eric Carlson <eric.carlson at apple.com>
>> wrote:
>>> However I also think that playing just a segment of a media file will
>>> be a common use-case, so I don't think we need "start" and "end" either.
>>
>> How would you emulate end via JavaScript in a reasonably accurate
>> manner?
>>
>
> With a cue point.
>
>> If I have a WAV audio file and I want to start and stop
>> between specific points? For example a transcript of the audio may
>> provide the ability to play a particular section of the transcript.
>>
> If you use a script-based controller instead of the one provided by
> the UA, you can easily limit playback to whatever portion of the file
> you want:
>
>     SetTime: function(time) { this.elem.currentTime =
>         (time < this._minTime) ? this._minTime :
>         (time > this._maxTime ? this._maxTime : time); }

IMHO, using 'currentTime' and cue ranges is - while technically possible - a
more cumbersome and roundabout way to delimit a single audio interval than
just using 'start' and 'end' attributes. I advocate keeping the simple way to
do it, with 'start' and 'end', in the spec.

Also, since you have just shown how it can be implemented with cue ranges and
currentTime, having a second, simpler interface (for the case of a single
interval) should be cheap in terms of implementation cost, if you plan to
implement the other one anyway.

> I agree that it is more work to implement a custom controller, but it
> seems a reasonable requirement given that this is likely to be a
> relatively infrequent usage pattern.

How do you know this will be infrequent?

> Or do you think that people will frequently want to limit playback to
> a section of a media file?

Yes, I think so - if "people" includes those working in professional
audio/speech/music production, and more specifically the innovative ones among
them who would like to see audio-related web apps appear. Imagine, for
example, an audio editor in a browser and the task "play this selection of the
oscillogram"...

Why should such use cases be left to the Flash 10 crowd
(http://www.adobe.com/devnet/flash/articles/dynamic_sound_generation.html)?
I for one want to see them become possible with open web standards!

In addition, cutting down on the number of HTTP transfers is generally
advocated as a performance booster, so the ability to play sections of a
larger media file using only client-side means might be of independent
interest.

-- Markus
Received on Thursday, 16 October 2008 10:53:09 UTC
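
[Editor's note] A minimal sketch, not part of the original thread, of the script-based approach Eric describes: restricting playback of a media element to a single segment using only 'currentTime', 'timeupdate', and pause(). The element id "clip" and the interval bounds are assumptions chosen for illustration.

    // Sketch only: play just the interval [startSec, endSec] of an
    // <audio id="clip"> element. The id and the bounds are hypothetical.
    var clip = document.getElementById('clip');
    var startSec = 12.0;   // assumed beginning of the segment, in seconds
    var endSec   = 17.5;   // assumed end of the segment, in seconds

    function playSegment() {
      clip.currentTime = startSec;  // seek to the start of the interval
      clip.play();
    }

    clip.addEventListener('timeupdate', function () {
      // 'timeupdate' fires only a few times per second, so the stop point
      // is approximate; that imprecision is one reason Markus prefers
      // declarative 'start'/'end' attributes over a script-only controller.
      if (clip.currentTime >= endSec) {
        clip.pause();
      }
    }, false);

In Chris Double's transcript scenario, playSegment() would be wired to a click handler on each transcript section, with startSec/endSec taken from that section's markup.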