
RE: Interactive Television

From: Adam Sobieski <adamsobieski@hotmail.com>
Date: Fri, 12 Aug 2011 06:44:26 +0000
Message-ID: <SNT138-W37781D3A8B3C0F45A6F35BC5250@phx.gbl>
To: Bob Lund <b.lund@cablelabs.com>
CC: <public-web-and-tv@w3.org>

Bob,

Sounds good. I've concluded that both XML and JSON would be useful to have in a comprehensive, general-purpose <track/>; that is what I intended to indicate. I think that both XML and JSON can be of use to web developers.
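As an illustration only (no such format is specified anywhere; the namespace, element names, and the begin/end attributes below are all invented for the sketch), an "xml/temporal" resource referenced from <track/> might carry cues whose temporal intervals are expressed as attributes, with arbitrary XML or JSON payloads inside:

```xml
<!-- Hypothetical cue data; the namespace, element names, and the
     begin/end attributes are assumptions, not a specified format. -->
<cues xmlns="http://example.org/2011/temporal">
  <cue begin="00:00:00.000" end="00:00:05.000">
    <data>{"headline": "Arbitrary JSON payload"}</data>
  </cue>
  <cue begin="00:00:05.000" end="00:00:10.000">
    <data>Any application-specific markup</data>
  </cue>
</cues>
```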

The described syntax on <track/>, with events on TextTrack, could also be implemented as a callback on TextTrack's onload that iterates over the cues, setting a callback on each TextTrackCue's onenter and onexit, or as a function that performs that same iteration on demand. Syntax along the lines of "<video ...><track kind="metadata" type="xml/temporal" onplayheadenter="callback1" onplayheadexit="callback2" src="file.xml"/></video>" amounts to per-track events that are functionally equivalent to adding callbacks to each TextTrackCue's onenter and onexit. Regardless of the naming, for example {onplayheadenter, onplayheadexit}, {onentercue, onexitcue} or {onenter, onexit}, the premise is that events can exist on TextTrack that are equivalent to adding callbacks to each TextTrackCue's events. In an implementation, a single JavaScript function might receive and handle data from every cue identically, or a set of functions might map to individual cues. One use case for track-level events, which, again, map to the events of the track's cues, is scenarios involving streaming tracks, where the cues are not all known in advance.
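A minimal sketch of that equivalence, assuming the hypothetical onplayheadenter/onplayheadexit names proposed above; the track and cue objects here are plain-object stand-ins for TextTrack and TextTrackCue, not the browser API, and the playhead is simulated so the wiring can be shown end to end:

```javascript
// Stand-in for a TextTrack with a cue list; in a browser these would be
// TextTrackCue objects with real onenter/onexit handlers.
function makeTrack(cues) {
  return { cues: cues, onplayheadenter: null, onplayheadexit: null };
}

// Wire the hypothetical track-level events to each cue's events:
// equivalent to iterating in TextTrack's onload and assigning callbacks.
function wireTrackEvents(track) {
  track.cues.forEach(function (cue) {
    cue.onenter = function () {
      if (track.onplayheadenter) track.onplayheadenter(cue);
    };
    cue.onexit = function () {
      if (track.onplayheadexit) track.onplayheadexit(cue);
    };
  });
}

// Simulated playhead advance that fires cue events, standing in for the
// browser firing onenter/onexit as media time crosses cue boundaries.
function advancePlayhead(track, time) {
  track.cues.forEach(function (cue) {
    var inside = time >= cue.startTime && time < cue.endTime;
    if (inside && !cue.active) { cue.active = true; cue.onenter(); }
    if (!inside && cue.active) { cue.active = false; cue.onexit(); }
  });
}

// Usage: one pair of track-level callbacks handles data from every cue.
var track = makeTrack([
  { startTime: 0, endTime: 5, text: "cue A", active: false },
  { startTime: 5, endTime: 10, text: "cue B", active: false }
]);
var log = [];
track.onplayheadenter = function (cue) { log.push("enter " + cue.text); };
track.onplayheadexit = function (cue) { log.push("exit " + cue.text); };
wireTrackEvents(track);
advancePlayhead(track, 1);  // enters cue A
advancePlayhead(track, 6);  // exits cue A, enters cue B
```

The same wiring would work whether the cue list is complete at load time or cues keep arriving in a streaming track, since newly appended cues only need the same two assignments.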

Kind regards,

Adam
> From: B.Lund@CableLabs.com
> To: silviapfeiffer1@gmail.com; adamsobieski@hotmail.com
> CC: scott.bradley.wilson@gmail.com; public-web-and-tv@w3.org
> Date: Thu, 11 Aug 2011 10:49:08 -0600
> Subject: RE: Interactive Television
> 
> 
> 
> > -----Original Message-----
> > From: public-web-and-tv-request@w3.org [mailto:public-web-and-tv-
> > request@w3.org] On Behalf Of Silvia Pfeiffer
> > Sent: Thursday, August 11, 2011 1:09 AM
> > To: Adam Sobieski
> > Cc: Scott Wilson; public-web-and-tv@w3.org
> > Subject: Re: Interactive Television
> > 
> > On Thu, Aug 11, 2011 at 4:35 PM, Adam Sobieski
> > <adamsobieski@hotmail.com> wrote:
> > > Hello Silvia,
> > >
> > > I like the idea about a new kind or kinds, possibly "xml" and/or
> > > "json". Those could be catch-alls for usage scenarios beyond the
> > > other kinds of subtitles, captions, descriptions, chapters and
> > > metadata. Another possible kind is "outlines", which resembles
> > > "chapters".
> > 
> > Metadata is already a catch-all. I think we first need to analyse what
> > exact needs / use cases we have before making a decision.
> > 
> 
> We've prototyped several use cases that will be used by many commercial video providers. The first three rows of the table found in the open discussion here http://www.w3.org/2011/webtv/wiki/MPTF/MPTF_Discussions have been prototyped using metadata <track>. ETV is the ability to synchronize Web content with media resources using signaling messages associated with the media resource. Ad-insertion is client side targeted advertising and content advisories are used by parental control systems. In addition to out-of-band <track>, we also need to consider sourcing in-band tracks, especially for long-lasting media resources such as scheduled video channels. The questions that arise in this case are:
> 
> 1) How is the data represented in a particular media transport?
> 2) How does it get mapped to <track>?
> 
> The columns of the table are a first attempt to consider media transports of interest - there may be others. The first two columns reflect when the signaling messages come in-band in the media stream. The remaining columns cover container formats that support additional text/application tracks. While these are out-of-band with respect to the audio/video, they are in-band from an HTML5 <track> sourcing perspective.
> 
> We are going through the analysis that Silvia suggests and plan to discuss it in the Web and TV MPTF. It will be good to get additional points of view on both the application space and transport formats of interest.
> 
> Bob
> 
> > 
> > > Your example about DHTML overlays with hyperlinks sounds interesting;
> > > DHTML overlays are possible wherever text and graphics presently occur
> > > atop video from video post-production techniques, and new enhanced
> > > features are possible with hypertext. Video post-production techniques
> > > can make use of HTML5 video capabilities, DHTML and overlays, and
> > > doing so might provide for entirely new features.
> > 
> > We should then consider asking for a @kind=annotation and specify this
> > use case somewhat further. Also, JSON may not necessarily be the best
> > solution for this use case. We should experiment with JavaScript first.
> > This way we can identify the best possible solution.
> > 
> > > I think that more kinds would alleviate a misunderstanding that what
> > > was under discussion was some sort of alternative to WebVTT. WebVTT
> > > seems apt for its set of kinds and could even be of use in convergence
> > > scenarios such as digital cable. New kinds for HTML5 video tracks,
> > > "xml" and/or "json", can allow for more Flash-like functionality with
> > > HTML5. By specifying an XML format with at least attributes for
> > > temporal intervals, any XML that makes use of that XMLNS could include
> > > the time synchronization data that <track/> expects.
> > 
> > Yes, WebVTT is designed to be a general container for time-synchronized
> > data. But as I said: we should analyse the use cases in more detail and
> > come up with better means of semantically labelling the included data
> > than by format.
> > 
> > > With regard to HTML5 video, it seems that new kinds are exciting to
> > > discuss.
> > 
> > Very much so!
> > 
> > Cheers,
> > Silvia.
> 
Received on Friday, 12 August 2011 06:45:04 UTC
