RE: Interactive Television

Scott Bradley Wilson,

The educational use cases are of interest; http://cosmolearning.com is a good example website. I agree that WebVTT and JavaScript can be of use to web developers. Your parodic strawman argument, however, was inappropriate, did not meet my expectations for this scientific forum, and was legally inaccurate. In the United States of America, a relevant legal doctrine is fair use. We technologists in the United States are developing software and ergonomics to facilitate fair use scenarios.

Kind regards,
Adam Sobieski

Subject: Re: Interactive Television
From: scott.bradley.wilson@gmail.com
Date: Fri, 12 Aug 2011 09:47:57 +0100
CC: silviapfeiffer1@gmail.com; public-web-and-tv@w3.org
To: adamsobieski@hotmail.com



These are good use cases; I'm particularly interested in the educational ones (e.g. using tracks to include assessment feedback on student video, and the kind of detailed navigation you describe here).
However, I think they can be achieved using the existing WebVTT and <video> specs - for example, parsing the WebVTT to generate a navigation view, or mining it for search terms, or generating a graph or concept map. None of this would require a new spec, but it would make a great demo.
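For instance, a minimal sketch of the navigation-view demo, assuming a <video id="lecture"> element with a <track kind="chapters"> child and an empty <ol id="nav"> beside it (all of the element names here are illustrative, not part of any spec):

// Build a clickable navigation view from a chapters track's cues.
var video = document.getElementById('lecture');
var trackElement = video.querySelector('track[kind="chapters"]');
trackElement.track.mode = 'hidden';              // fetch the cues without rendering them

trackElement.addEventListener('load', function () {
  var nav = document.getElementById('nav');
  var cues = trackElement.track.cues;
  for (var i = 0; i < cues.length; i++) {
    (function (cue) {
      var item = document.createElement('li');
      item.textContent = cue.text;                                      // cue text as the menu label
      item.onclick = function () { video.currentTime = cue.startTime; };  // seek on click
      nav.appendChild(item);
    })(cues[i]);
  }
});

The same loop over the cue list could just as easily feed a search index or a concept map.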
Some of the other use cases seem more like linked data - in which case perhaps another solution would be to embed semantic links in the track:
1
01:23:45.678 --> 01:23:46.789
Professor Sobieski: "This graph shows the likelihood of any W3C spec being blocked by silly patents"<rdfs:seeAlso rdf:resource="http://www.xyz.com/blog/resources/graph27"/>
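A script could then lift such links out at load time; a rough sketch, assuming the cue text embeds the rdfs:seeAlso element exactly as above (a real implementation would parse the embedded XML rather than pattern-match):

// Collect rdfs:seeAlso resources from a metadata track's loaded cues.
function seeAlsoLinks(track) {
  var links = [];
  for (var i = 0; i < track.cues.length; i++) {
    var cue = track.cues[i];
    var match = cue.text.match(/rdf:resource="([^"]+)"/);
    if (match) {
      links.push({ time: cue.startTime, href: match[1] });
    }
  }
  return links;
}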
S
On 12 Aug 2011, at 08:59, Adam Sobieski wrote:

Silvia,
 
Some scenarios that interest me are searchable and navigable transcripts, such as those illustrated at http://www.cspan.org.  Video tracks can provide data that JavaScript, together with DHTML, can use to build user interfaces for navigation within and between videos.
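A rough sketch of the searchable side, assuming the transcript is exposed as a metadata track whose cues have already been loaded (the function and argument names are illustrative):

// Find the transcript cues containing a term and seek to the first match.
function searchTranscript(video, track, term) {
  var hits = [];
  for (var i = 0; i < track.cues.length; i++) {
    var cue = track.cues[i];
    if (cue.text.toLowerCase().indexOf(term.toLowerCase()) !== -1) {
      hits.push(cue);
    }
  }
  if (hits.length > 0) {
    video.currentTime = hits[0].startTime;   // jump to the first occurrence
  }
  return hits;                               // the hits could also populate a results list
}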
 
For educational video websites, I would like transcript-style and even outline-tree-based navigation.  Crowdsourced collections of video could become more encyclopedic and otherwise accelerate research and discovery.

Another use case is video blogging.  In addition to more intuitive video blogging and post-production software, I would like end users to be able to select arbitrary portions of multimedia content and, through extensible context menus, comment on, respond to, and interact with one another about those selections.

Another use case pertains to video-format presentations of publications and reports for general, scientific, scholarly and business communication.  For many communication needs, document elements like charts, diagrams, equations, figures, graphs, tables, and so forth, can appear in videos while also being functional objects for computing.  These video document objects can also be interactive.  MathML 3, with its presentation and content layers, can facilitate mathematical objects that interoperate robustly within and between videos.
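A rough sketch of how that could be approached with the current drafts, assuming a metadata track whose cue text carries MathML markup and a positioned <div id="overlay"> above the video (the element names and the MathML-in-cues convention are only illustrative):

// Overlay MathML carried in metadata cues on top of the video as cues become active.
var video = document.getElementById('lecture');
var overlay = document.getElementById('overlay');
var track = video.querySelector('track[kind="metadata"]').track;
track.mode = 'hidden';                              // load the cues without default rendering

track.addEventListener('cuechange', function () {
  var markup = '';
  for (var i = 0; i < track.activeCues.length; i++) {
    markup += track.activeCues[i].text;             // cue text assumed to contain MathML
  }
  overlay.innerHTML = markup;                       // a MathML-capable browser renders it in place
});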

Furthermore, it is possible that drag and drop could be supported to and from videos.  A speaker in a video could point to scientific equations as they appeared on screen, and users could then drag and drop those mathematical objects into other applications, where they would interoperate robustly within and even between videos.
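As a rough sketch building on the overlay idea above, and assuming each equation in the overlay is wrapped in an element marked draggable="true" (again, illustrative names only), native HTML5 drag and drop could carry the MathML out of the video region:

// Let viewers drag an on-screen equation out of the video overlay into another application.
var overlay = document.getElementById('overlay');
overlay.addEventListener('dragstart', function (event) {
  var equation = event.target.querySelector('math');   // assumes a draggable wrapper around each <math>
  if (equation) {
    event.dataTransfer.setData('application/mathml+xml', equation.outerHTML);
    event.dataTransfer.setData('text/plain', equation.textContent);
  }
});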
 
 

Kind regards,

Adam

 
> From: silviapfeiffer1@gmail.com
> Date: Fri, 12 Aug 2011 10:56:25 +1000
> Subject: Re: Interactive Television
> To: adamsobieski@hotmail.com
> CC: public-web-and-tv@w3.org
> 
> Hi Adam,
> 
> what you are suggesting is already possible with the current
> specification of <track> and a @kind=metadata and the xml or json
> included in a WebVTT file's cues. We just need to wait until the
> browsers have actually implemented and released it.
> 
> That's why I was more curious to find out if we have any more specific
> application needs that actually require standardisation and suggested
> analysing them.
> 
> I think what Bob is doing sounds very interesting in this context.
> ETV, ads and parental control are indeed interesting use cases. Bob:
> do you have more information on these and specification proposals?
> 
> Cheers,
> Silvia.
> 
> On Fri, Aug 12, 2011 at 1:33 AM, Adam Sobieski <adamsobieski@hotmail.com> wrote:
> > Silvia Pfeiffer,
> >
> > With regard to the HTML5 video track ideas, I disagree with the indicated
> > approach of analyzing exact needs and use cases first; the benefits that the
> > particular concepts bring to HTML5 video tracks are of sufficiently broad and
> > general use that an exact needs and use cases approach seems suboptimal.
> > The best option is both XML and JSON.  Browser teams already have both XML
> > and JSON parsers and libraries handy and some data structures and heuristics
> > might be reusable between XML and JSON implementations for the described
> > <track/> object.  I like the extensibility of XML, and what I like about the
> > JSON approach is the convenient JavaScript syntax in the callback functions.
> >
> > I previously forwarded the XML ideas to the HTML5 working group.  Perhaps
> > you can send an email describing the JSON <track/> idea to the HTML5 video
> > working group.
> >
> > Annotation as the kind for post-produced overlays sounds worthwhile.  I
> > agree that exploring use cases for that makes sense.  That could include an
> > automation of some DHTML premises and perhaps some XAML concepts.
> >
> >
> > Kind regards,
> >
> > Adam Sobieski
> >
> >
> >> From: silviapfeiffer1@gmail.com
> >> Date: Thu, 11 Aug 2011 17:08:33 +1000
> >> Subject: Re: Interactive Television
> >> To: adamsobieski@hotmail.com
> >> CC: scott.bradley.wilson@gmail.com; public-web-and-tv@w3.org
> >>
> >> On Thu, Aug 11, 2011 at 4:35 PM, Adam Sobieski <adamsobieski@hotmail.com>
> >> wrote:
> >> > Hello Silvia,
> >> >
> >> > I like the idea about a new kind or kinds, possibly "xml" and/or "json".
> >> > Those could be catchalls for usage scenarios beyond the other kinds of
> >> > subtitles, captions, descriptions, chapters and metadata. Another
> >> > possible
> >> > kind is outlines, which resembles chapters.
> >>
> >> Metadata is already a catch-all. I think we first need to analyse what
> >> exact needs / use cases we have before making a decision.
> >>
> >>
> >> > Your example about DHTML overlays with hyperlinks sounds interesting;
> >> > DHTML
> >> > overlays are possible wherever text and graphics presently occur atop
> >> > video
> >> > from video post-production techniques and new enhanced features are
> >> > possible
> >> > with hypertext. Video post-production techniques can make use of HTML5
> >> > video
> >> > capabilities, DHTML and overlays, and in so doing might provide for entirely
> >> > new
> >> > features.
> >>
> >> We should then consider asking for a @kind=annotation and specify this
> >> use case further. Also, JSON may not necessarily be the best solution
> >> for this use case. We should experiment with JavaScript first. This
> >> way we can identify the best possible solution.
> >>
> >> > I think that more kinds would alleviate the misunderstanding that what was
> >> > under discussion was some sort of alternative to WebVTT. WebVTT seems apt
> >> > for its set of
> >> > kinds and could even be of use in convergence scenarios such as digital
> >> > cable. New kinds for HTML5 video tracks, "xml" and/or "json", can allow
> >> > for
> >> > more Flash-like functionality with HTML5. By specifying an XML format
> >> > with
> >> > at least attributes for temporal intervals, any XML that makes use of
> >> > that
> >> > XMLNS could include time synchronization data that <track/> expects.
> >>
> >> Yes, WebVTT is designed to be a general container for
> >> time-synchronized data. But as I said: we should analyse the use cases
> >> in more detail and come up with better means of semantically labelling
> >> the included data than by format.
> >>
> >> > With regard to HTML5 video, it seems that new kinds are exciting to
> >> > discuss.
> >>
> >> Very much so!
> >>
> >> Cheers,
> >> Silvia.
> >

Received on Friday, 12 August 2011 20:54:19 UTC