Wiki update

Hi All,

I’ve made a number of changes to the Wiki section that discusses implementation alternatives:

  1.  I propose that what constitutes a TextTrack be defined per media resource format, and a draft definition is provided for MPEG-2 TS. This contrasts with earlier versions, which state that anything that is not an audio/video stream rendered by the UA should be treated as a TextTrack. The reason for this proposed change is that, at least in MPEG-2 TS, there are many stream types that are neither video, audio, nor legitimate TextTracks. An implication of this proposal is that audio and video streams not rendered by the UA are not made available as TextTracks. This seems to make sense, as no use case has been identified for doing so, and it could be quite a burden on the UA to create Cues at video stream rates. Does anyone have use cases where video/audio tracks not rendered by the UA should be made available to the app as a metadata TextTrack? (A rough sketch of the per-format classification this implies follows the list.)
  2.  A third implementation alternative is proposed – just use the existing inBandMetadataTrackDispatchType attribute for exposing track metadata. This would greatly reduce the need for a companion spec to HTML5. It does, of course, limit the availability of track metadata to TextTracks with kind == ‘metadata’. However, it’s not clear why a Web app would need track metadata for tracks rendered by the UA. (An app-side sketch of this alternative also follows the list.)
  3.  The implementation alternatives have been rearranged to highlight what’s common across all of them – basically, rules for creating video, audio, and text tracks by media resource type. The only thing that really differs between the alternatives is how metadata is exposed.
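
To make #1 concrete, here is a rough sketch (mine, not from the wiki draft) of the kind of per-format classification rule the proposal implies for MPEG-2 TS, using stream_type values from ISO/IEC 13818-1. The categories and exact assignments are illustrative only; the draft definition on the wiki is what’s being proposed:

    // Hypothetical classification of MPEG-2 TS stream_type values.
    // Many values fall into none of the three buckets, which is why
    // "anything that isn't audio/video is a TextTrack" breaks down.
    type TrackCategory = 'video' | 'audio' | 'text' | 'other';

    function classifyStreamType(streamType: number): TrackCategory {
      switch (streamType) {
        case 0x01: case 0x02: case 0x1B:  // MPEG-1/2 video, AVC
          return 'video';
        case 0x03: case 0x04: case 0x0F:  // MPEG-1/2 audio, ADTS AAC
          return 'audio';
        case 0x06:  // PES private data (e.g. DVB subtitles/teletext, per descriptors)
        case 0x15:  // metadata carried in PES packets
          return 'text';
        default:    // private sections, DSM-CC, etc.
          return 'other';  // not exposed as a TextTrack under this proposal
      }
    }

Under the earlier wording, everything in the ‘other’ bucket (plus any unrendered audio/video) would have had to surface as a metadata TextTrack.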

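Likewise, here is a hedged sketch of what alternative #2 looks like from the app side, using only existing HTML5 API (textTracks, the addtrack event, and inBandMetadataTrackDispatchType). The dispatch-type value and the cue handling are placeholders; exactly what the UA puts there per format is what any companion text would pin down:

    // App-side view of alternative #2: one metadata TextTrack per
    // in-band data stream; the app keys off the dispatch type to
    // decide how to parse cue payloads.
    const video = document.querySelector('video')!;

    video.textTracks.addEventListener('addtrack', (e: TrackEvent) => {
      const track = e.track as TextTrack | null;
      if (!track || track.kind !== 'metadata') return;

      // Format-specific identifier, e.g. derived from the TS
      // stream_type and descriptors.
      console.log('dispatch type:', track.inBandMetadataTrackDispatchType);

      track.mode = 'hidden';  // deliver cues without rendering them
      track.addEventListener('cuechange', () => {
        const cues = track.activeCues;
        if (!cues) return;
        for (let i = 0; i < cues.length; i++) {
          // parse cues[i] payload according to the dispatch type
        }
      });
    });

Note this gives the app track metadata only for kind == ‘metadata’ tracks, which is exactly the limitation flagged above.
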
What do folks think about #1 and #2? I am especially interested to know if there are use cases where #2 would be too limiting. I find myself leaning towards #2 as the alternative to pursue, unless there is a use case it doesn’t support.

Regards,
Bob

Received on Thursday, 9 January 2014 16:45:43 UTC