Media Telecon Minutes for Wednesday 18 August

Minutes from today's HTML-A11Y Task Force Media Subteam teleconference are provided
below in text. They are also available in HTML from:


                                                           - DRAFT -

                                                       HTML-A11Y telecon

18 Aug 2010

   See also: IRC log


   Attendees

          Judy, Janina, John_Foliot, Eric_Carlson, silvia, Sean_Hayes




     * Topics
         1. Identify Scribe
         2. Actions Review
         3. User Requirements Status & Next Steps
         4. Synchronizing Asynchronous Alternative Media Resources Followup
         5. next meetings, confirm date/time, choose scribe
         6. WebSRT Overview
     * Summary of Action Items

   <scribe> agenda: this

   agenda: Candidate Gap Analysis: WebSRT; WMML, Etc.

Identify Scribe

   <scribe> scribe: janina

   jf: Emphasize that all user reqs are reqs; however, what the tech implications are is what we might profitably discuss

   jb: Expect my and Janina's edits to the user reqs doc by


   jf: We agree to drop the other two actions assigned to Judy. ...

   action-32: drop

   <trackbot> ACTION-32 Follow up w/ Gunnar Hellstrom on comprehensiveness of secondary signed channel requirements notes

   action-43: drop

   <trackbot> ACTION-43 Seek deaf-blind representation in requirements gathering process notes added

Actions Review

   <scribe> ACTION: judy to find location for ncam extended description demos [recorded in]

   <trackbot> Created ACTION-53 - Find location for ncam extended description demos [on Judy Brewer - due 2010-08-25].

   js: We have examples of extended descriptions from NCAM that are about 10 years old and we have permission to make them
   available to the W3C public.

   jb: I specifically asked for content we could share.

   js: I confirmed that

   jf: This is exciting, gives us something to illustrate what we're talking about.
   ... Also, we should note that Mozilla just announced SMIL support beginning with the next nightly build.
   ... Comment came from Chris Blizzard.

   jb: Q to Sean and/or Eric: Is this demo helpful to you?

   sh: I've built similar demo for my use. We can add as another data point.

   ec: Don't think it will give me anything I don't have now, but think it will be extremely useful going forward.

   <JF> Silvia: great to get these content files together not just for demos, but for testing as well

   <JF> for testing the funcitionality of the browsers

   <JF> Judy to follow up to ensure that Geoff's NCAM files can be used for that purpose

   <silvia> ACTION: judy to follow up that NCAM files can be used in HTML5 testbed [recorded in]

   <trackbot> Created ACTION-54 - Follow up that NCAM files can be used in HTML5 testbed [on Judy Brewer - due

   <JF> Janina will secure the files and temporarily host on her personal server

User Requirements Status & Next Steps

Synchronizing Asynchronous Alternative Media Resources Followup

next meetings, confirm date/time, choose scribe

WebSRT Overview

   sp: WebSRT is written so that it wouldn't prevent other formats' coexistence
   ... has tracks similar to our previous proposed tracks

   jf: are there significant differences in the track element?

   sp: we were grouping alternatives, but that was deemed too complex. The current design sees all external tracks
   individually, and can enable any one, or all
   ... very flexible now, and i don't have a problem with it.
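As a sketch of the model Silvia describes -- independent external tracks that the user agent can enable singly or in combination -- markup along the lines of the HTML5 track proposal might look like this (the file names, labels, and language codes are invented for illustration):

```html
<!-- Hypothetical example: each alternative is an independent external
     track; the user agent can enable any one, or several at once. -->
<video src="lecture.webm" controls>
  <track kind="captions"     src="lecture-captions-en.srt" srclang="en" label="English captions">
  <track kind="subtitles"    src="lecture-subs-de.srt"     srclang="de" label="Deutsch">
  <track kind="descriptions" src="lecture-descs-en.srt"    srclang="en" label="Audio descriptions">
</video>
```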

   jb: Are you saying it seems to address most of our user reqs?

   jf: Will it allow us to deliver our reqs?

   jb: It's a clarification.

   sp: Yes.
   ... supports external text tracks similar to ours;
   ... Whether it meets all, will take more analysis, think the answer now is "probably yes."

   ec: One place it does not meet our reqs is that it is currently restricted to text files, mainly because that's all the
   spec addresses at this point.
   ... Should also work for video and audio alternatives.
   ... I believe it's clear Ian was thinking about that.

   sp: He's solving the text stuff now.

   jb: Our requirements were explicit on external files?

   sp: yes

   jf: It's explicit that we support sign language translation, and as an external file

   <silvia> (PP-4) Typically, alternative content resources are created by different entities to the ones that create the
   media content. They may even be in different countries and not be allowed to re-publish the other one's content. It is
   important to be able to host these resources separately, associate them together through the Web page author, and
   eventually play them back synchronously to the user.

   jf: track element seems ok?

   sp: minor things, but mostly good

   jb: and we're still planning to create a matrix that allows us to check item by item against our user reqs?

   [general agreement]

   js: Perhaps we should invite Ian to present WebSRT to us?

   [group response: yes, in the future -- perhaps several weeks]

   sp: Next thing is a javascript api

   sp: we had such a proposal
   ... Ian's js more comprehensive than ours
   ... deals with integration issues, includes dynamically created text tracks as well as internal and external text tracks
   ... Seems to be a requirement for this from some people.
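For context, a rough sketch of what script-created text tracks look like in the API that eventually shipped in browsers (addTextTrack and VTTCue); at the time of this meeting the proposal differed in detail, and the element id and cue text here are invented:

```html
<video id="v" src="talk.webm" controls></video>
<script>
  // Hypothetical sketch: a caption track created entirely from script,
  // alongside any in-band or external <track> children.
  var video = document.getElementById('v');
  var track = video.addTextTrack('captions', 'Live captions', 'en');
  track.mode = 'showing';                       // enable rendering
  track.addCue(new VTTCue(0, 4, 'Hello and welcome.'));
</script>
```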

   jf: Is that leveraging local storage? For later repurposing?

   sp: I suppose you could.

   sh: Who expressed this requirement?

   jb: I strongly support us capturing this to our reqs.
   ... Wondering if we should also capture real time video description

   sh: I'm still missing the connection to what we inject via js ...

   jb: There's a common use of captioning in real time

   sp: I could ask how realtime captioning is envisioned on the WHATWG list
   ... there are things in websrt i find problematic, such as styling in the js
   ... what we have now is ian's first draft -- he usually waits several weeks before coming back to it and incorporating
   ... next, websrt rendering ...
   ... there's spec on how to render captions and subtitles, all that's addressed so far
   ... only rendering is over the top of the video
   ... i've started a discussion on this, as this isn't adequate -- the user should be able to define where the rendering happens

   jf: 3PlayMedia media player has a two panel approach, transcript on right panel, video on left
   ... word being spoken is highlighted in transcript

   sp: ian's point is that this is a specialized display and authors can easily create their own display

   jb: issue of caption positioning is a common concern.
   ... did we sufficiently capture positioning as user disposes?
   ... do we also have the highlighting?

   [general agreement to explicitly state a req on highlighting in captions, transcripts, text alternatives]

   sp: do we need text with audio?

   jf: we've mentioned this in the past in several use cases

   ec: i think this isn't quite right, you can't have captions in an audio element, but you could sync text to the audio

   sp: as it stands, there's no rendered video port during audio playback, but we need to provide one in the case of
   captioning for audio

   ec: believe the current approach is put into a video element if you want captioning with audio
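Eric's suggestion -- placing audio-only content in a video element so that caption rendering has a visual region -- might be sketched as follows (file names are hypothetical):

```html
<!-- Hypothetical: an audio-only resource placed in a <video> element
     so that a visual region exists for rendering the captions. -->
<video src="podcast.ogg" controls>
  <track kind="captions" src="podcast-captions.srt" srclang="en" label="Captions">
</video>
```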

   js: it's important that we allow a third party to provide captions, and for the user to elect them. this wouldn't support
   changing from an audio to a video element

   jf: perfect use case here at stanford. it happens all the time

   ec: but at each step of that content development chain, somebody has to update something on the web
   ... the person who makes the content available, whether original author or third party, has to update a web page
   to point to those files
   ... party providing caption needs to wrap the mpeg href in video element on the page where the captions are provided.
   the source page needn't be changed.
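Eric's scenario -- the captioning party wrapping the original media on their own page, leaving the source page untouched -- might look like this (all URLs and file names are hypothetical):

```html
<!-- Hypothetical third-party page: the publisher's video is referenced
     as-is; only this page, hosted by the captioner, adds the caption
     track. The publisher's own page is never changed. -->
<video src="http://publisher.example.com/keynote.mp4" controls>
  <track kind="captions" src="keynote-captions.srt" srclang="en"
         label="Captions (third party)">
</video>
```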

   sh: so, how do you pause audio if no visual rendering?

   sp: there's a controls attrib
   ... turning on controls, there's something rendered.

   sh: we should also have visual if i add tracks

   jf: if audio doesn't have track, it needn't display. but as soon as it has track, there should be display region

   sp: i think the logic should be different ... should be based on controls

   ec: if we depend on controls, so that displaying controls renders captions, shouldn't video tracks inside a file
   with visual media also show?

   sp: don't think so, it's put into an audio element for a reason. controls should bring up controls, and captions
   because they're a visual description of the audio

   ec: don't agree.

   js: do we have enough to meet next week?

   jf: i know silvia wants to develop this further.

   [agreement to meet next week]

   jf: a few minutes to look at controls ...

   sp: perhaps eric and i can discuss this on the list?

   ec: certainly

   sp: last is the websrt format itself, not enough time today, perhaps next week?
   ... pros and cons of the file format itself ...

   jf: also need a serious look at ttml and at smil3
   ... as has been pointed out, svg is using a subset of smil, and as I mentioned, mozilla is about to support smil
   ... we should give all these a fair analysis and review

   jb: we need to not just take the one we land on first -- we should schedule our talks through the others, so we're
   complete in our review
   ... would like to know more about mozilla smil support

   jf: absolutely

   sp: if not enough people next week, perhaps we can invite ian for an intro

   jf: sean, any thought of who to invite for ttml?

   sh: i've been working on a doc mapping ttml -- should be ready in about a week.

   [agreement that we could invite ian as soon as sep 8]

   [we have sep 1 to discuss mapping matrix]

   sounds right, silvia



Summary of Action Items

   [NEW] ACTION: judy to find location for ncam extended description demos [recorded in]
   [NEW] ACTION: judy to follow up that NCAM files can be used in HTML5 testbed [recorded in]

   [End of minutes]


Janina Sajka,	Phone:	+1.443.300.2200

Chair, Open Accessibility	
Linux Foundation

Chair, Protocols & Formats
Web Accessibility Initiative
World Wide Web Consortium (W3C)

Received on Friday, 20 August 2010 03:45:43 UTC