Minutes from the Media Subteam Teleconference on 18 May

Minutes from today's HTML-A11Y Task Force Media Subteam are provided
below in text and are available as hypertext at:

http://www.w3.org/2011/05/18-html-a11y-minutes.html


   W3C

                                                           - DRAFT -

                                          HTML Accessibility Task Force Teleconference

18 May 2011

   See also: IRC log

Attendees

   Present
          JF, silvia, Janina, Bob_Lund, Eric

   Regrets
          Judy_Brewer

   Chair
          Janina_Sajka

   Scribe
          JF

Contents

     * Topics
         1. Identify Scribe
         2. Clean Audio: Adjusting our Kinds and our docs
         3. Paused Media: Continuing Discussion
     * Summary of Action Items
     __________________________________________________________________________________________________________________

   <trackbot> Date: 18 May 2011

   <scribe> scribe: JF

   <scribe> agenda: this

Identify Scribe

   <scribe> scribe: JF

Clean Audio: Adjusting our Kinds and our docs

   JS: this is what Sean discovered about Clear Audio - it's actually Clean Audio

   once we made this discovery, we were able to find documentation

   suggests that we change our docs as well as our list of @kind values

   initial premise is that in a 5.1 setting, the center channel is dedicated to dialogue, thus if we can reduce/remove the
   other channels it will allow for better comprehension

   seems that this is UK-based, although picked up by ETSI

   see: www.etsi.org

   JS: seems that Silvia has already corrected the wiki

   we need to discuss changing our @kind for this

   is there any dissension?

   JS: less clear are the implications of supporting this understanding of clean audio

   with respect to controls and our user-requirements

   SP: looked at clean audio as a track-level problem

   we would have a track that had all audio content, and a second track with only the speech audio

   however, when we analyzed this, there are actually two ways of doing and delivering this

   one is a 3 channel - left, right and speech

   that remains one audio track (interleaved)

   5.1 audio is similar as well

   but now they are replacing the center channel with just speech, which allows people to increase the volume of the
   center channel to replicate clean audio

   SP: question is how to get control of the center track

   EC: an additional problem, and a serious one, is that most container formats cannot carry 3- or 5-channel audio

   it can only be stereo or mono

   for example, MPEG-1 and MP3

   very little today is encoded in anything but stereo

   SP: the 3 relevant formats can (I believe) support multi-track audio, but there are likely few files like that in
   the wild

   and it pushes the problem into a larger area, and one that we cannot handle with JavaScript

   prefer we create a new @kind called 'speech' which would be an individual speech track

   EC: think this is the right way forward as well

   SP: the API we have should handle this. the ETSI spec is just a spec; don't believe there is any implementation in the
   wild to date

   (Eric and Janina are in provisional agreement; Bob doesn't have an informed opinion)

   SP: suggest we put this on the list, see if there is any opposition to @kind of speech, and if no opposition then
   change the bugs to reflect this change

   JS: perhaps invite somebody with experience with Clean audio to speak with us

   SP: would LOVE to have somebody with experience to consult with us

   if janina can find somebody that would be highly appreciated
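   [Editor's sketch] The @kind proposal discussed above might look roughly like the following. This is hypothetical markup only: a "speech" value for @kind was being proposed in this meeting and is not adopted specification text, and the separate speech-only resource (movie-speech.mp4) and the kind attribute on audio are illustrative assumptions, not existing HTML features.

```html
<!-- Hypothetical sketch only: "speech" as a @kind value was a proposal
     in these minutes, not adopted specification text. The idea is a
     speech-only audio rendition alongside the full mix, so a user agent
     could let the user select or boost dialogue ("clean audio"). -->
<video src="movie.mp4" controls>
  <!-- movie.mp4 carries the full audio mix in-band -->
</video>

<!-- An author-synchronized, speech-only rendition; the kind attribute
     here is the proposed extension under discussion, not adopted HTML. -->
<audio src="movie-speech.mp4" kind="speech"></audio>
```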

Paused Media: Continuing Discussion

   SP: does it make sense to start a new thread?

   (Question was about Clean Audio thread on the list)

   JS: Paused media, where are we?

   there has been a lot of discussion on list, including some new issues Bob has introduced

   not clear where to start

   SP: an interesting week on that topic

   we have trickled out a lot of thoughts

   in my mind, what we have started doing and where we are is very unclear

   started a list a few days ago, and it should be updated with the dimensions we are trying to grapple with

   one dimension is whether we have a graphical or text only browser

   may be a side issue, but spills into this discussion as a use-case

   the other is the video player design, how do we expose that textually?

   to the accessibility API?

   then what to do with the representative frame?

   relates back to the missing autoplay attribute

   another use-case is how we represent the video content short description, a longer description and full transcript

   and then the still image need for short and long descriptions

   SP: trying to keep the use-case clear and separate without looking at 'solutions' at this time

   <janina> scribenick: janina

   <janina> jf: If I'm understanding Silvia correctly, then I agree with this.

   <janina> jf: Believe it comprehends the distinctions I've been trying to represent all along.

   <janina> jf: If yes, think we've captured the requirements.

   <silvia> summary is also at http://lists.w3.org/Archives/Public/public-html-a11y/2011May/0367.html

   <janina> ec: Well, I may disagree

   <janina> jf: Tried carefully not to go to solutions but to state the needs, the user requirements

   <janina> jf: Don't know how to answer "don't agree" when it's about user reqs ...

   <janina> sp: Eric, what do you disagree with

   <janina> ec: separate representation of the first frame

   <janina> sp: that's the next step, i haven't gone there yet

   <janina> sp: Next step would be to group things together, and create candidate markup

   <janina> sp: trying to keep emotion out of this and find the best markup, based on a good understanding of the requirements

   <janina> jf: looking at the list in email -- graphical browser -- but think textual description for still images is
   missing

   <janina> jf: if no visual text, we're not obligated to supply text to AT

   <janina> jf: But, if graphic, we need the graphic alternatives, short and long

   <janina> jf: so what happens when there's no poster summary, but a poster?

   <janina> jf: also agree that whether a first frame or external image is immaterial

   <janina> ec: thought we were proposing descriptions of video to supply to AT for those who couldn't see the image

   <janina> jf: if there's imagery on screen, it requires both a short and long description mechanism

   <janina> jf: image and imagery are substantively the same for this purpose

   <janina> janina: suggesting the short and long descriptions are fully in sync with longstanding wcag guidance

   <janina> sp: do we need separate descriptions for video and the paused representation

   <janina> jf: yes, because they describe different things

   <janina> sp: why not one follow the other?

   <janina> jf: we have the video element, it's the player chrome. it needs a11y name, etc., and we put things in this
   player including the video/audio when we press play

   <janina> jf: the other thing inside that container, when the video isn't playing is a still image which needs
   description

   <janina> sp: but if it was presented in the summary, does it need to be repeated?

   <janina> jf: depends on how presented to the user

   <janina> jf: what happens when i pause at 4 secs? poster, no.

   <janina> jf: not that we need to describe every frame, but it may be useful to describe certain internal video frames
   <janina> sp: so i want to go back to the use case of chapters, subchapters, etc

   <janina> jf: except, this may be a useful way to handle that

   <janina> sp: we remove an entire level of complexity if we can sequentially satisfy this

   <janina> jf: want to check with several people for feedback

   <janina> jf: question of what happens when the poster frame doesn't actually say anything about the video

   <janina> sp: You can still add that text.

   <janina> ec: by definition whatever is in the first frame is related to the video

   <janina> ec: if we decide we need a second representation of that frame, we also need to decide when at presents that
   data

   <janina> ec: e.g., if autoplay is on, you never get there

   <janina> sp: which is why i thought it should be regarded as background to the video

   <janina> jf: but it's not a background image, background image has certain meanings, and this isn't it

   <janina> sp: yes, same problem with button

   <janina> jf: probably not likely though

   <janina> sp: with css?

   <janina> janina: pf is looking at it, rather behind on our css coordination

   <janina> jf: the issue with autoplay is one key reason why i was pushing child element

   <janina> sp: but it also doesn't solve the problem

   <janina> jf: but it's a child of video in the dom, whether or not it takes an external image

   <janina> sp: not technically possible

   <janina> jf: not so sure, it can take on properties, e.g. description

   <janina> sp: what to do in the fallback case? when it doesn't understand video element

   <janina> sp: a broken image in the browser

   <janina> jf: it's not an image element, it's a new element, so older browsers that don't understand video won't
   understand this one either

   <janina> ec: that frame is only displayed until video is started, and never again

   <janina> jf: users for whom this matters will all turn autoplay off

   <janina> janina: absolutely

   <janina> jf: so users for whom this matters will see all the descriptions before starting the video

   <janina> janina: what about when the video is stopped and the page reloaded

   <janina> ec: it's reset

   <janina> jf: is that wrong?

   <janina> ec: just don't understand why you need to separately describe something that's only there as a stand-in until
   the video starts

   <janina> sp: we need to solve the problem that covers both circumstances, first frame, chosen frame, or external image

   <janina> sp: why not a div or p inside the frame

   <janina> jf: no, because it has no semantic meaning

   <janina> jf: are you saying the description of the static image is the same as the description of the video?

   <janina> sp: i think it's one thing that goes into the default fallback

   <video
     src="file.mp4"
     poster="file.png">
     <p>A Clockwork Orange Trailer</p>
     <p>(A short description of what the sighted user sees) A clockwork orange poster</p>
     How do I provide the long textual description that is 4 paragraphs long?
   </video>

   <silvia> <video src="file.mp4" poster="file.png" aria-describedby="posteralt videosummary">

   <janina> jf: remember the short description is read automatically; but the longer description is only invoked by the user

   <silvia> <p id="videosummary">A Clockwork Orange Trailer</p>

   <janina> sp: we'll always need to go to a link for longer; which is where transcription came in

   <silvia> <p id="posteralt">A clockwork orange poster</p>

   <silvia> </video>

   <janina> jf: but the longer description is not part of the transcript

   <janina> jf: also the id attrib doesn't map to a11y apis

   <janina> sp: can we focus just on short alts for now?

   <janina> jf: i'm already seeing you have children of video, which is correct, imho

   <janina> sp: nice that video gives a way to hide text on page without hacks

   <silvia> <video src="file.mp4" poster="file.png" aria-describedby="posteralt videosummary">

   <silvia> <p id="videosummary">A Clockwork Orange Trailer (<a href="transcript.html">Transcript</a>)</p>

   <silvia> <p id="posteralt">Poster frame is a clockwork orange movie poster (<a
   href="transcript.html#posterlongdesc">long description</a>)</p>

   <silvia> </video>

   <silvia> actually, here's an even better one:

   <silvia> <video src="file.mp4" poster="file.png" aria-describedby="posteralt videosummary"
   transcript="transcript.html">

   <silvia> <p id="videosummary">A Clockwork Orange Trailer (<a href="transcript.html">Transcript</a>)</p>

   <silvia> <p id="posteralt">Poster frame is a clockwork orange movie poster (<a
   href="transcript.html#posterlongdesc">long description</a>)</p>

   <silvia> <p><a href="file.mp4">Download the video file</a></p>

   <silvia> </video>

   <janina> sp: will allow a right click on transcript

   <janina> jf: this feels right

   <janina> jf: want to review aria roles, to make sure we can get that in

   <janina> jf: think this might work, will check around with several people

   <janina> sp: we should make sure this solves all our reqs

Summary of Action Items

   [End of minutes]
     __________________________________________________________________________________________________________________


    Minutes formatted by David Booth's scribe.perl version 1.136 (CVS log)
    $Date: 2011/05/18 23:31:19 $
     __________________________________________________________________________________________________________________

Found Scribe: JF
Present: JF silvia Janina Bob_Lund Eric
Regrets: Judy_Brewer
Found Date: 18 May 2011
Guessing minutes URL: http://www.w3.org/2011/05/18-html-a11y-minutes.html

-- 

Janina Sajka,	Phone:	+1.443.300.2200
		sip:janina@asterisk.rednote.net

Chair, Open Accessibility	janina@a11y.org	
Linux Foundation		http://a11y.org

Chair, Protocols & Formats
Web Accessibility Initiative	http://www.w3.org/wai/pf
World Wide Web Consortium (W3C)

Received on Wednesday, 18 May 2011 23:36:11 UTC