- From: Thierry MICHEL <tmichel@w3.org>
- Date: Sat, 13 Aug 2005 00:32:17 +0200
- To: "Glenn A. Adams" <gadams@xfsi.com>
- CC: Al Gilman <Alfred.S.Gilman@IEEE.org>, W3C Public TTWG <public-tt@w3.org>
Al, if you require any further follow-up please do so, and if you are
satisfied with the TTWG response, please acknowledge it by replying to
this mail and copying the TTWG public mailing list: public-tt@w3.org

Regards,
Thierry Michel

Glenn A. Adams wrote:
>
> Dear Al,
>
> Thank you for your comments, [1] through [4], on the DFXP Last Call
> Working Draft. The TT WG has concluded its review of your comments and
> has agreed upon the following responses.
>
> If you require any further follow-up, then please do so no later than
> September 1, and please forward your follow-up to <public-tt@w3.org>.
>
> Regards,
> Glenn Adams
> Chair, Timed Text Working Group
>
> ************************************************************************
>
> Citations:
>
> [1] http://lists.w3.org/Archives/Public/public-tt/2005Apr/0024.html
> [2] http://lists.w3.org/Archives/Public/public-tt/2005Apr/0036.html
> [3] http://lists.w3.org/Archives/Public/public-tt/2005Apr/0038.html
> [4] http://lists.w3.org/Archives/Public/public-tt/2005Apr/0043.html
>
> ************************************************************************
>
> Comment - Issue #6 [1]; 11 Apr 2005 10:50:18 -0400
>
> Thank you for offering us the chance to refine our input for a little
> longer.
>
> Since you are meeting face-to-face, let me offer the following thoughts
> of an individual and preliminary nature.
>
> Key thoughts:
>
> - if the user can receive the content on a programmable device, we
>   need to develop the [Web] distribution options and content
>   constraints [with format support] to serve alternative (adaptive)
>   presentation for individuals.
>
> - there is going to be a lot of content that sees the light of
>   intra-broadcast-industry pipelines in DFXP encoding. Deferring
>   adaptive use to the availability of an AFXP spec is not necessarily
>   an acceptable policy from the standpoint of disability access.
> While the DFXP specification may not define a CPE player for the format
> per se, there is still reason to consider use cases for people with
> disabilities which require an alternate presentation of the material.
>
> Just because there is no anticipation that the DFXP would be used
> directly in mass-market set-top-box processes, it doesn't mean that
> there aren't authoring-time requirements on the content that should
> be supported in the intermediate form, i.e. the DFXP.
>
> Making the DFXP available to a transcoder of the user's choice is
> one way that the content encoded in the DFXP could be served to
> a person with a disability requiring alternate presentation.
>
> Or the content could be browsed offline using a mainstream XML reader
> and a schema-aware assistive technology.
>
> [start use scenario]
>
> Here is a scenario sketch to illustrate what I mean:
>
> There is a meeting held by videoconference over a corporate extranet.
> To serve strategic partners in other countries and technology
> platforms, Internet technologies are used, including subtitles generated
> in real time and distributed using DFXP as an intermediate form.
>
> One of the people whose job requires interacting with the content of
> the meeting is Deaf and blind. So a complete log of the meeting is
> kept for this participant's offline review.
>
> supposition: The DFXP, as an XML format, is the dataset of choice
> on which to base this person's browse of what transpired in the
> session. Not just the formal statement of the decisions that were
> reached, but the dialog that led to the decisions.
>
> This would mean that the DFXP would be spooled and archived with
> the audio and video. Quite possibly there would be a SMIL wrapper
> created as a replay aid. But the deaf-blind user would be reviewing
> this through a refreshable Braille device and primarily reviewing the
> timed text as transcript.
> Note that in interactive Braille as the delivery context,
> right-justification and color are not appropriate as speaker-change
> cues. So we need the speaker-change semantics available, separable
> from any particular visual-presentation effects. DFXP gives the author
> the capability to express this, but will the information be there in
> instances?
>
> So regardless of whether a collated transcript is created by a
> transcoder, or the several text streams are browsed as-is with an
> adaptive user agent, the availability of speaker identification in
> the DFXP instance (the working base for the adapted use), or at a
> minimum speaker-change events if the identity of the speakers was not
> captured, would be important in affording this user comparable quality
> of content as those receiving the same information as a real-time
> display integrated with the video and audio.
>
> [end use scenario]
>
> This is just to illustrate that there are people with disabilities
> for whom the introduction of something like the DFXP into the content
> pipelines of broadcast happenings reflects an opportunity that should
> not be wasted to raise the level of service and lower the cost of
> delivering that service.
>
> In particular, the use cases for adapted presentation do not necessarily
> presume that the DFXP would be pushed to all consumers in the
> broadcast bundle. The distribution protocol might be on an ask-for
> or 'pull' basis. And the user interaction might be in non-real-time,
> after the fact and not at speed.
>
> But the non-availability of the AFXP format as a "source in escrow"
> format for adapted uses means that the user needs the DFXP that
> gets produced to be as fit an adaptation basis as we can make it.
> This will be true while the AFXP is undefined, and will still be true
> for those situations where a copy of the DFXP can be obtained
> and a copy of a standard XML source for that content cannot. The
> latter is likely to be common even after the AFXP has been
> specified by W3C.
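To make the speaker-change point concrete, here is a minimal sketch of a DFXP fragment that records speaker identity in the ttm: metadata vocabulary rather than in visual styling. It is illustrative only: namespace declarations are omitted, and the agent identifiers, names, and dialog are hypothetical.

```xml
<tt xml:lang="en">
  <head>
    <metadata>
      <!-- Speakers declared once, by identity rather than by color
           or text alignment -->
      <ttm:agent xml:id="speaker1">
        <ttm:name type="full">A. Chairperson</ttm:name>
      </ttm:agent>
      <ttm:agent xml:id="speaker2">
        <ttm:name type="full">B. Delegate</ttm:name>
      </ttm:agent>
    </metadata>
  </head>
  <body>
    <div>
      <!-- A Braille or other non-visual renderer can announce the
           speaker change from ttm:agent even when the color and
           justification styling is dropped entirely -->
      <p begin="00:00:05" end="00:00:09" ttm:agent="speaker1">Shall we begin?</p>
      <p begin="00:00:09" end="00:00:14" ttm:agent="speaker2">Yes, please proceed.</p>
    </div>
  </body>
</tt>
```

Under this sketch, a transcoder or adaptive user agent need only compare the ttm:agent values of adjacent paragraphs to detect a speaker change.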
> Thank you (the whole group) for bringing this important technology
> this far. Best wishes for your meeting.
>
> Response:
>
> Thank you for your comments.
>
> ************************************************************************
>
> Comment - Issue #7 [2]; 13 Apr 2005 10:23:29 -0400
>
> The usual approach in W3C is to use consensus public formats as a
> pivot point so that the author can understand the binding of the
> content schema to the lingo of the domain sourcing the content, and
> the assistive technology or device independence specialist can
> understand how to map the content schema to the presentation
> possibilities of one or another delivery context. The content schema
> is consolidated through an inter-community negotiation; while the
> pool of people engaged in the negotiation need to cover the
> stakeholding domains of activity, nobody has to become an expert in
> both/all of them.
>
> http://www.w3.org/2004/06/DI-MCA-WS/
>
> On the other hand, with the Semantic Web the W3C gives us an alternate
> approach with less reliance on standard formats and more reliance on
> metadata. And the WAI seeks creative solutions using any applicable
> technology, not simply rote cant.
>
> However, a metadata approach would still require that the content
> sourcing activity a) capture and be prepared to share key information
> such as speaker identity, where readily achievable, and b) explain the
> terms in the way *they* are using [whatever format they are using as
> the source or editable form] in terms of well-established public-use
> references. The latter is a schema reconciliation or data thesaurus.
> [There is no policy-free solution, AFAIK.]
>
> The avenue of amelioration that we haven't touched on specifically
> has to do with the CR checklist. We should be looking at what
> concrete example-use activities during CR would illuminate the issues
> we have been discussing so as to make it easier to come to consensus
> that the DFXP does about what it should in these directions.
> Response:
>
> Thank you for your comments.
>
> ************************************************************************
>
> Comment - Issue #9 [3]; 21 Apr 2005 13:24:18 -0400
>
> The following three information elements defined in your metadata
> provisions[1] would appear to replicate capabilities available from
> the Dublin Core[2].
>
>   12.1.2 ttm:title
>   12.1.3 ttm:desc
>   12.1.4 ttm:copyright
>
> Please consult with the Dublin Core Usage Board or some expert well
> versed in their opinions to see if the information intended to be
> conveyed by these three elements is adequately expressed by existing
> Dublin Core terms. If there are expressions in terms of Dublin Core
> terms that convey what you need to convey, please use them.
>
> Response:
>
> The TT WG believes that (1) it is important to define standard,
> interoperable vocabulary to express title, description, and copyright
> information, and (2) it is important for DFXP, as a basic profile,
> to maximize self-containment of vocabulary definitions and usage,
> particularly for expressing standard interoperable information;
> therefore, the TT WG prefers to retain this vocabulary. Recognizing
> that there will be some interest in the use of DC, we will add
> informative information that indicates the corresponding DC vocabulary
> that may additionally be used by authors to express this information.
>
> Finally, the TT WG has carefully reviewed the semantics and intended
> use cases of the above three metadata elements and compared these with
> similarly named items in the Dublin Core vocabulary. After this
> review, we concluded that there is sufficient difference of usage and
> intended semantics to retain these items in the TT AF metadata
> vocabulary. The DC metadata vocabulary may be used alongside this
> vocabulary as desired by an author.
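As an illustrative sketch of the coexistence the response describes, the fragment below carries both the DFXP metadata elements and the corresponding Dublin Core terms side by side. The title, description, and copyright values are hypothetical; the dc: prefix is assumed to be bound to the Dublin Core elements namespace, and other namespace declarations are omitted.

```xml
<tt xml:lang="en">
  <head>
    <metadata>
      <!-- DFXP's own self-contained, interoperable metadata vocabulary -->
      <ttm:title>Quarterly Partner Meeting - Subtitle Track</ttm:title>
      <ttm:desc>Real-time subtitles from the extranet videoconference.</ttm:desc>
      <ttm:copyright>Copyright 2005 Example Corp.</ttm:copyright>
      <!-- Dublin Core terms expressing comparable information,
           used alongside as the response permits -->
      <dc:title>Quarterly Partner Meeting - Subtitle Track</dc:title>
      <dc:description>Real-time subtitles from the extranet videoconference.</dc:description>
      <dc:rights>Copyright 2005 Example Corp.</dc:rights>
    </metadata>
  </head>
  <body/>
</tt>
```

A processor interested only in the basic profile can read the ttm: elements and ignore the foreign dc: vocabulary, while a DC-aware harvester can do the reverse.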
> ************************************************************************
>
> Comment - Issue #13 [4]; 25 Apr 2005 14:56:43 -0400
>
> <background>
>
> Please refer to the 'ttm:role' attribute as defined in the current
> Timed Text DFXP specification.
>
> http://www.w3.org/TR/2005/WD-ttaf1-dfxp-20050321/#metadata-attribute-role
>
> A caucus of WCAG and UAAG participants concluded that they might want
> an explicit 'transcription' value for such a role.
>
> http://lists.w3.org/Archives/Member/w3c-wai-cg/2005AprJun/thread.html#35
>
> </background>
>
> <comment>
>
> User agents need clear indications in the format of what text
> corresponds to speech in some corresponding audio segment. This is
> needed in order for the User Agents to satisfy UAAG Checkpoint 2.3.
>
> http://www.w3.org/TR/UAAG10/guidelines.html#tech-conditional-content
>
> We realize that the DFXP will most often be transcoded into another
> format before transmission to the User Agent. However, in order for
> the transmitted form to have this information, the distribution format
> must be clear on this point for the transcoded results to convey the
> right information.
>
> Is this information recognizable from the existing format as it
> stands? If so, how?
>
> Or should the 'ttm:role' attribute have a value of 'transcription'
> defined?
>
> </comment>
>
> Response:
>
> We are not certain we understand this comment. DFXP does not
> intrinsically support a conditional content construct, such as
> described by the DI Select working draft
> (http://www.w3.org/TR/2005/WD-cselection-20050502/).
>
> Nonetheless, if we construe that this comment is essentially asking
> for an additional standard enumeration value of "transcription" for
> the attribute ttm:role, then the response is: yes, this value can
> and will be added to the enumeration found in section 12.2.2 of the
> DFXP Last Call WD.
>
> ************************************************************************
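Assuming the 'transcription' value is added to the section 12.2.2 enumeration as the response promises, usage might look like the following minimal sketch. Namespace declarations are omitted and the timing values and dialog are hypothetical.

```xml
<!-- ttm:role marks this division as a transcription of the
     corresponding audio, so a user agent or transcoder can
     identify it as conditional-content text equivalent to speech -->
<div ttm:role="transcription">
  <p begin="00:01:00" end="00:01:04">We have a quorum, so let us begin.</p>
  <p begin="00:01:04" end="00:01:09">First item: review of the draft agenda.</p>
</div>
```

Because ttm:role is metadata rather than styling, the annotation survives transcoding into a delivery format as long as the transcoder propagates it.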
Received on Friday, 12 August 2005 22:32:37 UTC