- From: Thierry MICHEL <tmichel@w3.org>
- Date: Tue, 9 Mar 2004 15:11:38 +0100
- To: <public-tt@w3.org>
Minutes of TT WG Meeting on June 11-12, 2003, Kingswood Warren, UK

Attendees

  Glenn Adams (XFSI, Chair, Scribe) [GA]
  Mike Dolan (Invited Expert) [MD] (Phone: Day 2 afternoon only)
  Markus Gylling (Daisy) [MG]
  Sean Hayes (MSFT) [SH]
  David Kirby (BBC) [DK]
  Thierry Michel (W3C) [TM]
  Dave Singer (Apple) [DS]

Guests

  John Birch (Subtitling Systems) [JB]

Regrets

  Brad Botkin (WGBH/NCAM, Invited Expert) [BB]
  Gerry Field (WGBH/NCAM, Invited Expert) [GF2]
  Geoff Freed (WGBH/NCAM, Invited Expert) [GF1]
  Markku Hakkinen (JSRPD) [MH]
  Erik Hodge (RealNetworks) [EH]

Absent

  Patrick Schmitz (Ludicrum, Invited Expert) [PS]

************************************************************************
Agenda
************************************************************************

Day 1 (Wednesday, June 11, 2003)

  0900 - 1030h Session 1
    * Administrivia
    * Action Item Review
    * Work Plan Review
    * Specification Authoring Plan Elaboration
  1030 - 1100h Break
  1100 - 1300h Session 2
    * System Model Elaboration
    * Use Cases Review and Issue Resolution
  1300 - 1400h Lunch
  1400 - 1530h Session 3
    * General Requirements Review and Issue Resolution
  1530 - 1600h Break
  1600 - 1800h Session 4
    * Content Requirements Review and Issue Resolution

Day 2 (Thursday, June 12, 2003)

  0900 - 1030h Session 1
    * Styling Requirements Review and Issue Resolution
  1030 - 1100h Break
  1100 - 1300h Session 2
    * Timing Requirements Review and Issue Resolution
  1300 - 1400h Lunch
  1400 - 1530h Session 3
    * Animation Requirements Review and Issue Resolution
  1530 - 1600h Break
  1600 - 1800h Session 4
    * Metadata Requirements Review and Issue Resolution

************************************************************************
Action Items Review
************************************************************************

Action: [GA] to contact someone from TEI and the Speech Markup Group to review requirements for speech markup and aural properties.

[DS] Asks whether the R221 items are data or metadata.
[GA] Thinks they are data in our (TT) application domain.
[DS] May want a mechanism for attaching other metadata systems such as MPEG-7.
[GA] We might want to define some element types specifically to serve as descriptive containers, or we may wish to use a more generic mechanism such as a role attribute or a <description/> subelement.
[DS] Draws example:

  <text ...>
    <animation .../>
    <description>
      <mpeg7:foo .../>
      <rdf:RDF .../>
    </description>
    this is text
  </text>

[JB] This would be very verbose in a subtitling file. More likely to perform indirection to apply descriptions, e.g., like a stylesheet. But this is debatable (whether style is descriptive metadata).
[DK] Will see something like this re: our assignment of color based on speaker identification.
[GA] Draws example:

  <cast>
    <speaker id="george" name="George"/>
  </cast>
  <speaker idref="george"> ... utterances ... </speaker>

[SH] Feedback from MSFT is to use elements defined in SSML.
[DK] Describes BBC "Assisted Subtitling" application. Start with an existing script, then drive speech recognition: forced alignment.
[SH] Wouldn't work for live though; presumably would have live subtitling.
[DK] Correct, would have to use live subtitling for non-scripted shows. Uses automated tools to process script files (in MS Word, .TXT, etc.). Extracts scene details and dialogue.
[DK] Needs to identify shot changes using an automated process on video in order to synchronize subtitling with viewer readability rules.
[DK] Scene change is essential information, but doesn't need to know the description of the scene per se.
Coloring is used in the UK to distinguish actors by using different colors of text. The BBC uses the same color for an actor throughout the program. Other subtitling companies use different standards.
[DK] Uses a leading dash or color to indicate a change of speaker within a single scene when multiple subtitles are on screen.
[JB] May also use a leading "#" instead of "-".
[JB] Different rules often apply in subtitling for different languages as well as different markets, e.g., captioning music.
[SH] Asks if usability studies were done?
[DK] Yes, but some time ago.
[DK] A subtitle can cross shot changes within a single scene, especially during rapid shot changes. Rules allow delaying a subtitle to the next shot if a shot is too short.
[DK] After the automated process, a human subtitler will post-edit to tune. For a 26 minute program, may have 2500 to 3000 words, which is too much for presentation. Must edit out words to make it readable, otherwise it will be too fast.
[DK] AML (Audio Markup Language) [BBC internal format] used for marked up speech:

  aml
    episode
    history
      change
    settings
      assigncolours, reusecolours, outputformat, videostandard, subtitlestyle
    video file="..."
    shotchanges
      shot
        confidence, type (cut/fade), frame number
    characterlist
      character
        name, type (auto/manual), colour
    script
      scene
        title
        speaker
          name
        sync
          time (manually inserted timing marker)
        audio
          id, end, confidence, time
          text()
        speaker
        ...
    subtitles
      sub
        cumulativeStatus (for EBU format), justification, in/out time (video frame), index
        line
          index
          colour
            background, foreground
          #PCDATA or text id[ref] (references text in audio elements above)
            (text inserted directly if not in audio, e.g., " - "; may also have text() that isn't timed, e.g., punctuation or non-spoken but scripted text within a subtitle line; may have manual color assignments done by the subtitler, e.g., for shouts, etc.)

  Differences between audio and video:
    audio time=0 is the start of audio;
    video frame=0 is the start of video (first I frame),
    but video time=0 is typically not the start of audio;
    uses an "audioearly" attribute to specify the offset between video time=0 and audio time=0.

[SH] Asks whether more general time codes might be used, e.g., SMPTE?
[TM] See [2] for information on a proposed external media marker format.
[JB] Time code may be discontinuous. Need to support synchronization with an external time base, e.g., video time codes. There are issues regarding advert inserts. The main problem is that subtitling is like an adjunct.
[GA] Summary on this action item: close due to the subsequent decision to remove specific vocabulary from the requirements document.

Action: [GA] to send draft agenda for F2F by 06/06. Done.

************************************************************************
Work Plan Review
************************************************************************

Requirements Document Schedule

  May 15 - Working Draft for Public Review
  Aug 15 - Would be 3 months from 1st draft; need to publish new WD or Note by this date.

Resolution: Wait until Aug 15 to publish next WD or as Note. Group may discuss and decide either way over the next few months. Create internal WG draft revision with changes for review.

************************************************************************
Specification Authoring Plan
************************************************************************

* TT Framework
  Introductory material and common definitions.
* TT Core Vocabulary
  * Content
    * Intrinsic Text
    * Extrinsic Text
    * Logical Flowed Text
    * Presentation Flowed Text
    * Non-Flowed Text
    * Hybrid Flowed and Non-Flowed
    * Hyperlinking
    * Embedded and Non-Embedded Graphics
    * Embedded and Non-Embedded Fonts
    * Descriptive Markup
    * Conditional Content
  * Style
    * Aural Styling
    * Visual Styling
    * Tactile Styling (? not currently in requirements)
  * Timing
    * Basic Timing
    * Time Containers
      * Sequential
      * Parallel
      * Exclusive
  * Animation
    * Scrolling
    * Highlighting
    * Transitions (Fading)
  * Metadata
    * DC
* TT Core Document Types
  * Document Type 1
  * Document Type 2
  * Identify candidate document type usages in order to develop examples.
  * Profiles could be different document types but may also have semantic subsetting and extensions.

[SH] Idea that we may have multiple document types, where one uses flattened timing and the other uses hierarchical timing (par, seq).

* TT Extension Vocabulary(ies)
* TT Extension Document Type(s)

Action: [GA] To construct table of features/requirements for which we need a volunteer to provide example(s). Also solicit examples based on existing systems, such as 3GPP, AML, SAMI, etc.

************************************************************************
System Model
************************************************************************

[SH] Need to describe how TT is effectively a media element that can be utilized by a SMIL (or other MM) presentation.
[GA] Views use of SMIL in TT in terms of the conceptual framework it provides for timing/synchronization and perhaps animation, but that's it.
[GA] Should draw figure showing TT as a separate track and have a SMIL excerpt that organizes tracks as a whole.
[SH] How does style and timing integrate with larger elements, e.g., TT vs SMIL?
[GA] SMIL specifies layout regions that become effectively the root container for TT presentation.
[SH] Need to document this.
[DK] TT needs to work with MXF/AAF. Need to construct example showing such use.
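As a rough illustration of the SMIL excerpt mentioned above, a minimal sketch only, not an agreed design: a SMIL 2.0 host presentation treating a TT document as one media track among others. The captions file name, its media type, and the choice of <textstream> as the host element are assumptions, not decisions of the group.

  <smil xmlns="http://www.w3.org/2001/SMIL20/Language">
    <head>
      <layout>
        <root-layout width="320" height="240"/>
        <region id="v" top="0" left="0" width="320" height="200"/>
        <!-- this region would effectively become the root container
             for the TT presentation -->
        <region id="captions" top="200" left="0" width="320" height="40"/>
      </layout>
    </head>
    <body>
      <par>
        <video src="program.mpg" region="v"/>
        <!-- hypothetical: TT document referenced as just another track -->
        <textstream src="program.tt.xml" region="captions"/>
      </par>
    </body>
  </smil>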
************************************************************************
Use Cases Review
************************************************************************

Action: [GA] Change second para of last note of S001 to change "hearing impaired" to "deaf and hard of hearing".

Action: [GA] Add note under S001 saying that some communities use the term Subtitling to encompass both uses described here (i.e., captioning and subtitling per S000 and S001).

I001: Should the name of this requirement be changed to Text Description of Video?

Resolution: Change name of S002 to "Description"; add note explaining that when description is rendered by an audio track, it is called "Audio Description". Also add note saying this is not intended to serve as metadata description.

[DS] Does S003 cover "presented timed text", i.e., where the text is primary content and must be presented, e.g., credits on a film, etc.?
[GA] Yes, if we remove the first note.
[DS] OK. Add note under S003 indicating use as "presented text", i.e., non-optional content. For example, this is like open subtitles/captions. Give examples of credits, FBI warning, "edited to fit your screen", "broadcaster is having technical difficulty", rating labels/announcements, emergency warnings, etc.

************************************************************************
General Requirements
************************************************************************

[GA] Asks [MG] to describe R111 and R112. Asks about equivalent alternative applicability.
[MG] Alternates could be a text description of embedded graphics and an audio representation of text serving as audio description.
[DK] May need alt text for text, e.g., reading levels.
[DS] Also may want to provide alternates for text that couldn't be presented, e.g., for unusual fonts or unusual characters.
[MG] This is an alternative of a different nature than addressed by WAI XML AG.
[GA] Describes possible use of the xpointer() scheme based on the XPointer Framework as a mechanism for referencing extrinsic text. Some discussion of how extrinsic text would work preceded this.

Action: [GA] Ask SVG WG to comment on TT AF REQ WD.

************************************************************************
Content Requirements
************************************************************************

R211: Do we need <br> or equivalent? What would the equivalent be for R210? What style property to use to indicate manual line breaking only?

Resolution: Remove paragraph from R209; add text or note indicating that a "role" aspect is applicable to div/phrase; add note that two possible roles are "paragraph" and "line", the first for "division", the second for "phrase"; add "forced line break" item to R210; add note indicating it may be expressed as <tt:character character="&#x000A;"/> or as something more succinct like <tt:br/>. Under R211, remove paragraph <-> block relation.

[SH] How to handle bidi controls if we follow the Unicode in XML and Other Markup Languages recommendation to not use the Unicode LRE, RLE, LRO, RLO, PDF characters?

R211: I-13: [DS] Under R211, do we want to support an intermediate mode where bidi and glyph substitution still occur post authoring?

Resolution: Support by adding the ability to express a hard line break and a no-wrap option via styles.

R217: I002: Should block level graphics be supported, e.g., to permit pre-rasterization of entire lines or blocks of lines of text to serve as alternate content? For example, some subtitling systems use pre-rasterized text represented as bitmap graphics. For example, use SVG for embedded scalable graphics and the data: URI scheme for embedded raster graphics:

  <tt:graphic src="data:image/png;version=1.2;base64,..." alt="Dave's Face"/>

See RFC2397 for the definition of the "data:" scheme. Note that this may present a problem for anything but small graphics due to the length of URI support in a UA. This need not be a problem in TT AF however.

Resolution: Support pre-rasterized text as an alternative graphic representation but require that the text form always be present in such a case.

R218: I-14: Whether to support embedded or external references to audio clips to handle aural presentation? New requirement or handle under R218?

Resolution: Support both embedded and non-embedded audio for symmetry with graphics. Can be used as an alternative representation of text and for cueing and other purposes in an authoring context.

R219: Do we want to say anything about font formats? What formats? How to embed? For embedding, use SVG or data:. Similar to graphics. Maybe in a note, offer as example the use of SVG and TrueType/OpenType. Probably want to express generically in solution space, e.g., could say that it must be either XML in another namespace or use data:, etc.

R221: Descriptive vocabulary review. Is this sufficient? Too much?

[DS] Propose to treat this as general metadata.
[GA] Thinks this is data essence in our context. It provides essential semantic context by means of which presentation and other processing choices may be derived.
[DS] Association of text with style/presentation is separate from the association of text with descriptive information. E.g.:

  <tt:style>
    .fred { color : green }
  </tt:style>
  <tt:text class="fred">...</tt:text>

vs

  <tt:style>
    speaker[role="fred"] { color : green }
    .blue { color : blue }
  </tt:style>
  <tt:cast>
    <tt:actor id="fred"/>
    <tt:actor id="barney"/>
  </tt:cast>
  <tt:speaker role="fred">
    <tt:text class="blue">...</tt:text>
    <tt:text>...</tt:text>
  </tt:speaker>

Issues:

  1. how to express independent but related hierarchical structures and to specify cross-references between structural elements;
  2. whether to define an extremely minimalist but generic mechanism for ascribing descriptive aspects.

Some Important Structural Characteristics

  * scene (scripted);
  * shot (may or may not be scripted, but accurate information is needed independently of scripting);
  * utterance: character, what is said, where is speaker (side, center, offscreen), voice (falsetto, bass);
  * sound.

Proposed Text for R221:

[DS] Should be capable of associating text with descriptive information from the appropriate domain of discourse.
[GA] When the text is a representation of speech, then this descriptive information should support denoting the speaker, the manner of speech, and the context in which the speech occurs.
[JB] Case of putting text into context.

Action: [GA] Craft new text for R221 based on above. Remove vocabulary. Leave in an informative note referring to TEI as a possible source.

R293: Propose to change to RELAX NG Compact Syntax with an alternative using XSD.

Resolution: Change to make RELAX NG the normative schema and to also possibly define an informative W3C schema that is equivalent.

Resolution: Remove R294. Can't specify a DTD if we allow non-declared element types, e.g., to graft sub-islands (sub-trees) in an extension namespace.

R295: I003: Do we want to support a set of named character entities, such as those used with HTML/XHTML (which derive from SGML named character entities), e.g., &nbsp;?

Already supported in XML:

  &gt;  '>'   e.g., <foo bar="&gt;"/>
  &lt;  '<'
  &amp; '&'

  <!DOCTYPE tt SYSTEM "mydtd.dtd" [
    <!ENTITY nbsp "&#160;">
    <!ELEMENT foo:bar EMPTY>
    <!ATTLIST foo:bar baz CDATA #IMPLIED >
    <!ENTITY % xhtmlentities SYSTEM "entities.ent">
    %xhtmlentities;
    <!ENTITY extent PUBLIC "..." "foo.bar" NDATA binary>
    <!ENTITY ident "a;lkj;alkdsf;jakldsf">
    <!NOTATION binary PUBLIC "-//W3C//NOTATION Binary Data//EN">
  ]>
  <tt:tt foo="&extent;">
    <tt:head>
      <tt:metadata>
        <foo:bar/>
      </tt:metadata>
    </tt:head>
    ...
  </tt:tt>

  element meta = { anyElement, anyAttribute }

Resolution: Remove R295. This does not preclude an author from using and defining their own general internal entities or importing them from an external file that defines them, e.g., those that are defined for XHTML.

************************************************************************
DAISY Consortium Demonstration
************************************************************************

[MG] Gives demo of Daisy Book viewer.
[GA] Asks what Daisy wants to see from TT WG.
[MG] Very interested in extrinsic text support.
[GA] Should we only support extrinsic text and leave all formatting to external documents?
[SH] Leaning towards extrinsic text as focus; should define two separate document types.
[JB] Problems with animation if you do all layout externally.
[JB] Other problem with cascading style when you want to override style/layout in an external document.
[DS] Keep simple things simple. There are certain simple cases where it is nice to have internal text.
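As a rough sketch only of the extrinsic vs. intrinsic distinction under discussion (every element name, attribute name, and the xpointer() usage below are hypothetical illustrations, not agreed vocabulary):

  <!-- extrinsic: timing lives in the TT document; the text itself is
       pulled from an external (e.g., DAISY/XHTML) source document -->
  <tt:text src="book.xml#xpointer(id('chap1')/p[3])" begin="12.5s" end="17s"/>

  <!-- intrinsic: the simple case, text carried directly -->
  <tt:text begin="12.5s" end="17s">Keep simple things simple.</tt:text>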
[*] Need to think about how to inherit style/layout from extrinsic text and yet retain the ability to override style/layout from the TT document.

Conclusion: don't eliminate intrinsic text. On the other hand, don't try to reinvent a full page layout system in intrinsic text; i.e., rely upon existing document types as much as possible for use with extrinsic text.

[JB] In TT, hope to support flowing text temporally within a display region. Possible impedance mismatch with CSS, which assumes you flow text all at once, and thus either fill the available space or overflow it.
[GA] We appear to need more machinery to handle the following cases:

  * snake mode - fill text into the area over time, filling a line with words/chars over time and filling a block with lines over time, scrolling out old lines (at the before edge of the block) to create space for new lines (at the after edge of the block) as required.
  * normal mode - fill text into the area at once, no delay between words/chars, no delay between lines, and also clear out the previous display (called pop-on mode in EIA608).
  * ticker tape mode - one line; smooth scroll old words/chars out of the start edge of the line area and new words/chars in at the end edge of the line area.

Conclusion: Need to add requirements to support these forms of temporal layout: snake, block (pop-on), ticker tape.

[SH] Suggests adding a new requirement under the heading of "Style Parameters - Temporal".
[SH] Thinks that scroll may permit reflow, e.g., in CSS if you have a float with a paragraph in a box that overflows the box, then as you scroll the content in the box, it should reflow around the float.
[GA] Hasn't seen this supported in any UA, but it is certainly possible. Doesn't know if the CSS authors anticipated this.

************************************************************************
Styling Requirements
************************************************************************

R305: Review aural style properties.

R306: Review visual style properties.

[GA] Reviews XSL FO notions of writing direction independent dimensions and directions.

R306: Do we need a property to disable automatic line breaking for both (either) logical and (or) presentation content vocabulary?

[GA] Appears to be handled already by the "wrap-option" property from XSL, called "line wrapping option" in R306.

R306: Need to have a property that has the semantic of forcing a line break after or before selected content.

Action: [GA] Investigate how to force a line break before or after an element via a style property in a context where CR/NL are being treated as whitespace for the purpose of automatic wrapping (one possible direction is sketched at the end of this section). Q: how to implement in XSL the following:

  e:before { content: "\A" }
  e:after  { content: "\A" }

when e is an inline element in a block context that performs linefeed normalization (to space) and does auto wrap?

Open Issue: Should we support any "float" related style properties since we allow embedded inline and block graphics?

[SH] Implementation record is spotty; may want to avoid for now.
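One possible direction for the line break action above, offered only as a hedged sketch that was not discussed at the meeting: XSL 1.0 already defines the "linefeed-treatment", "white-space-collapse", and "wrap-option" properties, so an explicit linefeed character can survive white-space handling when the enclosing block preserves it. How to generate such a linefeed before or after an arbitrary element purely via a style property remains the open question of the action item.

  <!-- explicit line break inside a block that preserves linefeeds -->
  <fo:block linefeed-treatment="preserve"
            white-space-collapse="false"
            wrap-option="no-wrap">First subtitle line&#x000A;Second subtitle line</fo:block>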
************************************************************************
Timing Requirements
************************************************************************

I-12: Need to define the interaction of time containers with the content containment structure, e.g., in terms of time action semantics.

[GA] Describes XSS graphics [1] showing seq, par, excl cueing behavior. Describes intrinsic time action semantics.
[SH] May be better to define all intrinsic time actions as "none", then use a sub-element, e.g., animate, set, to effect presentation change/behavior.

Maybe possible to incorporate <set> features into <cue> directly (i.e., as attributes).

EXAMPLE OF SNAKE MODE

  <style>
    span { visibility : hidden }
  </style>
  <div>
    <p id="p1">One Two Three</p>
  </div>
  <seq>
    <cue select="#xpointer(id('p1')/word(1))" dur="5s">
      <set attributeName="visibility" to="visible" fill="freeze"/>
    </cue>
    <cue select="#xpointer(id('p1')/word(2))" dur="5s">
      <set attributeName="visibility" to="visible" fill="freeze"/>
    </cue>
    <cue select="#xpointer(id('p1')/word(3))" dur="5s">
      <set attributeName="visibility" to="visible" fill="freeze"/>
    </cue>
  </seq>

N.B. Note the novel use of a new "word()" function which selects words.

Alternate format (that requires text be sub-divided into segments, each of which is to be atomically selected for application of timing):

  <tt>
    <head>
      <style>
        span { visibility : hidden }
      </style>
      <smil id="t1">
        <head>
          <defs>
            <set id="a1" attributeName="visibility" to="visible" fill="freeze"/>
            <set id="a2" attributeName="color" to="red"/>
          </defs>
        </head>
        <par>
          <seq>
            <cue select="#w1" dur="5s" animation="a1"/>
            <cue select="#w2" dur="5s" animation="a1"/>
            <cue select="#w3" dur="5s" animation="a1"/>
          </seq>
          <seq begin="2.5s">
            <cue select="#w1" dur="5s" animation="a2"/>
            <cue select="#w2" dur="5s" animation="a2"/>
            <cue select="#w3" dur="5s" animation="a2"/>
          </seq>
        </par>
      </smil>
    </head>
    <body>
      <div>
        <p>
          <span id="w1">One</span>
          <span id="w2">Two</span>
          <span id="w3">Three</span>
        </p>
      </div>
    </body>
  </tt>

CONSTRUCTED 3GPP EXAMPLE

  <p style="position:absolute;top:100px;left:100px;width:100px;height:100px">
    <seq>
      <t dur="2.5s">One</t>
      <t dur="2.5s" style="color">One</t>
      <t dur="2.5s">
        <t style="red">One</t>
        <t>Two</t>
      </t>
      <t dur="2.5s">
        <t>One</t>
        <t style="red">Two</t>
      </t>
      <t dur="2.5s">
        <t>One</t>
        <t style="red">Two</t>
        <t>Three</t>
      </t>
      <t dur="2.5s">
        <t>One</t>
        <t>Two</t>
        <t style="red">Three</t>
      </t>
    </seq>
  </p>

Could not be done by XSLT without major brain surgery.

Resolution: Doesn't need a change to the requirements document; this is in the spec process. Review as the spec is written.

Open Issue: Whether to enhance R403 to include repeat, restart, fill, and endsync parameters?

Action: [MG] Investigate need to add endsync parameter to R403.

Action: [TM] Investigate need to add repeat, restart, fill parameters to R403.

************************************************************************
Animation Requirements
************************************************************************

R501: Review scroll animation.

R502: Review highlight animation.

R503: Review fade (transition) animation (a sketch follows at the end of this section).

Action: [GA] Remove editorial notes under R501-R503 without any further change. These notes are part of the process of defining the solution space.

N506: I004: Need to determine if we want to support animation that would trigger reformatting (reflow). If we do, then we would want to allow animation of all style parameters.

Action: [GA] Add display style property to R505, and add note that animation of some properties requires reflow (relayout) of content, e.g., the display property.

Action: [GA] Close issue I004; resolution: permit animation of styles that cause reflow.

Action: [GA] Add note to N506 explaining that "animation of content" means changing element content via animation, e.g., adding/changing text() or element children nodes.

Action: [GA] Add note under introduction: "Don't Panic! You may not have to deal with complicated feature X because we plan to profile via document types." [Sean's suggestion.]
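To make the fade (transition) animation of R503 concrete, a minimal sketch reusing the hypothetical <cue>/<set>/<animate> idiom from the timing examples above; the "opacity" property and all element names are assumptions, not agreed vocabulary:

  <p id="p2">Fade me in</p>
  ...
  <cue select="#p2" begin="10s" dur="5s">
    <!-- fade the subtitle in over the first second of its cue -->
    <animate attributeName="opacity" from="0" to="1" dur="1s" fill="freeze"/>
  </cue>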
************************************************************************
Metadata Requirements
************************************************************************

R600: Need to add a note explaining that "arbitrary metadata" means metadata items defined in any dictionary, etc.

R606: Review proposed core metadata items.

Resolution: Remove R602, R606-R608 on the grounds that this is too specific for the requirements document, which has the intention of supporting arbitrary metadata.

Resolution: Do not define our own metadata items unless absolutely necessary, and then only after consulting with appropriate metadata dictionary publishers.

R606: I005: The rights metadata item defined by [DCMES 1.1] has not been included here, pending further consideration of whether and what intellectual property rights management (IPRM) related metadata to explicitly support in the TT AF.

Action: [GA] I005 is closed by virtue of the removal of R606.

R607: Review proposed core additional metadata items.

R607: I006: The accessRights metadata item defined by [DCMES 1.1] has not been included here, pending further consideration of whether and what intellectual property rights management (IPRM) related metadata to explicitly support in the TT AF.

Action: [GA] I006 is closed by virtue of the removal of R607.

R607: Review media related metadata items. Need to either give some definition to, or reference a document that defines, the above media related metadata items. Perhaps MPEG-7 or the SMPTE Metadata Dictionaries?

************************************************************************
Next Meeting Schedule
************************************************************************

Oct 1-2, 2003, Redmond, WA (Microsoft)

************************************************************************
Where To Go From Here
************************************************************************

Action: [DS] Will contribute a 3GPP Timed Text example.

Action: [SH] Will contribute SAMI examples.

Action: [DK] Will contribute spec on AML and example. Will check on guidelines document.

Action: [SH] Will cook AML example as SAMI as a variant example.

Action: [MG] Will contribute Daisy timed text example.

Action: [GA] Will contribute some examples showing generic timed text.

Action: [JB] Will check if he can contribute examples.

Action: [TM] Will contribute generic example.

Milestones:

  Jun 15
  Jul 15  Spec Outline
  Aug 15
  Sep 15  1st Internal WD
  Oct 15  1st Public WD
  Nov 15
  Dec 15  Last Call WD (will probably slip - but don't know yet)

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
START SUMMARY
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

*** RESOLUTIONS ***

Resolution: Wait until Aug 15 to publish next WD or as Note. Group may discuss and decide either way over the next few months. Create internal WG draft with changes for review.

Resolution: Change name of S002 to "Description"; add note explaining that when description is rendered by an audio track, it is called "Audio Description". Also add note saying this is not intended to serve as metadata description.
Resolution: Remove paragraph from R209; add text or note indicating that a "role" aspect is applicable to div/phrase; add note that two possible roles are "paragraph" and "line", the first for "division", the second for "phrase"; add "forced line break" item to R210; add note indicating it may be expressed as <tt:character character="&#x000A;"/> or as something more succinct like <tt:br/>.

Resolution: Support an intermediate mode between full flow with line breaking and non-flow by adding the ability to express a hard line break and a no-wrap option via styles. This permits line break decisions to be made at authoring time, but char to glyph mapping and bidi post-authoring.

Resolution: Support pre-rasterized text as an alternative graphic representation but require that the text form always be present in such a case.

Resolution: Support both embedded and non-embedded audio for symmetry with graphics. Can be used as an alternative representation of text and for cueing and other purposes in an authoring context.

Resolution: Remove R294. Can't specify a DTD if we allow non-declared element types, e.g., to graft sub-islands (sub-trees) in an extension namespace.

Resolution: Remove R295. This does not preclude an author from using and defining their own general internal entities or importing them from an external file that defines them, e.g., those that are defined for XHTML.

Resolution: Doesn't need a change to the requirements document; this is in the spec process. Review as the spec is written.

Resolution: Remove R602, R606-R608 on the grounds that this is too specific for the requirements document, which has the intention of supporting arbitrary metadata.

Resolution: Do not define our own metadata items unless absolutely necessary, and then only after consulting with appropriate metadata dictionary publishers.

*** WORKING ASSUMPTIONS ***

*** OPEN ACTION ITEMS ***

Action: [GA] To construct table of features/requirements for which we need a volunteer to provide example(s). Also solicit examples based on existing systems, such as 3GPP, AML, SAMI, etc.

Action: [GA] Change second para of last note of S001 to change "hearing impaired" to "deaf and hard of hearing".

Action: [GA] Add note under S001 saying that some communities use the term Subtitling to encompass both uses described here (i.e., captioning and subtitling per S000 and S001).

Action: [GA] Ask SVG WG to comment on TT AF REQ WD.

Action: [GA] Craft new text for R221 based on above. Remove vocabulary. Leave in an informative note referring to TEI as a possible source.

Action: [GA] Investigate how to force a line break before or after an element via a style property in a context where CR/NL are being treated as whitespace for the purpose of automatic wrapping. Q: how to implement in XSL the following:

  e:before { content: "\A" }
  e:after  { content: "\A" }

when e is an inline element in a block context that performs linefeed normalization (to space) and does auto wrap?

Action: [MG] Investigate need to add endsync parameter to R403.

Action: [TM] Investigate need to add repeat, restart, fill parameters to R403.

Action: [GA] Remove editorial notes under R501-R503 without any further change. These notes are part of the process of defining the solution space.

Action: [GA] Add display style property to R505, and add note that animation of some properties requires reflow (relayout) of content, e.g., the display property.

Action: [GA] Close issue I004; resolution: permit animation of styles that cause reflow.
Action: [GA] Add note to N506 explaining that "animation of content" means changing element content via animation, e.g., adding/changing text() or element children nodes.

Action: [GA] Add note under introduction: "Don't Panic! You may not have to deal with complicated feature X because we plan to profile via document types." [Sean's suggestion.]

Action: [GA] I005 is closed by virtue of the removal of R606.

Action: [GA] I006 is closed by virtue of the removal of R607.

Action: [DS] Will contribute a 3GPP Timed Text example.

Action: [SH] Will contribute SAMI examples.

Action: [DK] Will contribute spec on AML and example. Will check on guidelines document.

Action: [SH] Will cook AML example as SAMI as a variant example.

Action: [MG] Will contribute Daisy timed text example.

Action: [GA] Will contribute some examples showing generic timed text.

Action: [JB] Will check if he can contribute examples.

Action: [TM] Will contribute generic example.

*** OPEN ISSUES ***

Issue (I0306-001): Should we support any "float" related style properties since we allow embedded inline and block graphics?

Issue (I0306-002): Whether to enhance R403 to include repeat, restart, fill, and endsync parameters?

*** URLs ***

[1] http://www.w3.org/MarkUp/Group/2003/xss-1-0/
[2] http://www.w3.org/AudioVideo/Group/XMMF/XMMF.html

*** NEXT MEETING DATES ***

Oct 1-2, 2003, Redmond, WA (Microsoft)

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
END SUMMARY
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Thierry MICHEL
W3C/ERCIM