Re: TTML Agenda for 15/05/13 - Proposed updates to charter

On Thu, Jun 6, 2013 at 9:24 PM, Sean Hayes <Sean.Hayes@microsoft.com> wrote:
>>I may not fully understand what you are trying to achieve, so bear with me. What I read (and I may be wrong) is that you want WebVTT to map >to WebVTT objects ("WebVTT Node objects, see http://dev.w3.org/html5/webvtt/#webvtt-cue-text-parsing-rules), and TTML to map to TTML >objects, then these objects to map to some abstract object model before mapping that abstract object model to HTML objects for rendering?
>
> No. What I am suggesting is modifying the specifications to define WebVTT to map to TBDO objects and define TTML to map to TBDO objects. Where TBDO is the to be decided object model, as I point out the internal object model of both formats are simple enough that designing TBDO is pretty trivial, although it does require a willingness on both sides to change their specs. If that basic spirit of cooperation is not present then we might as well forget the entire enterprise.
>
>> I would keep this exercise separated from the WebVTT, the TTML, and the HTML spec and not require implementation. It's mostly interesting for conversions
>
> I believe this is in fact a perfectly viable approach for implementation for reasons I can't discuss on a public mailing list.
>
>>BTW: have you thought about that you could just define one of the two to be the abstract object model and map the other one and any other format to it?
>
> Yes I believe the XML Infoset would be the better more established choice, however I realize that this would set off the anti-XML knee-jerk reaction, so I'm not necessarily wedded to that idea.
>
>>All browsers that implemented more than the basic text support for WebVTT implemented creation of WebVTT Node objects as specified in
>>the WebVTT spec, see http://dev.w3.org/html5/webvtt/#webvtt-cue-text-parsing-rules . Those node objects are being mapped to HTML
>>DOM nodes in http://dev.w3.org/html5/webvtt/#webvtt-cue-text-dom-construction-rules
>
> And that is fine, TBDO is mostly just a naming exercise, since the objects in WebVTT are not really much more than names anyway, any  implementations wouldn't have to change necessarily.


I'd be curious to understand what "renaming" would entail. Your
reference to XML Infoset doesn't seem like a mere renaming exercise.
But it seems like I still don't follow what you are trying to achieve.


>>Does TTML provide an explicit rendering algorithm? As I understand it, TTML relied on XSL-FO for rendering... yes, I just found this quote:
>>"For each resulting document instance F, if processing requires presentation on a visual medium, then apply formatting and rendering semantics
>>consistent with that prescribed by [XSL 1.1]."
>
> The term *consistent with* here means that you are free to implement as you will, provided you produce visible results that look like those produced by the reference implementation. And in point of fact CSS, for the requirements of TTML, is indeed consistent with XSL-FO in that sense (since XSL-FO references CSS pretty much for the parts we rely on, except for a few details caused by CSS3 not remaining stable which we are cleaning up). The HTML5/CSS mapping will therefore define the reference rendering for CSS.
>
>>The rendering section of the WebVTT spec is quite complicated and uses many of the specifics of WebVTT cue settings and custom
>> algorithms to avoid cue overlap etc.
>
> Yes, I believe this is the biggest impediment to progress. I think not only are these rules complicated, they are in fact ambiguous to the point of non-interoperability, and possibly containing circular dependencies.

The algorithm is clearly stated and if there are ambiguities, then
they are either bugs in the spec or misunderstandings by the
implementer. Since every step of the algorithm is provided, there
should be no non-interoperable implementations.


> The proposed region additions also seem to not fit well with them at all.

What makes you think so? The spec for regions has been implemented in
blink (and I believe in webkit) with little issues.


> Personally I think it would be much better if the non-overlap constraint was moved into the document conformance, like the timing constraints are and simply rely on CSS with no alterations.

CSS does not do overlap avoidance for explicitly placed blocks of text.


> CSS is at this point a sufficiently general rendering technology that cue settings should be capable of being mapped into un-transformed CSS.

Captions have some specific requirements that CSS is not satisfying
yet. In particular there is a quality captions requirement about
balancing multiline captions for which CSS has no answer. There are
discussions in the CSS WG to come up with a solution, but until then
WebVTT needed to define its own.


> I do find the definition of :past and :future troubling however, given the implications of how often they could cause the CSS engine to run. I would like to see if these could be mapped to CSS animation.

That's an implementation quality issue - the fundamental issue of
changing the format of sections of text is the same, so should be able
to be dealt with in a browser in the same way, no matter if it comes
through animations or pseudo-selectors.


>> I'd leave it to the market to create lossless conversion tools and support them. I wouldn't expect authors to do this by hand.
>
> Given the above, while a good approximation is feasible, I don't think truly lossless is actually possible. Certainly not without a better reference implementation of the WebVTT rendering algorithms.

Have you got proof for that? I thought part of the activity as in the
new charter is actually about identifying how good a conversion can
be. Also, I don't see the relevance of the implementation of the
WebVTT rendering algorithm.


>>Well, I would not want to restrict the development of one format by the feature set available to other formats, or to the object model.
>>You wouldn't want to stop adding features to TTML just because these features are not available in VTT yet and therefore not specified in the
>  >common object model.
>
> Actually I would. The caption using public has suffered for decades because of the continual need to translate from one format to another, which leads to increased costs, delays and errors; which ultimately adds up to a great deal of non-captioned content.  We had a moment in time where it might have been possible to fix that, however for reasons I'm not particularly interested in rehashing we failed to do so. However we may have another opportunity to at least mitigate against it now.

This fails to recognize that both TTML and VTT may not just be used
for captions, but for other things, too. We can't realistically
restrict new features in either format to those that are available in
the "common object model" (whatever that may be).


> I believe that what the caption and subtitle industry, and more importantly the users that are Deaf or hard of hearing, most urgently need is a single lingua-franca; and we are not serving them well if we don't at least try to merge these efforts. To the extent that we have two formats at all, VTT and TTML should be effectively two syntaxes for the same thing, where inter-conversion is a trivial rewrite. If new features are desirable, then they should be desirable, and usable for all formats.

OK, this requires us to start with analysing what the differences are.
I believe you've started that effort and I'm curious to see what you
have found out.

Best Regards,
Silvia.

Received on Thursday, 6 June 2013 23:40:55 UTC