Re: Draft TTML Codecs Registry - Issue-305

On Mon, May 19, 2014 at 9:48 PM, John Birch <John.Birch@screensystems.tv> wrote:

>  Again CIL… (green)
>
>
>
> Best regards,
>
> John
>
>
>
>
> *John Birch | Strategic Partnerships Manager | Screen*
> Main Line : +44 1473 831700 | Ext : 2208 | Direct Dial : +44 1473 834532
> Mobile : +44 7919 558380 | Fax : +44 1473 830078
> John.Birch@screensystems.tv | www.screensystems.tv |
> https://twitter.com/screensystems
>
>
> *Visit us at Broadcast Asia, Marina Bay Sands, Singapore 17-20 June, Stand
> 5E4-01*
>
> *P** Before printing, think about the environment*
>
> *From:* Glenn Adams [mailto:glenn@skynav.com]
> *Sent:* 19 May 2014 12:49
> *To:* John Birch
> *Cc:* Nigel Megitt; mdolan@newtbt.com; public-tt@w3.org
> *Subject:* Re: Draft TTML Codecs Registry - Issue-305
>
>
>
>
>
> On Mon, May 19, 2014 at 8:34 PM, John Birch <John.Birch@screensystems.tv>
> wrote:
>
> Hi Glenn,
>
>
>
> Thanks enormously for the clarifications…
>
> W.R.T. “This is not quite what is meant by a processor profile in TTML.
> In particular, it does not signal what features are used, it signals which
> features must be implemented by a processor, which may be more or less than
> what is used by a document.”
>
>
>
> I am not sure of the usefulness of “signals which features must be
> implemented by a processor, which may be more or less than what is used by
> a document”.
>
>
>
> As a content author, I might decide that I want processors of document X,
> Y, and Z to support the union of features used by X, Y, and Z; or,
> alternatively, I might be satisfied with processors that support the
> intersection of features used by X, Y, and Z. In the former case, I may end
> up specifying requirements that exceed what is used in any of X, Y, or Z
> taken by themselves; in the latter case, I may end up specifying
> requirements that don't include all features used in X, Y, or Z taken
> together. It is my choice as a content author to determine what I want and
> what I will minimally accept in a processor.
>
> Except that the content author usually has no choice in what processor is
> used.
>

The choice they have is in deciding on the minimum features needed by a
processor, without which their document will not be presented.


> You are implying that there will be a wide range of variability in content
> requirements, and that authors may choose between elaborate requirement
> sets or lesser ones…
>

I'm not discussing content requirements. You and others keep bringing up the
subject of content profiles (content requirements), but I am not; more
accurately, I assign any discussion of content profiles a very low priority
in this discussion. I don't mind discussing them, but I don't want to do so
while we are talking primarily about processor profiles.


> but this is ignoring the clear signal coming from most groups attempting
> to use TTML standards, and certainly ignoring the demands made from content
> publishers. What is being asked for is clear statements about how to write
> a document, and what a processor should do…
>

Those are two distinct asks: (1) how to write a document, and (2) what a
processor should do. Could you attempt to separate these concepts?


> there is no emphasis on being able to define in a document what that
> document needs to be decoded…
>

That is a mistake, and indicates to me that folks don't really know what
they want. Further, it fails to recognize that this is what is defined in
TTML1 today. It is possible that you (and perhaps others) have never
understood TTML1's profile mechanism.
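
As a concrete reference point, TTML1's mechanism lets a document signal the
minimum processor feature set it requires, either by pointing at a
pre-defined profile designator or by embedding a profile definition in the
document head. A minimal sketch (the #opacity requirement is illustrative
only):

```xml
<!-- Form 1: reference a pre-defined processor profile by its designator -->
<tt xmlns="http://www.w3.org/ns/ttml"
    xmlns:ttp="http://www.w3.org/ns/ttml#parameter"
    ttp:profile="http://www.w3.org/ns/ttml/profile/dfxp-presentation">
  <!-- ... -->
</tt>

<!-- Form 2: embed the profile definition in the document head -->
<tt xmlns="http://www.w3.org/ns/ttml"
    xmlns:ttp="http://www.w3.org/ns/ttml#parameter">
  <head>
    <ttp:profile use="http://www.w3.org/ns/ttml/profile/dfxp-presentation">
      <ttp:features xml:base="http://www.w3.org/ns/ttml/feature/">
        <!-- a processor must support tts:opacity to present this document -->
        <ttp:feature value="required">#opacity</ttp:feature>
      </ttp:features>
    </ttp:profile>
  </head>
</tt>
```

Either form expresses processor requirements (what an implementation must
support), not a description of which features the document actually uses.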


> with the implications that different document instances have different
> needs. It’s a great ideal, but a practical unlikelihood. A given content
> publisher will publish all his content to single specific document
> ‘standard’. He will ideally like to publish to a standard that everybody
> else uses too. Further he will expect that processing decoder implementers
> will develop presentation mechanisms that decode that standard. That is
> what is being asked for… at least that’s all I’m hearing J
>

ok, but I hope you realize that is just one possible characterization; you
could have stated it differently: that decoders will support one or more
specific feature sets, and that for each feature set a particular set of
constraints applies when authoring content; that is, one may be able to infer
a content profile from a processor profile.


>
>
>  What is the point in signalling features within a document instance that
> are not used by that document instance?
>
>
>
> There is an overhead to defining a processor profile. Some simple
> processors may only recognize a pre-defined set of processor profile
> designators, and not support parsing an inlined (embedded) profile
> definition that is closely associated with a specific document instance.
>
>
>
>
>
>  Unless the distinction you are making is to do with feature sets… i.e.
> the document instance uses just some part of a more complete set of
> features identified by a single specific feature tag?
>
>
>
> Labeled pre-defined processor profiles provide a way for processor
> implementors to support a specific feature set, which may exceed the
> features used by any given document.
>
>
>
>
>
> And I am further unsure as to the benefits in placing the processor
> requirements within the document… rather than associating / containing them
> within a content profile.
>
>
>
> Processor constraints are unrelated to content constraints. A profile that
> intentionally or inadvertently mixes the two types of constraints reflects
> a failure to understand the use and necessity of the different types of
> constraints.
>
>
>
>  I admit to finding it difficult to distance content profiles from
> processing profiles, although I do of course understand that the scope of
> feature tags in each domain is different. *E.g. content tags X, Y and Z
> may all require processing feature J. But processing feature J does not
> imply that X, Y and Z appear in all documents that need feature J. *
>
>
>
> For me, it is enough to be able to declare that Ecosystem Z requires
> feature J and documents may contain X, Y and Z.
>
>
>
> I think this is because you have been accustomed to working in the A/V
> world of specs, where there is only one kind of profile and it co-defines
> both encoding requirements (i.e., content format requirements) and decoding
> requirements (i.e., content decode/processing requirements).
>
>
>
> The world of markup, where many content features are optional, and where
> one cannot assume that a processor supports all content features, is very
> different, and requires treating the two separately. One needs content
> profiles for validation, but needs processor profiles for guaranteed
> decoder/processing interoperability.
>
>
>
> To be accepted by the world of AV, TTML must co-exist with AV
> expectations. The AV world will not easily take on-board web concepts
> simply to support an adjunct service like Timed Text. I believe that (to
> remain relevant) TTML must bend to AV, not the other way around. To do
> otherwise will create a schism between TTML1 derived specifications and
> TTML2.
>

And what do you think is a barrier to co-existence? What do you think will
cause "a schism between TTML1 derived specs and TTML2"?

I'm just surprised at your overall reactions; it is almost as if you never
understood the original TTML1 profiles, or chose to interpret them in a way
that was not intended.


>
>
> Nigel makes the point that the discussed approach allows migration into
> ISO workflows, by carrying the processing profile information within the
> documents… however, I do not believe that most creation workflows will
> generate documents that are this well described.
>
>
>
> That's a potential issue, but it is somewhat orthogonal to defining a way
> to define and use standard or private profiles.
>
>
>
>  I find it more likely that such additional metadata about the document
> content and processing requirements will be added to content when it is
> migrated, and as such may need to be added by *automated examination of
> the document contents*.
>
>
>
> That's possible. However, note that as defined, TTML1 specifies that
> either a ttp:profile attribute *or* a ttp:profile element *should* be
> specified in a document instance. The TTV tool will warn if both of these
> are missing.
>
>
>
>
>
> Best regards,
>
> John
>
>
>
>
>
> *From:* Glenn Adams [mailto:glenn@skynav.com]
> *Sent:* 19 May 2014 12:16
> *To:* John Birch
> *Cc:* Nigel Megitt; mdolan@newtbt.com; public-tt@w3.org
>
>
> *Subject:* Re: Draft TTML Codecs Registry - Issue-305
>
>
>
>
>
>
>
> On Mon, May 19, 2014 at 6:47 PM, John Birch <John.Birch@screensystems.tv>
> wrote:
>
> Thanks, understood…
>
> However, I believe most of my comments still stand in that respect.
>
>
>
> Are these agreed requirements and appropriate semantics?
>
>
>
> 1. A need to signal the constraints / features used in a set of
> documents (content profile). Used in a specification.
>
> I would agree there are use cases for defining a content profile. A
> specification that purports to define a content conformance regime should
> define one or more content profiles.
>
>
>
>
>
> 2. A need to signal conformance to a content profile. Used in a
> document instance.*
>
>  I would agree there are use cases for signaling that a document adheres
> to one or more content profiles, e.g., for validation processing, for
> maintaining invariants in downstream transformations, etc.
>
>
>
> 3. A need to define the constraints / features used within a
> document instance (processor profile).
>
>  This is not quite what is meant by a processor profile in TTML. In
> particular, it does not signal what features are used, it signals which
> features must be implemented by a processor, which may be more or less than
> what is used by a document.
>
>  Used in a document instance and by a processor (and perhaps to describe
> a processor?).
>
>  Both use cases are legitimate: (1) signaling what features must be
> supported in order to process a document, and (2) serving as a convenient
> label (description) of a specific subset of features supported by a
> processor.
>
>
>
>
>
> *A generic processor profile can be inferred from a specified content
> profile (assuming versioning is taken care of), but such an inferred
> processing profile may include constraints/features that have not been used
> in the specific document instance.
>
>
>
> There is more than one reasonable mapping from a content profile to a
> processor profile, so some mapping will have to be selected as a default
> and others will have to be specified within a document instance.
> Alternatively, the document instance can explicitly signal the processor
> profile, removing the need to infer it. Indeed, in TTML1, one can only
> signal a processor profile, and cannot signal a content profile.
>
>
>
>
>
> For me, 1 and 2 are crucial… I can live with the inefficiencies of not
> having 3.
>
>
>
> In TTML1, 3 is all we have. TTML2 will add 1 and 2.
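
A rough sketch of what that separation could look like once added; the
ttp:contentProfiles / ttp:processorProfiles attribute names follow TTML2
draft vocabulary and should be treated as assumptions here, and both
designators are invented examples:

```xml
<!-- Hypothetical TTML2-style root element: conformance to a content
     profile (items 1 and 2) signalled separately from processor
     requirements (item 3) -->
<tt xmlns="http://www.w3.org/ns/ttml"
    xmlns:ttp="http://www.w3.org/ns/ttml#parameter"
    ttp:contentProfiles="http://example.org/profiles/my-content-profile"
    ttp:processorProfiles="http://example.org/profiles/my-processor-profile">
  <!-- ... -->
</tt>
```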
>
>
>
>
>
> Best regards,
>
> John
>
>
>
>
>
>
>
> *From:* Nigel Megitt [mailto:nigel.megitt@bbc.co.uk]
> *Sent:* 19 May 2014 10:36
> *To:* John Birch; Glenn Adams
> *Cc:* mdolan@newtbt.com; public-tt@w3.org
>
>
> *Subject:* Re: Draft TTML Codecs Registry - Issue-305
>
>
>
> Hi John,
>
>
>
> The (now ancient) history of this thread is that organisations creating
> 'deliberately constrained patterns' want to be able to signal within the
> unconstrained world of ISO 14496 what their constraints are, rather than
> leaving the detail out and hoping that mechanisms completely out of band
> are used to ensure successful processing. By signalling more precisely it
> is more likely that the documents/streams that are generated can safely be
> passed to other receiving systems and generate sensible outcomes for the
> audience. These more generalised decoders do need to make a decision about
> ability to process.
>
>
>
> Kind regards,
>
>
>
> Nigel
>
>
>
>
>
> On 19/05/2014 10:24, "John Birch" <John.Birch@screensystems.tv> wrote:
>
>
>
>  Hi Glenn,
>
>
>
> For me Validation is the highest priority. Current workflows for audio /
> visual presentations, regardless of delivery mechanism, all rely upon
> validation of the content.
>
> *The ground rule is that all the content is validated and therefore will
> work on a mandatorily compliant processor / decoder that must include the
> features required for the content profile … *
>
>
>
> There is no scope (or concept) of the decoding processor needing to make
> decisions about whether it can process the content, or what it should do
> with parts of the content it does not understand.
>
> These concepts are superfluous non-issues when a) all content is validated
> to an agreed specification… b) all processors / decoders are expected to
> handle all features of that specification, c) any intentional graceful
> degradation of capabilities (features) is already built into the
> specification.
>
>
>
> Thus, for me the two crucial priorities are a mechanism to allow effective
> validation and a mechanism to allow effective specification (by sub-setting
> of TTML features and the usage of those features).
>
>
>
> I appreciate that you are tackling the generic case of an ‘unknown
> provenance content’ being decoded by a ‘processor with unknown (to the
> author) capabilities’. However, deployed standards derived from TTML
> operate in deliberately constrained patterns, patterns that intentionally
> reduce the potential for mismatches between content creation and content
> presentation.
>
>
>
> RE: Now, if I author content using tts:opacity in a way that conforms to
> a content profile, then that says nothing whatsoever about whether a
> presentation processor will actually implement and use that opacity when
> presenting content. This is why a processor profile declaration is
> important. It allows me to say: only present this on a processor that
> understands and uses opacity. This is not a specious use case.
>
>
>
> I cannot imagine a situation (in deployed AV broadcast workflows using
> TTML derived standards) where a content author would actually use a feature
> that might not be present in all the targeted decoders… the need for
> ‘content to present’ would pretty much dictate the use of the lowest common
> denominator of features that the author has experienced as working in the
> majority of decoders. This has been proven in practice. So I think the
> above is only half the use case; the second part of this use case would be
> to provide a mechanism for alternative behaviour.
>
>
>
> I.e., should an author deliberately wish to use advanced features, I
> believe they would do so only if there was a clear ‘fall-back strategy’;
> this would either be implemented as part of the specification… e.g.
> “opacity, if not supported, should be presented as fully opaque”, or by
> putting a switch in the content along the lines of “if processor supports
> opacity then X, else Y”. A classic example of a fall-back strategy
> deployed in previous timed text implementations is the Teletext mechanism
> for extending the character set of basic Teletext to include accented
> characters. In this mechanism a basic level ‘base character’ is
> transmitted, and then the extended accent character is transmitted… the
> extended characters have an intrinsic backspace, so advanced decoders
> overwrite the base character, while basic decoders ignore the extended
> characters and so continue to display the base character.
>
>
>
> RE: Furthermore, if a document that uses opacity does not declare (or
> otherwise signal) a processor profile, but only a content profile, then a
> process that infers the former from the latter is ambiguous (without
> further directives) since it is not a logical conclusion that support for
> opacity must be present in a processor when the use of opacity in content
> is optional. This is where I suggested in a previous thread the need for
> something like:
>
>
>
> I see no strong use case for supporting a situation where a content
> profile defines a feature as optional and processors are then free to
> either support that feature or not. This would result in a situation where
> some content that conforms to the content profile would display correctly
> (by which I mean as intended) on some compliant processors, but not display
> correctly on other processors that could still be called compliant. That is
> not a route to ‘interoperability of content’. So I contend that it is
> ‘logical’ to conclude as a practical implementer that if a feature is
> optional in content I should support it in my processor, since ‘logically’
> I may be expected to decode content that may include that feature.
>
>
>
> I can however see some small utility in being able to say ‘this feature is
> not used in this document instance’ – since that might allow a processor to
> optimise itself for processing that document instance, (e.g. by not loading
> heavy libraries of code etc.). However, I am unconvinced that most
> processor implementations would dynamically adapt to each document instance.
>
>
>
> RE my closing comment “strategy for TTML2 that requires general web
> access, etc.” – what I meant is that any new capabilities in TTML2 that are
> defined to address these issues need to be non-referential. I.e. it must be
> possible to have content that can be validated independently of the
> Internet, and processors that can determine the requirements to decode any
> content without access to the Internet. Further, the process of determining
> processing requirements should be lightweight, since the processors are in
> many deployed cases likely to be operating in constrained environments.
>
>
>
> With best regards,
>
> John
>
>
>
>
>
>
>
> *From:* Glenn Adams [mailto:glenn@skynav.com <glenn@skynav.com>]
> *Sent:* 18 May 2014 22:46
> *To:* John Birch
> *Cc:* nigel.megitt@bbc.co.uk; mdolan@newtbt.com; public-tt@w3.org
> *Subject:* Re: Draft TTML Codecs Registry - Issue-305
>
>
>
>
>
> On Sun, May 18, 2014 at 5:39 PM, John Birch <John.Birch@screensystems.tv>
> wrote:
>
> An extremely important context for content profile is for an 'application'
> (validator) to be able to determine if any given document conforms to a
> specific profile. Note this is not necessarily the same as that application
> being able to decode or present the document.
>
>
>
> I've already stated that validation is a use case for content profile.
> However, we have yet to specify validation semantics for a TTML Content
> Processor, though it is on my list to add a @validation property.
>
>
>
>
> In fact the 'processor A can determine if it can decode document X' debate
> is somewhat specious, since (at least in the context of most current TTML
> derived specifications) most processors should be able to safely assume
> that the documents they are 'asked' to decode conform to a specific
> 'standard', having been passed through a validation step before being
> presented.
>
>
>
> The ability to decode is not as important as the ability to process the
> explicit/implied semantics of a particular feature/extension. All TTML
> processors must be able to instantiate an abstract TTML document instance,
> so, in that sense, every processor can decode any TTML profile. It is
> whether they do something with what they decode that is relevant.
>
>
>
> It is not specious to state that a content author may wish to ensure that
> a processor will support (respect) the semantics of some feature. For
> example, take the #opacity feature, as denoted by the tts:opacity style
> attribute. This is an optional feature in all content profiles defined thus
> far. Optional in the sense that it may be used but need not be used.
>
>
>
> Now, if I author content using tts:opacity in a way that conforms to a
> content profile, then that says nothing whatsoever about whether a
> presentation processor will actually implement and use that opacity when
> presenting content. This is why a processor profile declaration is
> important. It allows me to say: only present this on a processor that
> understands and uses opacity. This is not a specious use case.
>
>
>
> Furthermore, if a document that uses opacity does not declare (or
> otherwise signal) a processor profile, but only a content profile, then a
> process that infers the former from the latter is ambiguous (without
> further directives) since it is not a logical conclusion that support for
> opacity must be present in a processor when the use of opacity in content
> is optional. This is where I suggested in a previous thread the need for
> something like:
>
>
>
> ttp:inferProcessorProfileMethod = (loose|strict) : loose
>
>
>
> where loose maps optional content features to optional support in
> processor profile, and strict maps optional in content to required in
> processor. The fact that such a directive is required demonstrates that
> content profiles and processor profiles are different and must be treated
> so.
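
As used on a document's root element, the proposed directive (a suggestion
in this thread, not part of TTML1) might look like:

```xml
<!-- Hypothetical: request the strict mapping, so that features that are
     optional in the content profile become required processor support -->
<tt xmlns="http://www.w3.org/ns/ttml"
    xmlns:ttp="http://www.w3.org/ns/ttml#parameter"
    ttp:inferProcessorProfileMethod="strict">
  <!-- ... -->
</tt>
```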
>
>
>
>
>
>
>
>
> Why? Because typical current TTML decoders are operating in constrained
> environments: usually restricted access to the web, speed expectations for
> decoding, and limited CPU and memory. IMHO a strategy for TTML2 that
> requires general web access, or anticipates building upon extensive web
> infrastructures and code bases, will not resolve the issues faced by
> current TTML implementations in Smart TV, disc/PVR players or set-top
> boxes.
>
>
>
> I'm not sure where this latter sentence is coming from. What do you refer
> to when you say "strategy for TTML2 that requires general web access, etc."?
>
>
>
>
> Best regards,
> John
>
>
>
> *From*: Glenn Adams [mailto:glenn@skynav.com]
>
> *Sent*: Sunday, May 18, 2014 02:39 AM GMT Standard Time
> *To*: Nigel Megitt <nigel.megitt@bbc.co.uk>
> *Cc*: Michael Dolan <mdolan@newtbt.com>; TTWG <public-tt@w3.org>
> *Subject*: Re: Draft TTML Codecs Registry - Issue-305
>
>
>
>
> On Fri, May 16, 2014 at 2:51 AM, Nigel Megitt <nigel.megitt@bbc.co.uk>
> wrote:
>
> On 15/05/2014 23:45, "Glenn Adams" <glenn@skynav.com> wrote:
>
>
>
>   Could you cite the exact documents/sections that you are referring to
> by "quoted text"?
>
>
>
> I was referring to the text from ISO/IEC 14496-12, AMD2 that Mike included
> in his email.
>
>
>
> I assume you refer to:
>
>
>
> From 14496-12, AMD2:
>
> namespace is a null-terminated field consisting of a space-separated
> list, in UTF-8 characters, of
> one or more XML namespaces to which the sample documents conform. When
> used for metadata,
> this is needed for identifying its type, e.g. gBSD or AQoS [MPEG-21-7] and
> for decoding using XML
> aware encoding mechanisms such as BiM.
>
> schema_location is an optional null-terminated field consisting of a
> space-separated list, in UTF-8 characters, of zero or more URL’s for XML
> schema(s) to which the sample
> document conforms. If
> there is one namespace and one schema, then this field shall be the URL of
> the one schema. If there
> is more than one namespace, then the syntax of this field shall adhere to
> that for xsi:schemaLocation
> attribute as defined by [XML]. When used for metadata, this is needed for
> decoding of the timed
> metadata by XML aware encoding mechanisms such as BiM.
>
>
>
> This tells me nothing of why one would want to signal content profile or
> why one would want to communicate namespace usage separately (from XMLNS
> declarations found in the document).
>
>
>
>
>
>
>
>
>
> Regarding
>
>
>
> The processing behaviour may or may not be expressed in terms of
> TTML1-style profile features. There's no language other than prose
> available for this purpose (that I know of).
>
>
>
> If a specification defines processing semantics that must be supported in
> order for a processor to conform to the specification, and if that
> specification does not define any feature/extension, then I would firstly
> view that as a broken specification; however, another potential
> interpretation is that the specification implies an otherwise unnamed
> feature/extension whose feature/extension designation corresponds to the
> profile designation. That is, the profile designation serves as a
> high-level, un-subdivided designation of the set of semantics mandated by
> compliance with the defined profile.
>
>
>
> Concerning 'broken': I note also that TTML1SE §3.3 [1] requires an
> implementation compliance statement (ICS) to support claims of compliance –
> it would seem reasonable to require this as an input to the registration
> process, or in TTML2 to weaken this requirement.
>
>
>
> [1] http://www.w3.org/TR/ttml1/#claims
>
>
>
>
>
> This might be a way out of this without having to have such specifications
> define individual, fine-grained feature/extension designations.
>
>
>
> Yes, that would be helpful to lower the barrier to entry.
>
>
>
>
>
> Anyway, I'm still waiting for someone to articulate a use case for
> signaling a content profile, or any aspect of a content profile (e.g.,
> namespaces, schemas).
>
>
>
> Did Mike's email including the relevant sections from 14496-12 not do this?
>
>
>
> No, it does not. I repeat, signaling a content profile can only have two
> purposes in the context of decoding/processing as far as I can tell:
>
>
>
> (1) to validate an incoming document, which is not yet done by any TTML
> processor, though we are looking at adding a @validation attribute in TTML2
> that could be used to require this;
>
>
>
> (2) to imply a processor (decoder) profile, in lieu of explicit signaling
> of a processor profile;
>
>
>
> In the context of the current thread, it seems only the second of these is
> potentially relevant. However, I have to ask why one wouldn't simply signal
> a processor profile instead of using a more complex process of signaling a
> content profile and then having the decoder/processor infer a processor
> profile from that content profile.
>
>
>
> If there are other reasons for signaling content profile (in the context
> of the current thread) then I haven't seen them articulated.
>
>
>
>
>
>
>
>
>
>
>
>
>
> On Thu, May 15, 2014 at 1:28 PM, Nigel Megitt <nigel.megitt@bbc.co.uk>
> wrote:
>
>   Since namespaces and schemas define and constrain document contents
> without defining processing behaviour, the quoted text defines a content
> profile declaration. It isn't asking for anything concerning specific
> processor capabilities but is merely describing the contents of the
> document. The information may be used for downstream processing by
> context-aware processors. The reference to namespace-aware compression
> makes clear that the mapping from whatever label scheme we choose to
> namespaces and schemas is important.
>
>
>
> However it's clear that we expect the receiving system to use the
> information to direct its processing, as described previously.
>
>
>
> Consider that the specification of a TTML variant x consists of the union
> of a content profile Cx and a description of processing behaviour Bx, which
> I'll express as S = C + B. The content profile shall itself reference one
> or more namespaces and schema locations. The processing behaviour may or
> may not be expressed in terms of TTML1-style profile features. There's no
> language other than prose available for this purpose (that I know of).
>
>
>
> It is possible to define two specifications S1 and S2 where S1 = Cx + Bx
> and S2 = Cx + By, i.e. the same contents are processed with different
> behaviour. By the quoted text there is no need to differentiate between
> them from an ISO 14496 perspective. However we understand from our
> knowledge of the problem space that it may be useful to signal to a
> receiving system which behaviour set is desirable. And it may be helpful in
> a receiving system to differentiate between the available behaviours in
> order to provide the optimal experience.
>
>
>
> Would it be contrary to the spirit of the ISO wording to assign short
> labels each corresponding to some Specification, and for receiving systems
> to be expected to dereference (using a cached lookup table!) from those
> labels to the namespaces and schema locations contained within that
> specification's content profile? This would satisfy the ISO requirements
> and permit us to signal additionally the processor features and behaviours.
> At this stage the expression of those is not our concern – just that there
> is a document somewhere that describes how the implementation should work.
>
>
>
> Going back to the previous example, if a document conforms to Cx then it
> could be signalled either as S1 or S2 or both, and if the content provider
> has verified that presentation will be acceptable either way then both S1
> and S2 would be declared, otherwise just one of them (or neither if there's
> some other Sn that also uses Cx).
>
>
>
> With this scheme combinatorial logic wouldn't really make sense – you
> could infer something about unions and intersections of content profiles
> but since the language used to describe processor behaviours can't be
> mandated (okay it could in theory, but it wouldn't be accepted in practice)
> it wouldn't be a well defined operation. Incidentally this is in no way a
> critique of the effort put in by Glenn, and its outcomes, in terms of
> defining content and processor profiles – though it might be nice to verify
> that this simple expression can be expanded into that scheme should a
> specification writer choose to do so.
>
>
>
> This implies that every combination of content profiles and behaviours
> must be considered carefully and registered as a new specification with a
> new label. It also implies that if a document declares conformance with a
> set of specifications then it must conform to every member of the set of
> content profiles and it may be processed according to any one of the set of
> processing behaviours.
>
>
>
> The expression of that set is as described previously, where we pick our
> favourite delimiter out of a hat made out of ampersands.
>
>
>
> Also: this topic was discussed in summary briefly on the call today and a
> new suggestion arose, that some guidance for 'reasons why the TTWG would
> reject an application for registration' would be helpful. When requiring
> combinations to be registered separately there's a greater need to ensure
> that the registration process is quick and painless, and this guidance
> would help us and those who may follow to expedite it.
>
>
>
> Nigel
>
>
>
>
>
> On 15/05/2014 18:00, "Michael Dolan" <mdolan@newtbt.com> wrote:
>
>
>
>    I believe the problem statement is to replace the potentially unwieldy
> long strings in the namespace & schema_location fields defined in 14496-12
> and 14496-30 with a more compact string suitable for the DASH manifest
> codecs field.
>
>
>
> From 14496-12, AMD2:
>
>
>
> namespace is a null-terminated field consisting of a space-separated list,
> in UTF-8 characters, of one or more XML namespaces to which the sample
> documents conform. When used for metadata, this is needed for identifying
> its type, e.g. gBSD or AQoS [MPEG-21-7], and for decoding using XML-aware
> encoding mechanisms such as BiM.
>
> schema_location is an optional null-terminated field consisting of a
> space-separated list, in UTF-8 characters, of zero or more URLs for XML
> schema(s) to which the sample document conforms. If there is one namespace
> and one schema, then this field shall be the URL of the one schema. If
> there is more than one namespace, then the syntax of this field shall
> adhere to that for the xsi:schemaLocation attribute as defined by [XML].
> When used for metadata, this is needed for decoding of the timed metadata
> by XML-aware encoding mechanisms such as BiM.
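As a reading aid for the two fields quoted above, here is a sketch of how they pair up when there is more than one namespace. The first namespace URI is the real TTML one; the second namespace and all schema URLs are hypothetical placeholders:

```java
// Sketch of the 14496-12 fields quoted above. With more than one namespace,
// schema_location follows xsi:schemaLocation syntax: alternating
// "namespace schemaURL" pairs, space-separated. The extension namespace
// and the schema URLs are hypothetical placeholders.
public class SampleEntryFields {
    public static void main(String[] args) {
        String namespace = String.join(" ",
                "http://www.w3.org/ns/ttml",
                "http://example.org/extension");
        String schemaLocation = String.join(" ",
                "http://www.w3.org/ns/ttml", "http://example.org/ttml.xsd",
                "http://example.org/extension", "http://example.org/ext.xsd");
        System.out.println(namespace);
        System.out.println(schemaLocation);
    }
}
```

It is strings like these, grown over several namespaces, that make the fields "potentially unwieldy" for a DASH manifest codecs field.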
>
>
>
> I’m warming up to the idea of requiring TTML content profiles be created
> for the combinations.
>
>
>
>                 Mike
>
>
>
>
>
>
>
> *From:* Glenn Adams [mailto:glenn@skynav.com]
> *Sent:* Thursday, May 15, 2014 9:15 AM
> *To:* Nigel Megitt
> *Cc:* Michael Dolan; TTWG
> *Subject:* Re: Draft TTML Codecs Registry
>
>
>
> My understanding from Dave was that the problem is how to answer the
> following method:
>
>
>
> boolean canPlay(String contentTypeWithParameters)
>
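One way the method above might be answered against a registry of short profile labels is sketched below. The labels "p1"/"p2", the "codecs=" parameter spelling, and the '&' combinator are all assumptions for illustration, not registered syntax:

```java
import java.util.Set;

// Hedged sketch of answering canPlay() from a registry of short profile
// labels. The parameter spelling "codecs=", the '&' combinator, and the
// labels "p1"/"p2" are illustrative assumptions, not registered syntax.
public class CodecsCheck {
    private static final Set<String> SUPPORTED = Set.of("p1", "p2");

    public static boolean canPlay(String contentTypeWithParameters) {
        int i = contentTypeWithParameters.indexOf("codecs=");
        if (i < 0) {
            return false; // no codecs parameter to judge by
        }
        String codecs = contentTypeWithParameters.substring(i + "codecs=".length());
        // Under an "implement all of these" reading of a combination,
        // every listed label must be supported for the answer to be true.
        for (String label : codecs.split("&")) {
            if (!SUPPORTED.contains(label.trim())) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(canPlay("application/ttml+xml;codecs=p1"));    // true
        System.out.println(canPlay("application/ttml+xml;codecs=p1&p9")); // false
    }
}
```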
>
>
> I have not seen any statement of a problem that relates to signaling
> content conformance.
>
>
>
> As for requirements driving the ability to express a combination of
> profiles, we already have mechanisms (in TTML1), and will have more (in
> TTML2), that permit a user to characterize processing requirements by
> means of a combination of existing profiles. Consequently, any shorthand
> signaling of first-order processor support needs to be able to express
> such combinations.
>
>
>
> I don't buy any "it's too complex" argument thus far, primarily because
> nobody has stated what is (overly) complex in sufficient detail to
> understand whether there is a problem or not.
>
>
>
> My perception of the TTML profile mechanism is that it is easy to
> understand and implement, and, further, that it is a heck of a lot easier
> to understand and implement than XML Schemas.
>
>
>
>
>
> On Thu, May 15, 2014 at 9:58 AM, Nigel Megitt <nigel.megitt@bbc.co.uk>
> wrote:
>
> Agreed there's a gulf of understanding/expectation that we need to bridge.
>
>
>
> Can anyone volunteer to draft a set of requirements for this
> functionality, in the first instance the smallest set needed to meet the
> ISO specs? (Mike, I guess I'm thinking of you, following our discussion at
> the weekly meeting earlier.)
>
>
>
>
>
> On 15/05/2014 16:48, "Glenn Adams" <glenn@skynav.com> wrote:
>
>
>
>  I can see this subject is not going to be resolved easily, as we clearly
> have a large gap regarding requirements; e.g., I think there are no
> requirements to signal content conformance, only client processor
> requirements; I think we must use the TTML profile mechanism; etc.
>
> On Thursday, May 15, 2014, Michael Dolan <mdolan@newtbt.com> wrote:
>
> Maybe "highly undesirable", but if we don't address the A + B signaling
> explicitly, then profiles need to be created for all the combinatorics of
> namespaces in practice. Not the end of the world, but it virtually
> prevents the simple signaling of 3rd-party namespaces that the
> namespace/schemaLocation mechanism already provides today. No, I am not
> proposing we use that - I am pointing out a deficiency in this proposal
> that we already address today in 14496.
>
> Anyway, we need to go through the points in my email a week ago - if not
> today, then on the 29th.
>
>         Mike
>
> -----Original Message-----
> From: David Singer [mailto:singer@mac.com]
> Sent: Thursday, May 15, 2014 5:20 AM
> To: Glenn Adams
> Cc: TTWG
> Subject: Re: Draft TTML Codecs Registry
>
> OK
>
> Though it will be a sub-parameter of the codecs parameter for the MP4
> file type, from the point of view of TTML it's actually a profile
> short-name registry rather than a codecs registry, so I think we should
> rename it.
>
> The values here should be usable in both:
> a) the profiles parameter for the TTML mime type
> b) the codecs parameter for the MP4 mime type
>
> So, also "named codecs" -> "named profiles".
>
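Hypothetical examples of the two uses Dave lists. The short label "p1" and the exact parameter spellings are placeholders for illustration, not registered values:

```java
// Hypothetical examples of the two parameter positions listed above. The
// short label "p1" and the exact parameter spellings are placeholders for
// illustration, not registered values.
public class MimeExamples {
    public static void main(String[] args) {
        // a) profiles parameter on the TTML mime type
        String ttmlType = "application/ttml+xml;profiles=p1";
        // b) codecs parameter on the MP4 mime type (short name carried as
        //    a sub-parameter of the codecs value)
        String mp4Type = "application/mp4;codecs=\"stpp.p1\"";
        System.out.println(ttmlType);
        System.out.println(mp4Type);
    }
}
```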
>
>
> I agree with Cyril that we only need a single operator here (implement one
> of these profiles and you're good to go), both because we don't need the
> complexity, and because an "implement both/all of these" operator is
> effectively inviting file authors to make up new profiles ("to process
> this document you have to implement both A and B"), which is (IMHO) highly
> undesirable.
>
>
>
> On May 15, 2014, at 9:55 , Glenn Adams <glenn@skynav.com> wrote:
>
> > See [1].
> >
> > [1] https://www.w3.org/wiki/TTML/CodecsRegistry
>
> Dave Singer
>
> singer@mac.com
>
>
>
> ----------------------------
>
>
>
> http://www.bbc.co.uk
> This e-mail (and any attachments) is confidential and may contain personal
> views which are not the views of the BBC unless specifically stated.
> If you have received it in error, please delete it from your system.
> Do not use, copy or disclose the information in any way nor act in
> reliance on it and notify the sender immediately.
> Please note that the BBC monitors e-mails sent or received.
> Further communication will signify your consent to this.
>
> ---------------------
>
>
>
>
>
>
>
> This message may contain confidential and/or privileged information. If
> you are not the intended recipient you must not use, copy, disclose or take
> any action based on this message or any information herein. If you have
> received this message in error, please advise the sender immediately by
> reply e-mail and delete this message. Thank you for your cooperation.
> Screen Subtitling Systems Ltd. Registered in England No. 2596832.
> Registered Office: The Old Rectory, Claydon Church Lane, Claydon, Ipswich,
> Suffolk, IP6 0EQ
>
>
>
>
>
>
>
>
>
>
>
>
>

Received on Tuesday, 20 May 2014 00:22:46 UTC