
Re: Liaison response - template on MIME type parameter for TimedText

From: David Singer <singer@apple.com>
Date: Wed, 14 May 2014 08:59:37 +0200
Cc: Cyril Concolato <cyril.concolato@telecom-paristech.fr>, TTWG <public-tt@w3.org>
Message-id: <3909E9F2-BE47-47F5-BC2F-8F27269E62E3@apple.com>
To: Glenn Adams <glenn@skynav.com>

On May 13, 2014, at 19:46 , Glenn Adams <glenn@skynav.com> wrote:

> 
> On Tue, May 13, 2014 at 11:04 AM, David Singer <singer@apple.com> wrote:
> 
> On May 13, 2014, at 18:40 , Glenn Adams <glenn@skynav.com> wrote:
> 
> > Much of this is based on where we are with TTML1 and where folks want to go with TTML2. TTML1 defined only processor profiles, and explicitly ruled out treating them as content profiles. Now folks also want content profiles for labeling document conformance (and potentially selecting validation processing).
> >
> > We have a couple of paths here:
> >       • introduce a new concept and mechanism: content profiles, while reusing as much existing vocabulary as possible, e.g., reuse ttp:{profile,features,feature,extensions,extension} element types to support both uses;
> >       • merge both concepts and mechanisms into one concept/mechanism;
> > We have already started down the first of these paths. I haven't even considered fully what it would entail to do the second.
> >
> > Given the general confusion folks seem to have about whether a profile is defining content constraints or describing processor requirements, I strongly prefer the first path since it makes these concepts explicit and distinct. Following the second path, in my mind, would continue to maintain the confusion about the role (and semantics) of a profile.
> >
> > You appear to be implicitly arguing for the second path. I wonder what others feel about this.
> 
> It’s the way it’s done for video and audio codecs such as H.264 and H.265, file formats such as ISO BMFF, and distribution technologies such as DASH, and so on.
> 
> I have no problem with allowing intrinsic indicators such as schema locations, namespaces used, content profiles conformed to, or the marital status of the content author; I just don’t think they are terribly relevant to the problem at hand, which is attempting to solve the question “can I (both ability and permission) process this document?”. That’s what is needed when given a MIME type annotated with parameters: a canPlay decision.
> 
> In that case, we should only be discussing processor profiles and stop talking about content profiles, at least in the present thread.
> 
> Earlier in this thread, I see folks talking about "dialect naming" and namespaces used, etc. Both of those are related only to content profiling and not processor profiling (answering the canPlay question).

namespaces, yes. But surely SMPTE-TT and its use of images imply a different processing capability, for example?
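The canPlay question recurring through this thread — "do I implement the processing requirements of at least one of the named profiles?" — reduces to a set intersection. A minimal sketch (the profile identifiers below are invented for illustration, not taken from any registry):

```python
# Hypothetical sketch of the "canPlay" decision: a client holds a set of
# processor-profile identifiers it supports and checks whether at least one
# profile named by the document is among them.

def can_play(declared_profiles, supported_profiles):
    """Return True if the client implements at least one declared profile."""
    return bool(set(declared_profiles) & set(supported_profiles))

# A client implementing two (invented) profiles:
supported = {"http://example.com/ttml/profile/A",
             "http://example.com/ttml/profile/B"}

# A document declaring A and C is playable, because A is supported.
print(can_play(["http://example.com/ttml/profile/A",
                "http://example.com/ttml/profile/C"], supported))  # True
```

This is the simple model David is arguing for; the rest of the thread debates whether the identifiers name content profiles, processor profiles, or both.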

> 
> If folks can agree that we are only talking about processor profiles (the canPlay question), then I can reformulate the semantics of the suggested codecs parameter strictly in terms of processor profiles.
>  
> 
> >
> > On Tue, May 13, 2014 at 9:54 AM, David Singer <singer@apple.com> wrote:
> >
> > On May 13, 2014, at 17:22 , Glenn Adams <glenn@skynav.com> wrote:
> >
> > >
> > > On Tue, May 13, 2014 at 3:20 AM, David Singer <singer@apple.com> wrote:
> > > Hi Glenn
> > >
> > > I still worry that you are making this much more complex than it needs to be.
> > >
> > > at first order, we need to understand the process precisely and understand the parameters involved, if for no other reason, than to document that process correctly; we can then choose to what extent we want to expose those parameters to the author/processor
> > >
> > >  In other places, we write that when a document says P,Q,R as the list of profiles, then:
> > >
> > > * the document contains everything that is required by any of P, Q, and R
> > > * the document contains nothing contrary to any of P, Q, and R
> > > * the document creator will be satisfied by a processor that implements exactly only the required processing behavior of any one of P, Q or R
> > >
> > > the first two constraints above correspond with the way I defined ttp:contentProfileCombination="mostRestrictive": prohibited > required > optional
> > >
> > > the third constraint requires something different than has previously been mentioned; so far, we have talked about inferring a PP from a single CP, namely the effective CP resulting from combining constituent CPs; however, here you suggest inferring a PP from any constituent CP of the ECP; this could be handled by using a second parameter
> >
> > Life gets a lot simpler if a profile has a definition of both the content and processing requirements, under a single label.
> >
> > >
> > > ttp:inferProcessorProfileSource = (combined|first) : combined
> > >
> > > where combined means use the combined ECP as the source, and first means use the first constituent CP whose mapped PP (obtained via the inferProcessorProfileMethod, renamed from the earlier inferProcessorProfile parameter - see below) is supported by the processor;
> > >
> > > so, let's say we rename the earlier parameter inferProcessorProfile to inferProcessorProfileMethod, thus ending up with the following parameters:
> > >
> > > ttp:inferProcessorProfileSource = (combined|first) : combined
> > > ttp:inferProcessorProfileMethod = (loose|strict) : loose
> > >
> > > the first of these determines which CP to use as the source for inferring the PP, and the second of these determines the mapping from the CP constraints to the PP constraints; namely, how 'optional' in the CP is mapped to the PP: if 'loose' then optional -> optional, and if 'strict', then optional -> required;
> > >
> > > given these parameters and the others mentioned before, the treatment you suggest for codecs filtering maps to:
> > >
> > > ttp:contentProfileCombination="mostRestrictive"
> > > ttp:inferProcessorProfileSource="first"
> > > ttp:inferProcessorProfileMethod="loose"
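The inference method Glenn proposes hinges on one mapping: how a content profile's 'optional' disposition becomes a processor-profile requirement. A sketch of that mapping as described ('loose' keeps optional optional, 'strict' promotes it to required); carrying other values through unchanged is my assumption, not stated in the thread:

```python
# Sketch of ttp:inferProcessorProfileMethod: map each feature/extension value
# in a content profile (CP) to a value in an inferred processor profile (PP).
# Only the handling of 'optional' is specified in the discussion; passing
# 'required' (and any other value) through unchanged is an assumption.

def infer_processor_profile(content_profile, method="loose"):
    inferred = {}
    for feature, value in content_profile.items():
        if value == "optional":
            inferred[feature] = "optional" if method == "loose" else "required"
        else:
            inferred[feature] = value  # assumed pass-through
    return inferred

cp = {"#feature-F": "optional", "#feature-G": "required"}
print(infer_processor_profile(cp, "loose"))   # F stays optional
print(infer_processor_profile(cp, "strict"))  # F becomes required
```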
> >
> > again, I think this is way too complex.
> >
> > The client needs to answer the question “can I process this?” in two respects: “am I able?” and “am I allowed to?”.  I see little or no value in allowing content makers effectively to define new profiles by combining other ones; they should define the profile, and publish its identifier (and be prepared to defend it).  I also see little value in being able to say “this document conforms to CP X but you’re not allowed to process it even if you implement PP X”, and clearly the opposite “this document does not conform to CP X but go ahead if you implement PP X” is fairly absurd.
> >
> > >
> > >
> > >
> > > Then a client examines the list, and answers the simple question: do I implement the processing requirements of at least one of the named profiles?
> > >
> > > It isn't quite as simple as that since the named profiles are naming content profiles and not processor profiles, so we have to map the former to the latter to ask a well-formed question.
> >
> > As I say, I don’t think this is a useful distinction.  Yes, profiles can have distinct rules for content and for processing (and often do), but having distinct names and concepts seems un-needed complexity.
> >
> >
> > >
> > > If yes, then process the document as best I can.  If no, this document is not for me.
> > >
> > > Note that the client is permitted to exceed the requirements of the profile(s) it supports, and also process items that are optional, extensions, and so on, but it must meet at least one profile.
> > >
> > > I think that the simple profile list you mention is along these lines, and I don’t think we need anything more complex than this.
> > >
> > > We have to deal with the overall consequences of the following requirements:
> > >       • distinguishing content profiles from processor profiles
> > >       • combining multiple constituent content profiles into a single effective content profile
> > >       • combining multiple constituent processor profiles into a single effective processor profile
> > >       • inferring a processor profile from a content profile
> >
> > I don’t see the need for any of these, for the simple cases.
> >
> > > To satisfy these requirements, we appear to need at least the four parameters mentioned:
> > >
> > > ttp:contentProfileCombination
> > >
> > > determines how multiple content profiles are combined into a single combined content profile
> > >
> > > ttp:processorProfileCombination
> > >
> > > determines how multiple processor profiles are combined into a single combined processor profile (note that this parameter is called ttp:profileCombination in the current TTML2 draft, but probably should be renamed to make clear the type of profile and distinguish it from the ttp:contentProfileCombination parameter)
> > >
> > > ttp:inferProcessorProfileSource
> > >
> > > determine the source content profile to use when inferring a processor profile
> > >
> > > ttp:inferProcessorProfileMethod
> > >
> > > determine how to map feature/extension constraints in source content profile to feature/extension constraints in inferred processor profile
> > >
> > > I don't think we can simplify further without changing the requirements enumerated above.
> >
> > I think I need to understand what problems those requirements solve.
> >
> > >
> > > I agree that document requirements (must/must not/should/may…be present) and processing requirements (must/should process, must indicate an error if…) are distinct, and profiles generally document both of them.  But I think that in marking a document with a profile, you are implicitly buying into both of them;
> > >
> > > Agreed. This is a common understanding. However, what is not generally considered is how the two are distinct. In particular, does a feature that is optionally used in a content profile imply that support is optional or required in a corresponding processor profile?
> >
> > That needs stating in the profile definition.  “X may be present, and may be ignored in processing” or “X may be present, but if present must be processed” (or even “X may be present, but if present must be ignored”).
> >
> > > We can establish a default, which is what I've done above by making ttp:inferProcessorProfileMethod have a default value of 'loose', thus meaning an optional feature in a CP maps to optional support in a corresponding inferred PP.
> > >
> > > notably, if you have two profiles P, Q with the same document requirements but Q has better, stronger, processing requirements
> > >
> > > you are mixing CP and PP semantics in this statement; if P and Q specify document requirements, then they are CPs, and if they specify the same document requirements, then they are the same CP;
> > >
> > > if P == Q but IPP(P) != IPP(Q) [IPP = inferred processor profile], then the inference rules, i.e., the parameters of the function IPP(), must be distinct
> > >
> > > , and you feel that P-level processing is not good enough, then don’t mark P as a profile on the document, even though the document itself conforms to P — because you do not want P-level processors processing it.
> > >
> > > right, assuming "P-level processing" means IPP(P); but note that the syntax we are discussing for the codecs parameter only allows enumerating content profiles, and not processor profiles,
> >
> > as I say, I don’t see the utility of separate concepts here outside the profile definitions
> >
> > > then an inference method must produce the corresponding processor profiles for this pre-filtering stage; importantly: this (inability to explicitly enumerate processor profiles in the codecs syntax) does not apply internally to TTML documents or the actual processing of a TTML document, where the author may specify both CPs and PPs;
> > >
> > >
> > > I think that the simple TTML profile combination meets this, but I am not sure:
> > > > TTML1 already allows including multiple ttp:profile elements [1] and defines a hardwired combination method:
> > >
> > > can you confirm?
> > >
> > > yes, TTML1 does presently allow specifying multiple processor profiles (not content profiles) and uses a hardwired combination method that corresponds with the replace method defined in TTML2; however, this does not map to the process we discussed above with respect to CP combination and PP inference; so I think it won't be particularly useful;
> > >
> > > note that we can define the codecs syntax pre-filtering process in terms of TTML2 vocabulary and semantics, even when applied to TTML1 based documents (and profiles)
> > >
> > >
> > >
> > > On May 12, 2014, at 21:15 , Glenn Adams <glenn@skynav.com> wrote:
> > >
> > > >
> > > > On Mon, May 12, 2014 at 10:37 AM, David Singer <singer@apple.com> wrote:
> > > > Hi Glenn, comments and questions inline…
> > > >
> > > > On May 12, 2014, at 18:21 , Glenn Adams <glenn@skynav.com> wrote:
> > > >
> > > > >
> > > > > You say singular “the”, but a document can be conformant with more than one profile, can’t it?  How do I indicate that?
> > > > >
> > > > > In TTML1, it is not possible. However, we do have an open issue to add support to TTML2 to allow defining a profile by referencing multiple referenced profiles [1]. This mechanism may be used to refer to such a combined profile, where the profile designator makes reference to the definition of the combined profile, e.g., [using the mechanisms for defining processor profile]
> > > > >
> > > > > #1 referencing a combination processor profile
> > > >
> > > > Got it, but that doesn’t enable a content author, but a profile definer…
> > > >
> > > > > #2 referencing multiple processor profiles from ttp:profile attribute
> > > > >
> > > > > <tt ttp:profile="http://example.com/ttml/profile/A http://example.com/ttml/profile/B http://example.com/ttml/profile/C" ttp:profileCombination="leastRestrictive">
> > > >
> > > > yes, that works
> > > >
> > > > > #3 embedding multiple processor profiles with ttp:profile element
> > > > >
> > > > > <tt ttp:profileCombination="leastRestrictive">
> > > > > <head>
> > > > > <ttp:profile use="http://example.com/ttml/profile/A"/>
> > > > > <ttp:profile use="http://example.com/ttml/profile/B"/>
> > > > > <ttp:profile use="http://example.com/ttml/profile/C"/>
> > > > > </head>
> > > > > ...
> > > > > </tt>
> > > >
> > > > that works too
> > > >
> > > > >
> > > > > note that TTML1 already allows this type of combination profile definition but defines a hard-wired (rather than author specified) combination method
> > > >
> > > > sorry, you lost me
> > > >
> > > > TTML1 already allows including multiple ttp:profile elements [1] and defines a hardwired combination method:
> > > >
> > > > If more than one ttp:profile element appears in a Document Instance, then all specified profiles apply simultaneously. In such a case, if some feature or some extension is specified by one profile to be used (mandatory and enabled) and by another profile to be required (mandatory) or optional (voluntary), then that feature or extension must be considered to be used (mandatory and enabled); if some feature or some extension is specified by one profile to be merely required (mandatory) and by another profile to be optional (voluntary), then that feature or extension must be considered to be required (mandatory).
> > > >
> > > > This is equivalent to specifying ttp:profileCombination="replace" as currently defined in TTML2 [2].
> > > >
> > > > [1] http://www.w3.org/TR/2013/REC-ttml1-20130924/#vocabulary-profiles
> > > > [2] https://dvcs.w3.org/hg/ttml/raw-file/tip/ttml2/spec/ttml2.html#parameter-attribute-profileCombination
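The hardwired TTML1 rule quoted above is a per-feature "strongest value wins" merge, with the ordering used (mandatory and enabled) > required (mandatory) > optional (voluntary). A sketch:

```python
# Sketch of the TTML1 hardwired combination rule for multiple ttp:profile
# elements: for each feature/extension, the strongest disposition wins,
# ordered used > required > optional. Profile contents below are invented.

RANK = {"optional": 0, "required": 1, "used": 2}

def combine_ttml1(profiles):
    combined = {}
    for profile in profiles:
        for feature, value in profile.items():
            prev = combined.get(feature, "optional")
            combined[feature] = max(prev, value, key=RANK.get)
    return combined

p1 = {"#feature-F": "used", "#feature-G": "optional"}
p2 = {"#feature-F": "required", "#feature-G": "required"}
print(combine_ttml1([p1, p2]))
# {'#feature-F': 'used', '#feature-G': 'required'}
```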
> > > >
> > > >
> > > >
> > > > > However, we need to be clear about the purpose of using a profile here. It is *not* to specify conformance, at least from the way I see people discussing this matter. Rather, it is to specify what processor profile is required to process the document. In other words, what features must be supported by processor according to author's requirements. This is distinct from what profile(s) the document conforms to. For example, a document may conform to a profile in which a feature is optionally used, but then require that feature be supported in order for it to be processed.
> > > >
> > > > I am not sure I get the distinction.
> > > >
> > > > * This processor profile is required to process the document.
> > > >
> > > > If this can be rephrased as follows, then yes:
> > > >
> > > > "A processor must abort processing (unless overridden) when the effective processor profile specifies a feature/extension is required and the processor does not support that feature/extension."
> > > >
> > > > where "effective processor profile" is the result of combining all processor profiles referenced/defined by a document, and the method of combination is specified by ttp:profileCombination.
> > > >
> > > > * This document conforms to this profile.
> > > >
> > > > If this can be rephrased as follows, then yes:
> > > >
> > > > "A document declares it satisfies (or otherwise conforms with) the effective content profile. In addition, in the absence of declared processor profile, a processor may infer a processor profile from this effective content profile."
> > > >
> > > > where "effective content profile" is the result of combining all content profiles referenced/defined by a document, and the method of combination is specified by ttp:contentProfileCombination.
> > > >
> > > >
> > > > What is the practical difference here, for a client trying to decide “I support profiles X, Y, Z; can/should I process this document?”.
> > > >
> > > > To answer this question in general, the client must determine the effective processor profile by combining all referenced/included/inferred processor profiles. A content profile declaration would only apply in the absence of an explicitly referenced or included processor profile, i.e., only when it is necessary to infer a processor profile from the effective content profile.
> > > >
> > > > Isn’t it just a question of how conformance is defined (as a format question or as a processing question)?
> > > >
> > > > The definitions of content conformance and processor conformance are distinct.
> > > >
> > > > A document may (or not) conform to a content profile, a determination that can be made by a content validator/verifier using some set of specifications, including, e.g., schemas, custom verification tools, etc.
> > > >
> > > > A processor on the other hand doesn't, strictly speaking, conform to a processor profile. It conforms to general semantic requirements of TTML, e.g., the ability to compute a document's effective processor profile and test whether it (the processor) supports that profile's required features/extensions. Thus, it is better to ask whether a processor "supports" or "satisfies" a given processor profile, and not whether a processor "conforms" with a processor profile.
> > > >
> > > > Also, with respect to content profiles, it is better to ask whether a processor "supports a processor profile implied by (inferred from) a content profile".
> > > >
> > > > Why do these concepts of processor profile and content profile need to be distinct? It is best to give an example:
> > > >
> > > > Let's say that content profile C defines a feature F to be optional, meaning it may but need not be present. By itself this doesn't say anything about whether a processor must support F. Now, an author may decide to use F in some documents but not others, all of which conform to C. Now, let's say an author wants all processors that may process these documents to be able to correctly support F if it is present. So the author defines a processor profile P that specifies that support for F is required. Now this P is not the same as C, since the former says (support for) F is required, and the latter says (use of) F is optional.
> > > >
> > > > If the author were to only make reference to content profile C, and not reference a processor profile, then a processor profile would need to be inferred from C, in which case a determination must be made as to whether support for F is required or optional. Depending on how we define this default inference process, the inferred processor profile may or may not meet the original requirements of the author (that F must be supported by processor whether or not F is used in a document), in which case the author might need to specify the suggested ttp:inferProcessorProfile.
> > > >
> > > > So to summarize, the author has the following options (without considering an external codecs/profile hint):
> > > >
> > > > #1 declare only a processor profile;
> > > > #2 declare only a content profile,  in which case a processor profile is inferred at processing time;
> > > > #3 declare both processor and content profiles;
> > > > #4 declare neither processor nor content profile, in which case a default processor profile is determined by the document interchange context, or, if no context or the context doesn't specify a default, then choose a default based on the type of processing (transform vs presentation) and the version of TTML that applies;
> > > >
> > > > When processing on client, the processor looks only at the processor profile, unless it is performing validation processing, in which case it would look at the content profile. If there is no declared content profile, then no validation is possible, i.e., content profile is never inferred from processor profile, etc.
> > > >
> > > >
> > > > * Any one of these processor profiles are required to process the document.
> > > > * This document conforms to these profiles.
> > > >
> > > > and these are the same with higher cardinality.
> > > >
> > > > > The only utility of a statement of content profile conformance is to (1) perform validation processing, and/or (2) to imply a processor profile in the absence of an explicit declaration of processor profile. From what I can tell in this discussion, folks are primarily thinking about the second of these uses of a content profile conformance declaration. Furthermore, it appears that, in regard to discussing references to multiple content profiles, folks are assuming that a disjunction combinator applies; namely, that the least restrictive expression of any given feature usage requirement would apply to creating a corresponding processor support requirement.
> > > > >
> > > > > For example, say we have three content profiles P, Q, and R that define one feature F, where P makes F prohibited, Q makes F optional, and R makes F required.
> > > >
> > > > Then it’s only possible to make a document that conforms to 2 of them (F is absent: P and Q; F is present: Q and R).
> > > >
> > > > "conforms simultaneously to 2 of them"
> > > >
> > > >
> > > > > If we then had an expression of conformance (where "leastRestrictive" profile is similar to an "or" or "union" operation), e.g.,
> > > > >
> > > > > <tt ttp:contentProfile="P Q R" ttp:contentProfileCombination="leastRestrictive"/>
> > > >
> > > > No, stop, we’re not asking that.
> > > >
> > > > <tt ttp:contentProfile="P Q R" ttp:contentProfileCombination="leastRestrictive"/>
> > > >
> > > > would mean
> > > > a) everything in the document is either permitted by P Q and R (or is ignorable — permitted ignorable stuff under P Q and R)
> > > >
> > > > actually, leastRestrictive is defined to mean that when merging values component-wise for a given feature F (or extension E), then choose the least restrictive value (optional > required > prohibited);
> > > >
> > > > so for the example, the combination of P, Q, R would be F: optional; if said document does use F, then it conforms to the combination, but not to P; if said document does not use F, then it conforms to the combination, but not to R
> > > >
> > > > in contrast ttp:contentProfileCombination="mostRestrictive" means choose most restrictive (prohibited > required > optional);
> > > >
> > > > so for the example, the combination of P, Q, R would be F: prohibited; if said document does use F, then it does not conform to the combination, but does conform individually to Q and R; if said document does not use F, then it conforms to the combination, but not to R individually;
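The two combination methods differ only in which value wins the component-wise merge: 'leastRestrictive' prefers optional > required > prohibited, 'mostRestrictive' the reverse. A sketch using the P/Q/R example (feature dispositions as in the thread; the data representation is mine):

```python
# Sketch of leastRestrictive vs mostRestrictive content-profile combination:
# merge per-feature values, keeping whichever value ranks highest in the
# method's preference order.

ORDERS = {
    "leastRestrictive": ["prohibited", "required", "optional"],  # later wins
    "mostRestrictive":  ["optional", "required", "prohibited"],
}

def combine(profiles, method):
    rank = {v: i for i, v in enumerate(ORDERS[method])}
    combined = {}
    for profile in profiles:
        for feature, value in profile.items():
            prev = combined.get(feature)
            if prev is None or rank[value] > rank[prev]:
                combined[feature] = value
    return combined

# The thread's example: P prohibits F, Q makes F optional, R requires F.
P, Q, R = {"F": "prohibited"}, {"F": "optional"}, {"F": "required"}
print(combine([P, Q, R], "leastRestrictive"))  # {'F': 'optional'}
print(combine([P, Q, R], "mostRestrictive"))   # {'F': 'prohibited'}
```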
> > > >
> > > > b) there is nothing in the document contrary to any of P Q or R
> > > >
> > > > it is up to the author to determine how they want to declare conformance; in this (admittedly) complex example, the author has declared conformance with two mutually conflicting content profiles, P and R; the combination methods are defined to produce reproducible results, which they do accomplish;
> > > >
> > > >
> > > > c) if you implement at least one of P Q or R, then you can process the document, ignoring stuff that is not in the profile(s) you support, and the result is OK by the content author.
> > > >
> > > > in the above example, the author has only declared a (set of) content profile(s), so it will be necessary to infer a processor profile from the effective content profile, i.e., the content profile produced by performing the content profile combination method on the declared content profiles;
> > > >
> > > > if we alter our example and say that P, Q, and R are not pair-wise mutually conflicting, and ttp:inferProcessorProfileMethod is not specified (in which case it defaults to 'loose'), then we would end up with an effective processor profile that requires support for a feature F only if all of P, Q, and R require support for F, otherwise support for F is optional;
> > > >
> > > > if ttp:contentProfileCombination were specified as 'mostRestrictive', then the same configuration would require support for a feature F if any of P, Q, or R require support for F, otherwise support for F is optional;
> > > >
> > > >
> > > > >
> > > > > I understand this desire. However, it is also clear that we are not going to define a calculus for use in a codecs parameter that is equivalent (in terms of expressibility) with the formal definition mechanism for processor and content profiles in TTML2.
> > > >
> > > > I don’t see any calculus required.
> > > >
> > > > If we define the syntax P+Q+R as a suffix of codecs (following stpp.ttml, e.g., "stpp.ttml.P+Q+R") to mean:
> > > >       • P, Q, and R are identifiers that map to pair-wise non-conflicting content profiles;
> > > >       • determine an effective processor profile as follows:
> > > >               • compute effective content profile ECP by combining P, Q, and R using content profile combination method "leastRestrictive"
> > > >               • infer an effective processor profile EPP from ECP using the "loose" inference method
> > > >       • if EPP contains a required feature/extension that is not supported by the processor, then do not fetch resource
> > > >       • otherwise, fetch resource and commence processing of profile declarations contained in resource (in tt:tt and tt:head elements);
> > > >       • if processing of profile declarations contained in resource result in processing being aborted, then do not fetch remainder of resource and continue to next candidate resource
> > > > Using this method, the codecs parameter could serve as a pre-filter on fetching the resource and the full TTML profile processing semantics (based on declarations found in the resource) could occur next. So there are two opportunities to reject and pass over a candidate resource in a list of resources: once using the codecs parameter, and once using the TTML resource's profile declarations.
> > > >
> > > > A conservative client can ignore the codecs parameter and fetch each resource to perform the TTML specified processor profile semantics; a liberal client can perform the codecs-based pre-filtering step and reject fetches without performing the TTML specified profile processing.
> > > >
> > > > The above procedure is a "limited calculus" that would appear to provide the desired filtering without the need to fetch the resource or perform the full TTML profile computations (in the pre-fetch stage).
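The pre-filtering steps enumerated above can be sketched end to end: parse the profile suffix of the codecs value, combine the named content profiles with "leastRestrictive", infer a processor profile with the "loose" method, and skip the fetch if any required feature is unsupported. The profile registry, short identifiers, and feature names below are all invented:

```python
# Sketch of the proposed codecs-parameter pre-filter for "stpp.ttml.P+Q+R".
# PROFILES is a hypothetical registry mapping short identifiers to content
# profiles (feature -> disposition).

PROFILES = {
    "P": {"F": "required"},
    "Q": {"F": "required"},
    "R": {"F": "required", "G": "optional"},
}

LEAST = {"prohibited": 0, "required": 1, "optional": 2}  # later wins

def should_fetch(codecs, supported_features):
    ids = codecs.split(".")[-1].split("+")   # "stpp.ttml.P+Q+R" -> P, Q, R
    ecp = {}                                 # effective content profile
    for cp in (PROFILES[i] for i in ids):
        for feature, value in cp.items():
            if feature not in ecp or LEAST[value] > LEAST[ecp[feature]]:
                ecp[feature] = value         # leastRestrictive merge
    # "loose" inference: optional stays optional, required stays required,
    # so only features required in the ECP gate the fetch.
    required = {f for f, v in ecp.items() if v == "required"}
    return required <= supported_features

print(should_fetch("stpp.ttml.P+Q+R", {"F"}))  # True: required F is supported
print(should_fetch("stpp.ttml.P+Q+R", set()))  # False: required F unsupported
```

As the thread notes, a False result only skips the fetch; a True result still leads to the full in-document profile processing once the resource is retrieved.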
> > > >
> > > > Look, we have a number of TTML profiles already with significant overlap.  If I write a document that stays in that intersection, someone implementing EBU TT, SMPTE TT, W3C DFXP, and so on, should all be fine.  I suspect many simple cases will fall into this intersection.  Documents ought to be able to say so.
> > > >
> > > > I agree they should be able to do this internally, and they certainly will be able to do so in a fuller manner in the TTML2 context. Not so much in TTML1.
> > > >
> > > > Defining the codecs parameter and a higher level (fetch filtering) proposal such as described above should be able to handle this both for existing TTML1 and new TTML2 resources.
> > > >
> > > >
> > > >
> > > > David Singer
> > > > Manager, Software Standards, Apple Inc.
> > > >
> > > >
> > >
> > > David Singer
> > > Manager, Software Standards, Apple Inc.
> > >
> > >
> >
> > David Singer
> > Manager, Software Standards, Apple Inc.
> >
> >
> 
> David Singer
> Manager, Software Standards, Apple Inc.
> 
> 

David Singer
Manager, Software Standards, Apple Inc.
Received on Wednesday, 14 May 2014 07:00:06 UTC