RE: Draft TTML Codecs Registry - Issue-305

Hi Glenn,

CIL (comments in line, originally in green)

Best regards,
John

John Birch | Strategic Partnerships Manager | Screen
Main Line : +44 1473 831700 | Ext : 2208 | Direct Dial : +44 1473 834532
Mobile : +44 7919 558380 | Fax : +44 1473 830078
John.Birch@screensystems.tv | www.screensystems.tv | https://twitter.com/screensystems


Visit us at
Broadcast Asia, Marina Bay Sands, Singapore 17-20 June, Stand 5E4-01

Before printing, think about the environment

From: Glenn Adams [mailto:glenn@skynav.com]
Sent: 19 May 2014 12:00
To: John Birch
Cc: nigel.megitt@bbc.co.uk; mdolan@newtbt.com; public-tt@w3.org
Subject: Re: Draft TTML Codecs Registry - Issue-305



On Mon, May 19, 2014 at 6:24 PM, John Birch <John.Birch@screensystems.tv> wrote:
Hi Glenn,

For me Validation is the highest priority. Current workflows for audio / visual presentations, regardless of delivery mechanism, all rely upon validation of the content.

ok, i have no problem with your priority; however, nothing to date in TTML defines validation processing, so you seem to be articulating a priority that isn't justified by the spec
It is justified by current practices that involve TTML use.

The ground rule is that all the content is validated and therefore will work on a mandatorily compliant processor / decoder that must include the features required for the content profile …

that is a wrong assumption; i have said it before and will say it again... a feature that is optional in content may or may not be supported by a processor; so saying a document is valid makes no statement about whether a processor supports the features used by the document;
This is the ground rule used by current content creators for AV material. I'm not saying this is the rule for TTML. The ground rule is that a processor MUST support the features required by the specification and, by extension, the features used by conforming documents.

There is no scope (or concept) of the decoding processor needing to make decisions about whether it can process the content, or what it should do with parts of the content it does not understand.

i'm not sure what you mean by this, since it flies in the face of what is specified in TTML1;
Again, I am talking about AV content presentation in general.


These concepts are superfluous non-issues when a) all content is validated to an agreed specification, b) all processors / decoders are expected to handle all features of that specification, and c) any intentional graceful degradation of capabilities (features) is already built into the specification.

it is not true that all processors are expected to handle all features of a specification; if you have made that assumption, then you are being presumptuous
My presumption is that made by practical implementers of very many content distribution systems.  In fact, the Internet is the ‘special’ case, where it is accepted that a user / viewer may need to modify their environment (processor / browser) to suit the content.
Almost every timed text deployment based upon TTML1 is working within the presumptions / assumptions of the broadcast world, where making the content to an agreed standard and having a standard processor / decoder is the accepted practice.
I do not upgrade my TV to watch a specific program – nor should I have to ‘chase my timed text implementation around the web’ to enjoy subtitles or captions!
My personal goal is to get rid of the flexibility that you wish to permit… since I believe that flexibility has little practical merit from the perspective of the user / consumer.

Thus, for me the two crucial priorities are a mechanism to allow effective validation and a mechanism to allow effective specification (by sub-setting of TTML features and the usage of those features).

ok, stay tuned on the former; as for the latter, you need to be more specific about what you see missing if you want me to address it
I think the latter is to a degree handled by other schema mechanisms, but I believe there is also a need to connect both schemas and feature profiles in a specification to form a content profile, i.e. to define both the structure and the functionality.


I appreciate that you are tackling the generic case of an ‘unknown provenance content’ being decoded by a ‘processor with unknown (to the author) capabilities’.

that is not what i'm tackling
It’s what I am reading ;-) but that’s probably my fault…

However, deployed standards derived from TTML operate in deliberately constrained patterns, patterns that intentionally reduce the potential for mismatches between content creation and content presentation.

IMO, that is a naive pov
Possibly, but I deal in a world of naïve people who just want to consume content, and naïve people who just want to publish content that they can be certain will display on the consumers' devices. Having a system that allows a device to tell the user it can't play because feature X is not implemented is not an ideal scenario… imagine if Blu-ray discs worked in the manner that you describe. The Blu-ray analogy is a good analogy for what I believe is needed: a clear specification for the naïve guys who press the discs and for the people who make the discs… and a clear labelling scheme for the people who buy and play the discs.


RE: Now, if I author content using tts:opacity in a way that conforms to a content profile, then that says nothing whatsoever about whether a presentation processor will actually implement and use that opacity when presenting content. This is why a processor profile declaration is important. It allows me to say: only present this on a processor that understands and uses opacity. This is not a specious use case.

I cannot imagine a situation (in deployed AV broadcast workflows using TTML derived standards) where a content author would actually use a feature that might not be present in all the targeted decoders…

nothing about defining a content profile implies a decoder must support every feature potentially used by that content profile; if you are assuming the opposite, then you are presumptuous or naïve
Or I'm just trying to ensure that we get the widest range of content played on the widest range of players (which is my definition of interoperability).

the need for 'content to present' would pretty much dictate the use of the lowest common denominator of features that the author has experienced as working in the majority of decoders. This has been proven in practice. So I think the above is only half the use case; the second part of this use case would be to provide a mechanism for alternative behaviour.

i'm not sure what this means


I.e., should an author deliberately wish to use advanced features, I believe they would do so only if there was a clear 'fall-back strategy'; this would be implemented either as part of the specification, e.g. "opacity, if not supported, should be presented as fully opaque", or by putting a switch in the content along the lines of "if processor supports opacity then X, else Y".

TTML has no conditional/switch mechanism at present, though it does specify fallback semantics for each renderable style property
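
As a minimal sketch of that style of fallback (the property table below is illustrative, and treating an unsupported property as reverting to its initial value, 1.0, i.e. fully opaque, for tts:opacity, is one reading of those semantics, not a normative statement):

    # Hypothetical sketch: resolve specified styles against the set of
    # properties a processor actually implements, falling back to each
    # property's initial value for anything unsupported.
    INITIAL_VALUES = {"tts:opacity": 1.0, "tts:color": "white"}

    def resolve_styles(specified, supported):
        resolved = dict(INITIAL_VALUES)          # start from initial values
        for prop, value in specified.items():
            if prop in supported:
                resolved[prop] = value           # processor honours the property
            # else: keep the initial value, i.e. degrade gracefully
        return resolved

    # A processor that ignores tts:opacity presents fully opaque text:
    print(resolve_styles({"tts:opacity": 0.5}, supported={"tts:color"}))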

A classic example of the fall-back strategies deployed in previous timed text implementations is the Teletext mechanism for extending the basic Teletext character set to include accented characters. In this mechanism a basic-level 'base character' is transmitted, and then the extended accent character is transmitted… the extended characters have an intrinsic backspace, so advanced decoders overwrite the base character, while basic decoders simply ignore the extended characters and so continue to display the base character.
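
To make the pattern concrete, a sketch of that mechanism (the stream encoding below is invented for illustration, not actual Teletext coding):

    # Advanced decoders honour the implicit backspace and overwrite the
    # base character; basic decoders skip codes they do not understand,
    # so the base character stays on screen.
    STREAM = [("char", "e"), ("accented", "é"), ("char", "s")]

    def decode(stream, advanced):
        out = []
        for kind, ch in stream:
            if kind == "char":
                out.append(ch)
            elif kind == "accented" and advanced:
                out[-1] = ch                     # backspace + overwrite
            # basic decoder: silently ignore the extended code
        return "".join(out)

    assert decode(STREAM, advanced=True) == "és"
    assert decode(STREAM, advanced=False) == "es"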

ok, but not relevant
I find it strange that you wish to discount strategies that have been the accepted practice for timed text implementations for more years than the Internet has been in existence ;-) The way Teletext works was derived from practical experience and pragmatism… just because the constraint against easily modifying the decoder has been lifted on the Internet does not necessarily imply that users wish to follow such a path to access content.


RE: Furthermore, if a document that uses opacity does not declare (or otherwise signal) a processor profile, but only a content profile, then a process that infers the former from the latter is ambiguous (without further directives) since it is not a logical conclusion that support for opacity must be present in a processor when the use of opacity in content is optional. This is where I suggested in a previous thread the need for something like:

I see no strong use case for supporting a situation where a content profile defines a feature as optional and processors are then free to either support that feature or not.

well, that's the way it is; in TTML1, there is no such thing as a content profile anyway; in TTML2 one will be able to define both processor profiles and content profiles and then associate a document with these profiles; however, these profiles make different kinds of statements; one will be able to infer a processor profile from a content profile, but the default mapping will map an optional content feature to optional processor support, and not optional content feature to required processor support; there will be a parameter that can be used to select the latter mapping, but the former will be the default mapping;

If processing support is detached from content in the manner you describe then valid content will fail to present on valid decoders. That seems an absurd default position.
This would result in a situation where some content that conforms to the content profile would display correctly (by which I mean as intended) on some compliant processors, but not display correctly on other processors that could still be called compliant.

that's right;
Now that’s absurd.

That is not a route to ‘interoperability of content’.

you are confusing content interoperability with processing behavior interoperability

So I contend that it is ‘logical’ to conclude as a practical implementer that if a feature is optional in content I should support it in my processor, since ‘logically’ I may be expected to decode content that may include that feature.

that is one possible conclusion, but not one that matches how the Web works; Web specifications define many optional features and processors may or may not do anything with them; the purpose of the TTML1 (and following) processor profile is to allow the author to ask for more than this default

Ahh, now we are getting to the point. Yes, I understand the web works this way (currently). But it's not the only way to work. And when the world of consumer equipment meets the web, I'm betting that (at least within the scope of entertainment) the CE world will win. In fact I would argue that the strength of a certain manufacturer's products (fruit, rhymes with grapple) is based upon this CE strategy. There is no point (in an AV context) in TTML working the way the web works. Consumers and content creators / sellers just want guarantees that it will work. Providing options to authors is pointless if there is no expectation of enforcement. CEA-708 should be enough of an illustration of that.


I can however see some small utility in being able to say ‘this feature is not used in this document instance’ – since that might allow a processor to optimise itself for processing that document instance, (e.g. by not loading heavy libraries of code etc.). However, I am unconvinced that most processor implementations would dynamically adapt to each document instance.

they won't, but by distinguishing processor profile support from content profile usage, one can begin to have a concrete way of specifying requirements for decoding/processing behavior instead of making assumptions about how a processor supports optional content features


RE my closing comment “"strategy for TTML2 that requires general web access, etc." – what I meant is that any new capabilities in TTML2 that are defined to address these issues need to be non-referential. I.e. it must be possible to have content that can be validated independently of the Internet, and processors that can determine the requirements to decode any content without access to the internet.

one has to distinguish between referencing an external resource, such as an externally defined profile specification, and mandating that the external resource be fetched and used during processing; TTML1 and 2 make the former possible, but don't mandate the latter; so nothing is changing in that regard;

Further, the process of determining processing requirements should be lightweight, since the processors are in many deployed cases likely to be operating in constrained environments.

determining if a processor supports a specified processor profile is a much smaller overhead than validation against a content profile (to which you assign priority); at the lower boundary, a decoder need only evaluate a ttp:profile attribute and determine if its value lies in a list of supported processor profiles; simple, with hardly any overhead;
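
A sketch of that lower-boundary check (the supported-profile list is illustrative; the ttp:profile attribute and TT Parameter namespace are as defined in TTML1):

    # Read the ttp:profile attribute from the root <tt> element and test
    # its value against the decoder's list of supported profiles.
    import xml.etree.ElementTree as ET

    TTP = "http://www.w3.org/ns/ttml#parameter"
    SUPPORTED = {
        "http://www.w3.org/ns/ttml/profile/dfxp-presentation",
        "http://www.w3.org/ns/ttml/profile/dfxp-transformation",
    }

    def can_process(path):
        root = ET.parse(path).getroot()
        designator = root.get("{%s}profile" % TTP)
        if designator is None:
            return True   # no declared profile: disposition left to the processor/context
        return designator in SUPPORTED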


With best regards,
John


From: Glenn Adams [mailto:glenn@skynav.com]
Sent: 18 May 2014 22:46
To: John Birch
Cc: nigel.megitt@bbc.co.uk; mdolan@newtbt.com; public-tt@w3.org

Subject: Re: Draft TTML Codecs Registry - Issue-305


On Sun, May 18, 2014 at 5:39 PM, John Birch <John.Birch@screensystems.tv> wrote:
An extremely important context for content profile is for an 'application' (validator) to be able to determine if any given document conforms to a specific profile. Note this is not necessarily the same as that application being able to decode or present the document.

I've already stated that validation is a use case for content profile. However, we have yet to specify validation semantics for a TTML Content Processor, though it is on my list to add a @validation property.


In fact the 'processor A can determine if it can decode document X' debate is somewhat specious, since (at least in the context of most current TTML derived specifications) most processors should be able to safely assume that the documents they are 'asked' to decode conform to a specific 'standard', having been passed through a validation step before being presented.

The ability to decode is not as important as the ability to process the explicit/implied semantics of a particular feature/extension. All TTML processors must be able to instantiate an abstract TTML document instance, so, in that sense, every processor can decode any TTML profile. It is whether they do something with what they decode that is relevant.

It is not specious to state that a content author may wish to ensure that a processor will support (respect) the semantics of some feature. For example, take the #opacity feature, as denoted by the tts:opacity style attribute. This is an optional feature in all content profiles defined thus far. Optional in the sense that it may be used but need not be used.

Now, if I author content using tts:opacity in a way that conforms to a content profile, then that says nothing whatsoever about whether a presentation processor will actually implement and use that opacity when presenting content. This is why a processor profile declaration is important. It allows me to say: only present this on a processor that understands and uses opacity. This is not a specious use case.

Furthermore, if a document that uses opacity does not declare (or otherwise signal) a processor profile, but only a content profile, then a process that infers the former from the latter is ambiguous (without further directives) since it is not a logical conclusion that support for opacity must be present in a processor when the use of opacity in content is optional. This is where I suggested in a previous thread the need for something like:

ttp:inferProcessorProfileMethod = (loose|strict) : loose

where loose maps optional content features to optional support in processor profile, and strict maps optional in content to required in processor. The fact that such a directive is required demonstrates that content profiles and processor profiles are different and must be treated so.
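
To make the two mappings concrete, a minimal sketch (the function name and dict representation are illustrative, not from the spec):

    # "loose": optional-in-content stays optional for the processor;
    # "strict": optional-in-content is promoted to required support.
    def infer_processor_profile(content_features, method="loose"):
        inferred = {}
        for feature, disposition in content_features.items():
            if disposition == "required" or method == "strict":
                inferred[feature] = "required"
            else:
                inferred[feature] = "optional"
        return inferred

    content = {"#opacity": "optional", "#timing": "required"}
    assert infer_processor_profile(content)["#opacity"] == "optional"
    assert infer_processor_profile(content, "strict")["#opacity"] == "required"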




Why? Because typical current TTML decoders operate in constrained environments: usually restricted access to the web, tight expectations on decoding speed, and limited CPU and memory. IMHO a strategy for TTML2 that requires general web access, or anticipates building upon extensive web infrastructures and code bases, will not resolve the issues faced by current TTML implementations in Smart TVs, disc/PVR players or set-top boxes.

I'm not sure where this latter sentence is coming from. What do you refer to when you say "strategy for TTML2 that requires general web access, etc."?


Best regards,
John


From: Glenn Adams [mailto:glenn@skynav.com]
Sent: Sunday, May 18, 2014 02:39 AM GMT Standard Time
To: Nigel Megitt <nigel.megitt@bbc.co.uk>
Cc: Michael Dolan <mdolan@newtbt.com>; TTWG <public-tt@w3.org>
Subject: Re: Draft TTML Codecs Registry - Issue-305


On Fri, May 16, 2014 at 2:51 AM, Nigel Megitt <nigel.megitt@bbc.co.uk> wrote:
On 15/05/2014 23:45, "Glenn Adams" <glenn@skynav.com> wrote:

Could you cite the exact documents/sections that you are referring to by "quoted text"?

I was referring to the text from ISO/IEC 14496-12, AMD2 that Mike included in his email.

I assume you refer to:

From 14496-12, AMD2:

namespace is a null-terminated field consisting of a space-separated list, in UTF-8 characters, of
one or more XML namespaces to which the sample documents conform. When used for metadata,
this is needed for identifying its type, e.g. gBSD or AQoS [MPEG-21-7] and for decoding using XML
aware encoding mechanisms such as BiM.

schema_location is an optional null-terminated field consisting of a space-separated list, in UTF-8 characters, of zero or more URL's for XML schema(s) to which the sample document conforms. If there is one namespace and one schema, then this field shall be the URL of the one schema. If there is more than one namespace, then the syntax of this field shall adhere to that for xsi:schemaLocation attribute as defined by [XML]. When used for metadata, this is needed for decoding of the timed metadata by XML aware encoding mechanisms such as BiM.

This tells me nothing of why one would want to signal content profile or why one would want to communicate namespace usage separately (from XMLNS declarations found in the document).




Regarding

The processing behaviour may or may not be expressed in terms of TTML1-style profile features. There's no other language other than prose available for this purpose (that I know of).

If a specification defines processing semantics that must be supported in order for a processor to conform to the specification, and if that specification does not define any feature/extension, then I would firstly view that as a broken specification; however, another potential interpretation is that the specification implies an otherwise unnamed feature/extension whose feature/extension designation corresponds to the profile designation. That is, the profile designation serves as a high-level, un-subdivided designation of the set of semantics mandated by compliance with the defined profile.

Concerning 'broken', I note also that TTML1SE §3.3 [1] requires an implementation compliance statement (ICS) to support claims of compliance; it would seem reasonable to require this as an input to the registration process, or in TTML2 to weaken this requirement.

[1] http://www.w3.org/TR/ttml1/#claims



This might be a way out of this without having to have such specifications define individual, fine-grained feature/extension designations.

Yes, that would be helpful to lower the barrier to entry.


Anyway, I'm still waiting for someone to articulate a use case for signaling a content profile, or any aspect of a content profile (e.g., namespaces, schemas).

Did Mike's email including the relevant sections from 14496-12 not do this?

No, it does not. I repeat, signaling content profile can only have two purposes in the context of decoding/processing as far as I can tell:

(1) to validate incoming document, which is not yet done by any TTML processor, though we are looking at adding a @validation attribute in TTML2 that could be used to require this;

(2) to imply a processor (decoder) profile in lieu of explicitly signaling of a processor profile;

In the context of the current thread, it seems only the second of these is potentially relevant. However, I have to ask why one wouldn't simply signal a processor profile instead of using a more complex process of signaling a content profile and then having the decoder/processor infer a processor profile from that content profile.

If there are other reasons for signaling content profile (in the context of the current thread) then I haven't seen them articulated.






On Thu, May 15, 2014 at 1:28 PM, Nigel Megitt <nigel.megitt@bbc.co.uk> wrote:
Since namespaces and schemas define and constrain document contents without defining processing behaviour, the quoted text defines a content profile declaration. It isn't asking for anything concerning specific processor capabilities but is merely describing the contents of the document. The information may be used for downstream processing by context aware processors. The reference to namespace-aware compression makes clear that the mapping from whatever label scheme we choose to namespaces and schemas is important.

However it's clear that we expect the receiving system to use the information to direct its processing, as described previously.

Consider that the specification of a TTML variant x consists of the union of a content profile Cx and a description of processing behaviour Bx, which I'll express as S = C + B. The content profile shall itself reference one or more namespaces and schema locations. The processing behaviour may or may not be expressed in terms of TTML1-style profile features. There's no language other than prose available for this purpose (that I know of).

It is possible to define two specifications S1 and S2 where S1 = Cx + Bx and S2 = Cx + By, i.e. the same contents are processed with different behaviour. By the quoted text there is no need to differentiate between them from an ISO 14496 perspective. However we understand from our knowledge of the problem space that it may be useful to signal to a receiving system which behaviour set is desirable. And it may be helpful in a receiving system to differentiate between the available behaviours in order to provide the optimal experience.

Would it be contrary to the spirit of the ISO wording to assign short labels each corresponding to some Specification, and for receiving systems to be expected to dereference (using a cached lookup table!) from those labels to the namespaces and schema locations contained within that specification's content profile? This would satisfy the ISO requirements and permit us to signal additionally the processor features and behaviours. At this stage the expression of those is not our concern – just that there is a document somewhere that describes how the implementation should work.
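
As an illustrative sketch of such a cached lookup table (the labels and URLs are invented placeholders, not proposed registry entries):

    # Short registry labels dereference to the namespaces and schema
    # locations of the corresponding specification's content profile.
    REGISTRY = {
        "s1": {
            "namespaces": ["http://www.w3.org/ns/ttml"],
            "schema_locations": ["http://example.org/schemas/s1.xsd"],
        },
        "s2": {
            "namespaces": ["http://www.w3.org/ns/ttml"],
            "schema_locations": ["http://example.org/schemas/s2.xsd"],
        },
    }

    def dereference(label):
        spec = REGISTRY[label.lower()]
        return spec["namespaces"], spec["schema_locations"]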

Going back to the previous example, if a document conforms to Cx then it could be signalled either as S1 or S2 or both, and if the content provider has verified that presentation will be acceptable either way then both S1 and S2 would be declared, otherwise just one of them (or neither if there's some other Sn that also uses Cx).

With this scheme combinatorial logic wouldn't really make sense – you could infer something about unions and intersections of content profiles but since the language used to describe processor behaviours can't be mandated (okay it could in theory, but it wouldn't be accepted in practice) it wouldn't be a well defined operation. Incidentally this is in no way a critique of the effort put in by Glenn, and its outcomes, in terms of defining content and processor profiles – though it might be nice to verify that this simple expression can be expanded into that scheme should a specification writer choose to do so.

This implies that every combination of content profiles and behaviours must be considered carefully and registered as a new specification with a new label. It also implies that if a document declares conformance with a set of specifications then it must conform to every member of the set of content profiles and it may be processed according to any one of the set of processing behaviours.

The expression of that set is as described previously, where we pick our favourite delimiter out of a hat made out of ampersands.

Also: this topic was discussed in summary briefly on the call today and a new suggestion arose, that some guidance for 'reasons why the TTWG would reject an application for registration' would be helpful. When requiring combinations to be registered separately there's a greater need to ensure that the registration process is quick and painless, and this guidance would help us and those who may follow to expedite it.

Nigel


On 15/05/2014 18:00, "Michael Dolan" <mdolan@newtbt.com> wrote:

I believe the problem statement is to replace the potentially unwieldy long strings in the namespace & schema_location fields defined in 14496-12 and 14496-30 with a more compact string suitable for the DASH manifest codecs field.

From 14496-12, AMD2:

namespace is a null-terminated field consisting of a space-separated list, in UTF-8 characters, of
one or more XML namespaces to which the sample documents conform. When used for metadata,
this is needed for identifying its type, e.g. gBSD or AQoS [MPEG-21-7] and for decoding using XML
aware encoding mechanisms such as BiM.

schema_location is an optional null-terminated field consisting of a space-separated list, in UTF-8 characters, of zero or more URL's for XML schema(s) to which the sample document conforms. If there is one namespace and one schema, then this field shall be the URL of the one schema. If there is more than one namespace, then the syntax of this field shall adhere to that for xsi:schemaLocation attribute as defined by [XML]. When used for metadata, this is needed for decoding of the timed metadata by XML aware encoding mechanisms such as BiM.
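
For concreteness, a sketch of what a receiver has to do with those two fields under the quoted rules (pure illustration; it also shows why the raw strings get unwieldy in a manifest):

    # Split the space-separated lists; with multiple namespaces, pair
    # them with schema URLs per the xsi:schemaLocation convention
    # (namespace URL namespace URL ...).
    def parse_sample_entry(namespace_field, schema_location_field=""):
        namespaces = namespace_field.split()
        locations = schema_location_field.split()
        if len(namespaces) == 1:
            return {namespaces[0]: locations[0] if locations else None}
        pairs = dict(zip(locations[0::2], locations[1::2]))
        return {ns: pairs.get(ns) for ns in namespaces}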

I'm warming up to the idea of requiring that TTML content profiles be created for the combinations.

                Mike


From: Glenn Adams [mailto:glenn@skynav.com]
Sent: Thursday, May 15, 2014 9:15 AM
To: Nigel Megitt
Cc: Michael Dolan; TTWG
Subject: Re: Draft TTML Codecs Registry

My understanding from Dave was that the problem is how to answer the following method:

boolean canPlay(String contentTypeWithParameters)
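
One illustrative way a receiver might answer that, assuming the codecs parameter carries comma-separated short profile labels (the labels here are hypothetical; defining them is exactly what the draft registry is for):

    SUPPORTED_LABELS = {"ttml-p1", "ttml-p2"}   # hypothetical registry entries

    def can_play(content_type_with_parameters: str) -> bool:
        parts = content_type_with_parameters.split(";")
        for p in parts[1:]:
            name, _, value = p.strip().partition("=")
            if name == "codecs":
                labels = value.strip('"').split(",")
                # playable if at least one signalled profile is supported
                return any(l.strip() in SUPPORTED_LABELS for l in labels)
        return False

    print(can_play('application/ttml+xml;codecs="ttml-p1"'))   # True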

I have not seen any statement of a problem that relates to signaling content conformance.

As for requirements driving the ability to express a combination of profiles, we already have mechanisms (in TTML1), and will have more (in TTML2), that permit a user to characterize processing requirements by means of a combination of existing profiles. Consequently, any shorthand signaling of first-order processor support needs to be able to reproduce the expression of such combinations.

I don't buy any "it's too complex" argument thus far, primarily because nobody has stated what is (overly) complex in sufficient detail to understand if there is a problem or not.

My perception of the TTML profile mechanism is that it is easy to understand and implement, and, further, that it is a heck of a lot easier to understand and implement than XML Schemas.


On Thu, May 15, 2014 at 9:58 AM, Nigel Megitt <nigel.megitt@bbc.co.uk> wrote:
Agreed there's a gulf of understanding/expectation that we need to bridge.

Can anyone volunteer to draft a set of requirements for this functionality, in the first instance being the smallest set needed to meet the ISO specs? (Mike, I guess I'm thinking of you, following our discussion at the weekly meeting earlier)


On 15/05/2014 16:48, "Glenn Adams" <glenn@skynav.com> wrote:

i can see this subject is not going to be resolved easily as we clearly have a large gap regarding requirements; e.g., i think there are no requirements to signal content conformance, but only client processor requirements; i think we must use the TTML profile mechanism, etc

On Thursday, May 15, 2014, Michael Dolan <mdolan@newtbt.com> wrote:
Maybe "highly undesirable", but if we don't address the A + B signaling
explicitly, then profiles need to be created for all the combinitorics of
namespaces in practice. Not the end of the world, but virtually prevents the
simple signaling of 3rd party namespaces already provided by the
namespace/schemaLocation mechanism today. No I am not proposing we use that
- I am pointing out a deficiency in this proposal that we already address
today in 14496.

Anyway, we need to go through the points in my email a week ago - if not
today, then on the 29th.

        Mike

-----Original Message-----
From: David Singer [mailto:singer@mac.com]
Sent: Thursday, May 15, 2014 5:20 AM
To: Glenn Adams
Cc: TTWG
Subject: Re: Draft TTML Codecs Registry

OK

Though it will be a sub-parameter of the codecs parameter for the MP4 file type, from the point of view of TTML it's actually a profile short-name registry rather than a codecs registry, so I think we should rename it.

the values here should be usable in both
a) the profiles parameter for the TTML mime type
b) the codecs parameter for the MP4 mime type

so, also "named codecs" -> "named profiles"



I agree with Cyril that we only need a single operator here (implement one of these profiles and you're good to go), both because we don't need the complexity, and because an "implement both/all of these" operator effectively invites file authors to make up new profiles ("to process this document you have to implement both A and B"), which is (IMHO) highly undesirable.



On May 15, 2014, at 9:55, Glenn Adams <glenn@skynav.com> wrote:

> See [1].
>
> [1] https://www.w3.org/wiki/TTML/CodecsRegistry


Dave Singer

singer@mac.com






Received on Monday, 19 May 2014 12:37:04 UTC