
RE: What is needed to move forward

From: Joakim Söderberg <joakim.soderberg@ericsson.com>
Date: Tue, 10 Feb 2009 13:49:16 +0100
Message-ID: <4055256AED9D224D9442B19BF1C4C490033D01E1@esealmw118.eemea.ericsson.se>
To: "Felix Sasaki" <fsasaki@w3.org>, Raphaël Troncy <Raphael.Troncy@cwi.nl>
Cc: "Pierre-Antoine Champin" <pchampin@liris.cnrs.fr>, <public-media-annotation@w3.org>

Before going to the different communities, we could also try to develop the relations (i.e. what they mean): equivalent, similar, sub-class-of, etc.
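As a sketch of what such a relation set might look like (an illustration only; the names are borrowed from SKOS/OWL as placeholders, not a WG decision):

```python
# Candidate relations between properties of two metadata formats.
# The URIs are SKOS/OWL terms used as placeholders, not an agreed vocabulary.
RELATIONS = {
    "equivalent":    "skos:exactMatch",   # A means the same as B
    "similar":       "skos:closeMatch",   # A and B largely overlap
    "more-specific": "skos:broadMatch",   # A is narrower than B
    "related":       "skos:related",      # A is somehow related to B
    "disjoint":      "owl:disjointWith",  # A has nothing to do with B
}

for name, uri in sorted(RELATIONS.items()):
    print(f"{name:14} -> {uri}")
```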


-----Original Message-----
From: public-media-annotation-request@w3.org [mailto:public-media-annotation-request@w3.org] On Behalf Of Felix Sasaki
Sent: 10 February 2009 01:08
To: Raphaël Troncy
Cc: Pierre-Antoine Champin; public-media-annotation@w3.org
Subject: Re: What is needed to move forward

Raphaël Troncy wrote:
> Dear All,
> [snip]
>>> But I do think that, without such a formal framework for defining our
>>> ontology,
>>> we will fail to "help developers with the lack of syntactic and 
>>> semantic
>>> interoperability" (from the charter).
>> Here I fundamentally disagree. I think that the deliverable from the
>> metadata working group
>> http://www.metadataworkinggroup.org/pdf/mwg_guidance.pdf
> Hum, I think all this discussion, though very interesting, crystallized 
> around very polysemic terms that are actually not relevant. I mean formal 
> vs. non-formal: we can argue for hours about where their boundaries lie, 
> and it will not solve our problem.
> Felix, there is nothing less or more formal in the work from the 
> metadata working group than what PA is proposing, so there is no need 
> to oppose these views. Thinking more about the SWCG telecon last 
> Friday, I think there are also some misunderstandings about what the SW 
> side of this group (if any) is proposing. Let me clarify ...
> It seems to me that the basic problem we have is that we have a 
> big table containing mappings between various formats. What do these 
> mappings mean? Sometimes that property A is the same as property B. 
> Sometimes that A is a bit more specific than B. Sometimes that A has 
> nothing to do with B (disjoint). Sometimes that A is somehow related 
> to B but we cannot really detail this relationship further ... Is 
> there something else?

Yes. The main problem is that I think our main work item should be to go 
through these mappings, check with the communities who are responsible 
for the standards whether they are correct, and try to get consensus on 
the mappings. I do not want to be specific to SW or a different 
non-prose description - see req 11.

My concern with being specific to SW or a different non-prose 
description (be it XML or whatever) is that it will make it hard to 
get access to some communities (e.g. web developers) who do not embrace 
that way of formal description. And I see a difference between the 
metadata WG deliverable and other approaches we are discussing: for the 
metadata WG deliverable the knowledge about the formal framework is in 
the deliverable itself - you do not need additional knowledge about the 
formal language used.

I have no problem at all with having a non-normative RDF-based 
description once we have a mapping table that has been carefully checked 
with people inside and outside the WG. That is also what Harry Halpin 
proposed in the SW CG call, IIRC.

> Your prose description will say exactly that, right? PA and others 
> argue, I think, that all these sentences could be exactly encoded 
> using the RDF model (and its sublanguages) and could perfectly reflect 
> exactly the same level of fuzziness in the meaning of the association 
> between A and B (e.g. skos:related is just a skos:semanticRelation!)
> All these statements could be made in the MAWG ontology. It will not 
> change one iota of the semantic commitment of any of these 
> standards, but it will provide great interoperability. What is your 
> specific concern with this approach?

That I cannot estimate the effort for this without an exemplification, 
and do not want the WG to commit for this work item before the main work 
item (carefully checking the mapping with people inside / outside the 
WG) is done.
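For concreteness, a toy exemplification of the kind being discussed might look like the following (the mapping rows, property names, and relation choices are invented for illustration, not taken from the WG table):

```python
# Hypothetical mapping-table rows: (property A, relation kind, property B).
MAPPINGS = [
    ("dc:title",      "equivalent",    "xmp:Title"),
    ("exif:DateTime", "more-specific", "dc:date"),
    ("iptc:Keywords", "related",       "dc:subject"),
]

# Relation kinds rendered with SKOS terms (a placeholder choice, not WG policy).
KIND_TO_PREDICATE = {
    "equivalent":    "skos:exactMatch",
    "more-specific": "skos:broadMatch",
    "related":       "skos:related",
}

def to_triples(mappings):
    """Turn mapping-table rows into Turtle-like triple strings."""
    return [f"{a} {KIND_TO_PREDICATE[kind]} {b} ." for a, kind, b in mappings]

for line in to_triples(MAPPINGS):
    print(line)
```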

> Regarding Jean-Pierre's questions:
>> [
>> - what is the role of structured XML schemas (what do they do that e.g.
>> RDF/OWL doesn't, or does less adequately, like cardinality, typing, or ID
>> management)?
> XML Schema defines types whose sole purpose is to be re-used. You 
> can re-use previously defined types in completely different contexts in 
> XML Schema, since they provide a convenient bag of XML elements put 
> together. MPEG-7 uses this feature a lot. As a result, you have the 
> same type used with two very different meanings, resulting in 
> interoperability problems at the processing level.
> OWL/RDF defines the semantic meaning of the objects. You will not use 
> the same concept simply because two things happen to have similar properties.
>> - if I go for an RDF/OWL model, what is its role? How different is it 
>> from a
>> dumb RDF description? (Why is it NOT an RDFisation of existing XML schemas? 
>> What makes its value, and who will integrate it to exploit metadata instances?
>> What would search engines do with it?)
> You can RDFise XML Schemas. But in doing so, you will necessarily base 
> this transformation on an implicit OWL/RDFS/SKOS model. So why not 
> make it explicit?
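A toy sketch of what "RDFising" an XML instance could mean in practice: extracting element/value pairs and emitting them as triples under an assumed vocabulary (the instance document, the `ex:` prefix, and the blank-node subject below are all invented for illustration):

```python
import xml.etree.ElementTree as ET

# A toy metadata instance; the element names are invented for illustration.
XML = """<photo>
  <title>Sunset</title>
  <creator>Alice</creator>
</photo>"""

def rdfise(xml_text, subject="_:photo1", prefix="ex:"):
    """Lift child elements of the root into (subject, predicate, literal) triples."""
    root = ET.fromstring(xml_text)
    return [(subject, prefix + child.tag, child.text) for child in root]

for s, p, o in rdfise(XML):
    print(f'{s} {p} "{o}" .')
```

The implicit model (which element becomes which predicate, what the subject is) is exactly the part that would benefit from being made explicit.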
>> - How do I generate instances? From where (transformation from 
>> structured
>> metadata instances or from a database)? What are the tools to 
>> generate valid
>> templates from complex models?
> There are plenty. The ESW wiki is a good start.
> The Triplify method, conference series, tools, and tutorials are also 
> valuable resources.
> W3C might launch an RDB2RDF WG after the XG of the same name.
>> Which concludes with the statement "Quite a lot of work to be done" for
>> the formalization. I very much doubt that it is worth the effort if
>> nobody comes up with a simple example to demonstrate its value.
> I don't see the effort ... Write the prose; it can be translated into 
> RDF in a few minutes.

Could you or somebody else do that for the mapping table? I had asked 
several times for even toy examples of that, but it seems they are hard 
to produce ...

> I would rather reverse the problem. If the RDF cannot be written in a 
> few minutes, it means that serious misinterpretation of your prose 
> description is possible. 

It can also mean that there are parts of the prose description which are 
hard to put into RDF. Let me give you examples from the metadata WG:
1) How would you formalize Figure 5 on p. 23 (Reconciling properties 
between EXIF and XMP)? Or the read guidance for IPTC-IIM, Fig. 6 on p. 25?
2) How would you formalize the "Text Encoding" considerations (only the 
"read" part), p. 30?
3) How would you implement requirements for reading metadata on time 
zone handling like p. 31
"A Consumer MUST NOT arbitrarily add a time zone. E.g. when importing Exif
DateTimeOriginal to XMP (xmp:CreateDate), use a zone-less form for the
corresponding XMP value."
4) How would you implement a requirement on keywords like
"IPTC Keywords is mapped to XMP (dc:subject); IPTC Keywords can be
repeated, each mapping to one of the elements in the XMP (dc:subject)"?
5) How would you implement a requirement which relies on differences of 
values, like p. 33:
"When both Exif ImageDescription and
UserComment are available and differ from the XMP"?
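To make questions 3) and 4) concrete, here is a rough sketch of what such read rules could look like as code (the function names and the exact Exif date format handling are assumptions, not taken from the metadata WG deliverable):

```python
from datetime import datetime

def exif_to_xmp_create_date(exif_value):
    """Convert Exif DateTimeOriginal ("YYYY:MM:DD HH:MM:SS") to a zone-less
    XMP date-time form, without arbitrarily adding a time zone (question 3)."""
    dt = datetime.strptime(exif_value, "%Y:%m:%d %H:%M:%S")
    return dt.strftime("%Y-%m-%dT%H:%M:%S")  # deliberately no zone suffix

def iptc_keywords_to_dc_subject(keywords):
    """Map repeated IPTC Keywords to elements of the XMP dc:subject bag,
    one element per keyword (question 4)."""
    return list(keywords)

print(exif_to_xmp_create_date("2009:02:10 13:49:16"))
print(iptc_keywords_to_dc_subject(["metadata", "mapping"]))
```

Whether rules of this procedural kind can be captured declaratively in RDF, rather than in code, is exactly the open question.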

The simplest answer is of course "These are all low-level API questions 
and out of scope for us". But I think that, except for 2), we will lose a 
lot of value in our work if we do not consider such questions.

> My conclusion will be that you will not have solved the 
> interoperability problem, but will perhaps have made it worse by adding 
> another vocabulary ...

My experience in standardization may not be long enough ... but my 
impression is that the means or "framework" (prose, formal language a, 
b, c ...) is not important for the quality of modeling a domain. I do 
understand that the means is important for using the outcome of the 
modeling in a certain community, e.g. SW. Nevertheless, whether we will 
be successful or not will not be decided by a magic tool from whatever 
technology, but by the quality of our analysis and by the effort we put 
into outreach to all communities, whether they embrace SW or not.

>> I would rather like to spend time on what Jean-Pierre proposed:
>> "I would therefore suggest that we dedicate part of the next physical
>> meeting
>> going through these issues asking everyone of us to come with a
>> presentation. "
> +1

Cool! If I understand your +1 correctly, we have agreement that the next 
steps (from Jean-Pierre's mail) are:
- I have the equivalent with the EBUCore (close to the PBCore), which is a
sort of refinement of DC that our XMP matrix proposes.
- The IPTC is doing exactly the same work on XMP (less surprisingly)
- The matrix itself and how mappings have been made would be worth going
through term by term to check its validity.
I would therefore suggest that we dedicate part of the next physical meeting
to going through these issues, asking every one of us to come with a
presentation.

After these are done I am more than happy to go back and discuss "should 
we use prose, formalization a, b, c, ...".


> Best regards.
>   Raphaël
Received on Tuesday, 10 February 2009 12:50:32 UTC
