- From: Felix Sasaki <fsasaki@w3.org>
- Date: Fri, 06 Feb 2009 12:27:45 +0900
- To: Yves Raimond <yves.raimond@bbc.co.uk>
- CC: Michael Hausenblas <michael.hausenblas@deri.org>, public-media-annotation@w3.org, public-media-fragment@w3.org
Hello Yves,

many thanks for your description of FRBR. I am aware of its usage goals in the library and archives world, though I do not know of actual implementations of it in the area of video. By implementations I mean applications which make use of e.g. the distinction of content and item which you described below, e.g. for search. Do you know of any of these we could look at?

Note also that we have no consensus to take the related requirement into account:
http://www.w3.org/TR/2009/WD-media-annot-reqs-20090119/#req-r07
which, by the way, cites FRBR. For the use case of "Audiovisual archive as a Cultural Heritage Institution"
http://www.w3.org/TR/2009/WD-media-annot-reqs-20090119/#uc-cultural-heritage-institutions
this requirement is a crucial one, but a) we may not be able to implement this use case in the given amount of time - according to our charter we should be finished in about 10 months ..., and b) with several abstraction layers we face the challenge of losing simplicity for people who want to implement simple applications, as described at
http://www.w3.org/2008/10/24-mediaann-minutes.html#item01
or as exemplified at
http://www.w3.org/2008/Talks/video-capgemini/#(16)
http://www.w3.org/2008/03/metadata_demo
http://www.w3.org/2008/03/meta_functions.js

Regards,

Felix

Yves Raimond wrote:
> Hello!
>
> On Thu, Feb 5, 2009 at 1:43 AM, Felix Sasaki <fsasaki@w3.org> wrote:
>
>> Hello Michael,
>>
>> many thanks for the follow-up! Some replies below.
>>
>> Michael Hausenblas wrote:
>>
>>> Felix,
>>>
>>> Most of the stuff seems sorted, thanks. Remaining points inline:
>>>
>>>> I think that the paragraph
>>>> "An important aspect of the above figure is that everything visualized
>>>> above the API is left to applications, like: languages for simple or
>>>> complex queries, analysis of user preferences (like "preferring movies
>>>> with actor X and suitable for children"), or other mechanisms for
>>>> accessing metadata. The ontology and the API provide merely a basic,
>>>> simple means of interoperability for such applications."
>>>> tries to answer some of your questions.
>>>
>>> Some, yes ;)
>>>
>>> Seriously, I *think* it would be good to have the ontology as the primary
>>> model and derive the API from it (automagically?) if possible. I must admit
>>> that I still didn't entirely grok how these two things play together. Assume
>>> for a second that I'm a total noob - how'd you explain that in some simple
>>> language?
>>
>> The ontology describes relations between properties in existing formats.
>> As an example of the relation description, see the table at
>> http://dev.w3.org/cvsweb/~checkout~/2008/video/mediaann/mediaont-1.0/mapping_table_common.xls?rev=1.3&content-type=text/plain
>> This table (and our working group) has put XMP into focus, see the
>> leftmost column. That means that all other formats are related to XMP as
>> much as possible.
>>
>> The ontology which we will produce may be
>> 1) just this table, in a more readable and verified version, "verified"
>> meaning: checking with users and implementers of the format and XMP
>> specialists whether our description of the relations is appropriate
>> 2) an ontology using a formal language like RDF, or an RDF-based
>> vocabulary (like SKOS), an XML-based format with an additional formal
>> semantics, some other language, etc.
>>
>> Currently we have no consensus in the Working Group whether we should do
>> only 1), only 2), both 1) and 2) and make 2) the normative version, both
>> 1) and 2) and make 1) the normative version, etc. This is especially my
>> fault ;) , since I have the use case of a client-side API, e.g.
>> JavaScript in a browser, in mind, which implements the mappings between
>> formats. Such an API can be built easily *by hand*, that is, by reading
>> the prose in the table from 1), but IMO it cannot be expected that such
>> an API would process RDF or another formal language. Or to put it
>> differently: we have different user communities with different usage
>> scenarios for the ontology in mind, and it is hard to find a middle
>> ground. The automatic derivation of the API sounds interesting in
>> theory, but I have a hard time imagining it in practice.
>>
>>>>> + Regarding '6.7 Requirement r07: Introducing several abstraction levels in
>>>>> the ontology' I'd say this is an absolute must.
>>>>
>>>> Do you have any existing implementation we could look at to be able to
>>>> judge the effort of this?
>>>
>>> Well, yes, I guess so, see [1] and [2]; it's from the audio domain and the
>>> chap behind it, Yves Raimond, is lurking around here as well, so he may be
>>> able to chip in ;)
>>
>> It would be great to get more information about this. I have a hard time
>> grasping the abstraction layers, and understanding how one can make use
>> of them in [2]. Some explanation would be really helpful.
>
> Well, [2] is perhaps not the clearest reference on FRBR :-)
> Anyway, to summarise briefly: when annotating media, you can't simply
> consider the actual multimedia item, you need to consider more
> abstraction layers than that. The first obvious layering is the
> distinction between the content (the actual signal) and the item (the
> CD on my shelf, the MP3 on my hard drive). You may want to describe
> the content without having access to the actual item, or you may want
> to describe the content once for many different items. These two
> layers are absolutely necessary in a multimedia annotation context,
> IMHO.
>
> FRBR (on which the Music Ontology is (loosely) based) goes a bit
> beyond that. It defines four abstraction layers: Work (J. S. Bach's
> Six suites for unaccompanied cello), Expression (performance by Janos
> Starker in 1963), Manifestation (recordings released on 33 1/3 rpm
> sound discs in 1965 by Mercury), and Item (my FLAC rip of that disc).
>
> FRBR in a video context is a bit trickier, but the distinction between
> content and item should at least *really* be there.
>
> Kind regards,
> y
>
>>> Mostly I'd recommend to focus on FRBR [3], but I guess the real expert is
>>> actually Yves. Ah, I'll CC him and see what happens ...
>>>
>>>>> + the TOC is not well-formatted, although pubrule-checker [2] seems not to
>>>>> complain - rather use <ol> and <li>
>>>>
>>>> mm ... I checked
>>>> http://validator.w3.org/check?uri=http://www.w3.org/TR/2009/WD-media-annot-reqs-20090119/
>>>> and did not see any problems. Could you point me to the markup part
>>>> which you think has a problem?
>>>
>>> Well, true. As I said, it's perfectly *valid*; it's about the markup you are
>>> using (a list rather than <p> + <br/>) ...
>>
>> Ah, now I understand. Many thanks for checking, we will fix that for the
>> next publication.
>>
>>>> I did not see any comments on the requirements, which I think are the
>>>> most important "message" of the WD.
>>>> Do you think these need a revision or are stable?
>>>
>>> Seems pretty stable, besides my comments ;)
>>
>> Thanks. So let me phrase this as a question: except that you regard r07
>> as important ("several abstraction layers"), do you or somebody else
>> from the Media Fragments Working Group think there are other
>> requirements we should take into account? Could you reply with "no" on
>> behalf of your Working Group?
>>
>> It would also be great to get feedback from you about our conversation:
>>
>> [
>>> If you can't talk about the
>>> different abstraction layers, I guess the effort is pretty worthless.
>>
>> At the TPAC meeting in October we had a presentation from a video search
>> engine with not more than *five*, "flat" properties, see
>> http://www.w3.org/2008/10/24-mediaann-minutes.html#item01
>> I think we saw a metadata mapping which was very useful and worthwhile, so
>> I would disagree with your statement above.
>> ]
>>
>> that is, your feedback about the example of a simple approach I
>> mentioned. Although you stated that without different abstraction layers
>> you regard the effort as worthless, there seem to be even rather large
>> applications which work without abstraction layers.
>>
>> Regards,
>>
>> Felix
>>
>>> Cheers,
>>> Michael
>>>
>>> [1] http://wiki.musicontology.com/index.php/Structural_annotations_of_%22Can%27t_buy_me_love%22_by_the_Beatles
>>> [2] http://dbtune.org/henry/
>>> [3] http://www.loc.gov/cds/downloads/FRBR.PDF
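
To make the four FRBR layers Yves describes above concrete, here is a minimal TypeScript sketch of Work, Expression, Manifestation, and Item using his Bach example; the interface and field names are illustrative assumptions, not FRBR's or the Music Ontology's actual terms.

  // Hypothetical model of the four FRBR abstraction layers (names assumed).
  interface Work { title: string; creator: string; }
  interface Expression { realizes: Work; performer: string; year: number; }
  interface Manifestation { embodies: Expression; carrier: string; publisher: string; year: number; }
  interface Item { exemplifies: Manifestation; location: string; }

  const suites: Work = { title: "Six suites for unaccompanied cello", creator: "J. S. Bach" };
  const starker: Expression = { realizes: suites, performer: "Janos Starker", year: 1963 };
  const mercuryDisc: Manifestation = { embodies: starker, carrier: "33 1/3 rpm sound disc", publisher: "Mercury", year: 1965 };
  const myRip: Item = { exemplifies: mercuryDisc, location: "/music/bach/cello-suites.flac" };

  // An annotation attached to the Expression (the content) applies to every
  // Item derived from it, which is the point of separating content from item.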
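
Similarly, the hand-written client-side mapping API Felix mentions (built by reading the prose of the table from option 1) might look roughly like the following sketch; the format identifiers and field names are assumptions for illustration only, not entries from the actual mapping table.

  // Hypothetical mapping from a few source-format fields to a small pivot
  // vocabulary (XMP-oriented, as in the table's leftmost column). All names
  // here are invented for illustration.
  type PivotProperty = "title" | "creator" | "date";

  const fieldMap: Record<string, Partial<Record<string, PivotProperty>>> = {
    "format-a": { headline: "title", author: "creator", created: "date" },
    "format-b": { name: "title", artist: "creator", releaseDate: "date" },
  };

  function toPivot(format: string, metadata: Record<string, string>): Partial<Record<PivotProperty, string>> {
    const mapping = fieldMap[format] ?? {};
    const result: Partial<Record<PivotProperty, string>> = {};
    for (const [field, value] of Object.entries(metadata)) {
      const target = mapping[field];
      if (target !== undefined) {
        result[target] = value; // copy the value under the pivot property name
      }
    }
    return result;
  }

  // toPivot("format-b", { name: "Can't Buy Me Love", artist: "The Beatles" })
  // returns { title: "Can't Buy Me Love", creator: "The Beatles" }

A table-driven function of this kind can be written by hand from the prose mapping, without requiring the client to process RDF or another formal language.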
Received on Friday, 6 February 2009 03:28:36 UTC