- From: AzamatAbdoullaev <abdoul@cytanet.com.cy>
- Date: Wed, 30 Sep 2009 14:27:33 +0300
- To: <semantic-web@w3c.org>
- Message-ID: <677B25A4BC934F6CBC9734AB47EE18EE@personalpc>
DB: "This means that, in general, semantic web developers must learn to deal with ontological mismatches."

The statement is confounding and self-contradictory. As such, it is not a "semantic web" but rather a "meaningless web", where we have most succeeded :-).

Azamat Abdoullaev
http://standardontology.org

----- Original Message -----
From: Martin Hepp (UniBW)
To: David Booth
Cc: Aaron Rubinstein; semantic-web@w3c.org
Sent: Wednesday, September 30, 2009 12:34 PM
Subject: Re: Vocabulary re-use

Hi David:

Excellent points. One thing we should observe, though, is that there is strong leverage in designing your initial patterns carefully and reusing existing ontologies. It is a bit more effort for the publisher, but it saves a lot of effort for the world (and thus increases the likelihood that your data will be used and considered).

Also note that the level of detail and the precision of the conceptual choices in the ontology you use ultimately limit the quality of any later mapping. If your proprietary ontology mixes apples and oranges (e.g. events vs. tickets for events, users vs. user roles, book titles vs. book copies, etc.), then it is impossible to use that data in contexts that require the distinction. It is often easy to make the distinction at the origin, because you still have a lot of contextual information. But once it is absent from your published data, it is gone. Forever.

So, as a general guideline: additional human intelligence in choosing the patterns for exposing your data pays off.

Martin

David Booth wrote:

Aaron Rubinstein wrote:
[ . . . ]
The other part of my question is: does it matter? Can the Semantic Web support a plethora of similar but distinct vocabularies as long as applications are 'smart' enough to interpret the ontology and make inferences accordingly?

The semantic web has no choice, because there *will* be a plethora of similar but distinct vocabularies.
As Martin Hepp pointed out, this will happen because it is easier for the publishers of those vocabularies, even though it makes more work for the consumers. Furthermore, different applications have different needs: some will need finer distinctions than others. These finer distinctions may be essential to some applications, but they merely add complexity for applications that don't need them. This means that, in general, semantic web developers must learn to deal with ontological mismatches.

These questions arise, to a certain extent, out of what seems like a prevalent practice of converting existing encoding standards from certain domains -- described using XML Schemas -- into RDF using RDFS and OWL, without much awareness of existing ontologies that might suit the needs of the domain just as well. In a nutshell, is this OK or is it bad for the Semantic Web?

If those XML schemas already exist, then this sounds like a good first step to me. HOWEVER, the initial ontology you get from converting the XML schema is not likely to be the one you want to use, as it will probably reflect too many artifacts of the XML schema. In my opinion, that ontology -- which I would call the "native ontology" -- should be used only to bridge between the XML and the domain ontology that you really want to use, which should be designed according to the needs of your domain. Rules can then transform the native ontology into the domain ontology, so that XML instance data can be automatically transformed into the desired RDF (expressed in the domain ontology). The benefit of this approach over using XSLT to transform directly from XML to RDF is that the transformations can be defined at a more semantic level, so one is somewhat insulated from the idiosyncrasies of the XML.

Incidentally, this is one way that Gloze http://jena.hpl.hp.com/juc2006/proceedings/battle/paper.pdf is used. Gloze, given an XML Schema, will automatically transform XML instance data to RDF.
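[For illustration only: the rule-based bridging from a native ontology to a domain ontology could be sketched roughly as below. Triples are plain Python tuples rather than real RDF, and every vocabulary term (native:*, dom:*, ex:item42) is a hypothetical placeholder, not a term from Gloze or any actual ontology.]

```python
# Minimal sketch of "native ontology -> domain ontology" rule bridging.
# Triples are (subject, predicate, object) tuples; all names are made up.

RDF_TYPE = "rdf:type"

# Triples as a naive XML Schema -> RDF conversion might emit them:
# they mirror the shape of the XML (elements, attributes).
native_triples = {
    ("ex:item42", RDF_TYPE, "native:EventElement"),
    ("ex:item42", "native:titleAttr", "Jazz Night"),
    ("ex:item42", "native:dateAttr", "2009-09-30"),
}

# Each rule rewrites one native class or predicate into the domain
# ontology. Because the rules operate on triples (the "more semantic
# level" of the text), they are insulated from the XML's nesting and
# attribute idiosyncrasies.
CLASS_RULES = {"native:EventElement": "dom:Event"}
PREDICATE_RULES = {
    "native:titleAttr": "dom:name",
    "native:dateAttr": "dom:startDate",
}

def to_domain(triples):
    """Apply the rewrite rules, yielding triples in the domain ontology."""
    out = set()
    for s, p, o in triples:
        if p == RDF_TYPE and o in CLASS_RULES:
            out.add((s, RDF_TYPE, CLASS_RULES[o]))
        elif p in PREDICATE_RULES:
            out.add((s, PREDICATE_RULES[p], o))
        else:
            out.add((s, p, o))  # pass through anything without a rule
    return out

domain_triples = to_domain(native_triples)
```

In a real setting the same idea would be expressed with SPARQL CONSTRUCT queries or an RDF rule engine rather than hand-rolled Python, but the shape is the same: declarative mappings from native terms to domain terms.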
It can also transform from RDF to XML instance data -- potentially using a *different* XML schema. Gloze is part of the Jena suite of RDF tools.

On the other hand, if you are already an XSLT wizard anyway, there is nothing wrong with transforming directly from XML to RDF expressed in your desired domain model.

--
--------------------------------------------------------------
martin hepp
e-business & web science research group
universitaet der bundeswehr muenchen

e-mail: mhepp@computer.org
phone: +49-(0)89-6004-4217
fax: +49-(0)89-6004-4620
www: http://www.unibw.de/ebusiness/ (group)
     http://www.heppnetz.de/ (personal)
skype: mfhepp
twitter: mfhepp

Check out GoodRelations for E-Commerce on the Web of Linked Data!
=================================================================

Webcast: http://www.heppnetz.de/projects/goodrelations/webcast/
Recipe for Yahoo SearchMonkey: http://tr.im/rAbN
Talk at the Semantic Technology Conference 2009: "Semantic Web-based E-Commerce: The GoodRelations Ontology" http://tinyurl.com/semtech-hepp
Overview article on Semantic Universe: http://tinyurl.com/goodrelations-universe
Project page: http://purl.org/goodrelations/
Resources for developers: http://www.ebusiness-unibw.org/wiki/GoodRelations
Tutorial materials: CEC'09 2009 Tutorial: The Web of Data for E-Commerce: A Hands-on Introduction to the GoodRelations Ontology, RDFa, and Yahoo! SearchMonkey http://tr.im/grcec09
Received on Wednesday, 30 September 2009 11:28:19 UTC