- From: Frank Carvalho <dko4342@vip.cybercity.dk>
- Date: Tue, 04 Oct 2011 23:38:37 +0200
- To: Michael F Uschold <uschold@gmail.com>
- Cc: semantic-web@w3.org
Hi Michael,

I will try to explain the rationale behind this.

First of all, when you build a SOA there are many things to be taken into account. You may be lucky enough to be able to pick your favourite modelling tool for the requirement specifications. In our case System Architect was already a household system when we started, so we stuck to that. The problem is that the requirement specification is going to consist of UML Class diagrams, Use Cases, service definitions, BPMN diagrams, even service descriptions in MS Word, plus a bunch of other stuff. Furthermore, other modelling universes might be introduced. But all these areas are interconnected. The BPMN diagram may have an activity which is expanded into a Use Case. This Use Case may have sub-UCs and Use Case Steps, which again point to service definitions. Then, when we define services, we enter the XML WSDL and XSD world. When we develop and document the service we use javadoc, source code and more MS Word.

The next problem is that the set of universes to include as metadata is invariably going to change over time. The next IT system may be built in .NET, and its documentation requires a different type of metadata. And anyway, each IT house documents its efforts in different ways. And then some bright head fancies rule bases, and introduces rules in the requirement specification on a par with BPMN and Use Cases. All these domains ARE interconnected, but they are semantically very different areas. Each has its own semantic understanding of its domain, and very little notion of the semantics of its neighbours.

At some point I started to examine formal OWL ontologies for UML Class diagrams, to see if that was useful. I found some articles describing a formal method of transforming Class diagrams into an OWL ontology. It was very nice, but it struck me that in order to do so, you simply had to make some tough decisions about the meaning of simple things such as inheritance, multiplicity etc. It is in the very nature of establishing formal semantics that you rule out all alternative interpretations. But what if other interpretations are equally valid according to the original specifications? The problem in this area seemed to be that the original UML Class diagram did not really have strict semantics at all, and a lot of things are left open to interpretation. This is a big problem, because UML diagrams are meant as tools for communication, and people will interpret these things in different ways. They will also be taught to use UML in different ways, and are not going to accept a strict, almost religious, "one true" interpretation of the semantics. Scale this problem up to all the other areas of information, and we will not be done with the big master ontology until SOA is a thing of the distant past.

So my conclusion is that any general ontology that tries to encompass all these different types of information is doomed to fail. Instead I say: live and let live. Let each domain set its own rules of interpretation, and accept those. The focus should be on the connection points between semantic islands. Make sure that one subgraph shares objects/subjects with another subgraph, so that the interconnectivity is established, and then make queries in each area on its own terms. It is true that to some extent you lose the capability of very general queries that span several "islands", but you gain a practical flexibility in that you can easily plug in new areas of knowledge, and change the understanding of the ones you already have.
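To give a concrete flavour of that last point, here is a minimal sketch of a cross-island query. The namespaces and predicate names (bpmn:, uc:, svc:, expandsTo, hasStep, invokes) are invented for the example and are not the ones we actually use; the only point is the pattern of joining two islands on a resource they happen to share.

    PREFIX bpmn: <http://example.org/bpmn#>
    PREFIX uc:   <http://example.org/usecase#>
    PREFIX svc:  <http://example.org/service#>

    # Find the services reached from a BPMN activity, walking through the
    # Use Case that the BPMN island and the Use Case island share as a node.
    SELECT ?activity ?useCase ?service
    WHERE {
      ?activity a bpmn:Activity ;
                bpmn:expandsTo ?useCase .   # BPMN island points at the shared node
      ?useCase  a uc:UseCase ;
                uc:hasStep ?step .          # Use Case island, on its own terms
      ?step     uc:invokes ?service .
      ?service  a svc:ServiceDefinition .   # service definition island
    }

Each island is queried with its own predicates; the only thing tying them together is the shared ?useCase node.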
I suppose that if you really insist on a general high-level ontology, this could probably be superimposed upon the metadatabase, but I would consider that an added bonus.

The purpose of our metadatabase is to give an "imprint" of those aspects of the sources that are important to us. And for us, "important" means those things that interconnect, and where there will be an impact if you change something. Therefore all predicates bear a relationship to the sources they came from, and are rooted in the logic of those sources. To call these "ontologies" may even be too strong a word, as I am really talking about sets of fixed predicates of local relevance. There is nothing wrong with using a full-blown OWL spec for an area, but it is just not something we have done so far. Terminology is therefore strongly local, but that is not bad at all. A query that spans several domains will be expressed with predicates that combine the features of these different domains, and such queries really make sense when you read them.

It is similar to, but not exactly the same as, using ontologies as a database schema. "Database schema" kind of suggests a design-first principle, but in fact this is exactly the opposite. The database schema is open-ended and not fixed, and we may, at any time we like, backward engineer the current database meta-structure with a few simple queries (a sketch of such a query follows below the quoted message). A sort of "Meta-structure of the day".

I hope this makes sense and explains a little better what we are trying to do.

Best
/Frank

> This is very intriguing, thanks for the update. I'm curious about the
> apparent lack of use for 'fixed' ontologies. I get that in a highly
> dynamic environment, having fixed metadata (whether as an ontology or
> not) is a great hindrance. What I'm curious about that you do not
> discuss, is what the role is for the ontologies you do have. One
> obvious possibility is as metadata for the data, i.e. ontology as data
> schema. You say you have many ontologies, do you need to map them to
> one another to do any kind of data integration/interoperability to get
> over problems of different terminology? In that context a single (not
> necessarily fixed) ontology can act as a lingua franca allowing the
> multiple ontologies to cross-reference and translate among each other.
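For completeness, the "Meta-structure of the day" query I have in mind is nothing more exotic than the sketch below. It is plain SPARQL over whatever triples happen to be in the store; nothing in it is specific to our setup.

    # List every predicate currently in use, together with the classes of
    # the subjects and objects it connects - an on-demand "schema" snapshot.
    SELECT DISTINCT ?p ?subjectType ?objectType
    WHERE {
      ?s ?p ?o .
      OPTIONAL { ?s a ?subjectType . }
      OPTIONAL { ?o a ?objectType . }
    }
    ORDER BY ?p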
Received on Tuesday, 4 October 2011 21:39:02 UTC