- From: Uschold, Michael F <michael.f.uschold@boeing.com>
- Date: Sat, 26 Feb 2005 04:32:12 -0800
- To: <public-swbp-wg@w3.org>
Overall: Thanks to Phil for getting something out the door for all of us to look at. There are plenty of interesting ideas and points, which IMHO need to be brought into much clearer focus. When I read it, I hardly recognized it from the prior notes and discussions. Something seemed missing. The note says little, if anything, about 'ontology-driven architectures'.

Perhaps we can agree on such things as:

* the intended audience for this note
* what are the specific objectives for a given reader?
* what specific points should the reader go away with?
* can we identify some specific needs, requirements, and potential benefits that persons in the intended audience will have?
* can we structure the note according to how to meet these requirements and make the benefits happen?
* an outline of the note, with a sentence or two describing what we will say in each major section
* are we going to give some specific advice?
* are we going to illustrate any general points with examples?
* how does the note relate to OWL, specifically?

Once we get agreement on these things, writing the note should be much easier. As it stands, the note is very generic, abstract, and high level; it lacks any particular relevance to OWL. Also, there are lots of very long sentences that could be split up to add clarity.

Here are some specific comments:

Frequently, it is mentioned that formal specifications offer the possibility of a complete lack of ambiguity. IMHO, this is very misleading, and a common misunderstanding. We should not promulgate it. Here is a paragraph I recently wrote in another context addressing this:

It is often blithely assumed that representing the semantics explicitly and formally removes all ambiguity in meaning, thus enabling machines to be programmed to automatically discover the meaning and behave appropriately. Such claims are very misleading, or just plain false. For example, terms defined in a logic-based representation language with a model-theoretic semantics are highly ambiguous (in that there are many possible models). Adding more axioms can rule out more and more models; however, for many concepts, such as 'human being' or 'car', there are fuzzy boundaries. It will never be possible to add enough axioms to stamp out all ambiguity. Even if you could, it is not likely to be useful to have many dozens or hundreds of axioms for each concept - how would they be used? What we can say is that formal representations HELP to remove ambiguity. (See the small sketch at the end of this message.)

INTRODUCTION: No clear objectives; disjointed.

BACKGROUND 2.1: This was jarring; there needs to be a gentler introduction to the overall content of the note, and an outline. Then each piece will naturally follow on, and the reader will expect it. Perhaps we can use the "say what you are going to say; say it; say what you said" structure for the note.

2.2: by 'tooling use' do you mean use of tools?

3: Need an introductory spiel to talk about the three ideas in general, then elaborate. Organize the benefits. For example: ease of formal specification supports rigorous classification and identification; it also supports knowledge assertions and inferencing. If we number the bullets from 1-6, the following links exist:

  4 -> 2 -> 5
  2 -> 1
  4 -> 3 -> 1

where the semantics of the link is: supports, facilitates, helps bring about.

3.1: too vague, need examples.

3.3: too wordy, abstract and jargony. What is the specific importance/relevance of the relational model in this context? Need examples.

4: what is the intended message for this section?

5. Issues: these are the standard ones.
Do we have anything specific to say about them for our particular purposes?
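
To make the ambiguity point concrete, here is a minimal sketch of the kind of thing I mean. It is purely illustrative and not taken from Phil's draft: the ex: names (Car, Vehicle, Person, hasPart, Engine) are a made-up example.org vocabulary, and I use Python with rdflib only as a convenient way to write the axioms down.

```python
# Purely illustrative sketch (not from the note): a few OWL axioms about a
# hypothetical ex:Car class, built with rdflib. Each axiom rules out some
# interpretations, yet many distinct models still satisfy all of them, so the
# axioms narrow the meaning of 'Car' without eliminating ambiguity.
from rdflib import Graph, Namespace, BNode
from rdflib.namespace import RDF, RDFS, OWL

EX = Namespace("http://example.org/vehicles#")  # hypothetical namespace
g = Graph()
g.bind("ex", EX)
g.bind("owl", OWL)

# Axiom 1: ex:Car is a class (satisfied by almost any interpretation).
g.add((EX.Car, RDF.type, OWL.Class))

# Axiom 2: every Car is a Vehicle -- rules out models in which something is
# a car but not a vehicle.
g.add((EX.Car, RDFS.subClassOf, EX.Vehicle))

# Axiom 3: no Car is a Person -- rules out still more models.
g.add((EX.Car, OWL.disjointWith, EX.Person))

# Axiom 4: every Car has some part that is an Engine -- narrower again, but
# silent about wheels, size, colour, ownership, and every other fuzzy edge.
r = BNode()
g.add((r, RDF.type, OWL.Restriction))
g.add((r, OWL.onProperty, EX.hasPart))
g.add((r, OWL.someValuesFrom, EX.Engine))
g.add((EX.Car, RDFS.subClassOf, r))

print(g.serialize(format="turtle"))
```

Even with all four axioms, an interpretation in which 'cars' are golf carts with toy engines still satisfies the ontology; that residual ambiguity is exactly what the quoted paragraph is about, and it is why we should say that formal representations help remove ambiguity rather than eliminate it.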