
Saying anything about anything: Comments on van Harmelen & Fensel

From: Tim Berners-Lee <timbl@w3.org>
Date: Tue, 21 Dec 1999 12:50:44 -0500
Message-ID: <001301bf4bdb$e85f1650$e5061812@ridge.w3.org>
To: <frankh@cs.vu.nl>, <dieter.fensel@aifb.uni-karlsruhe.de>
Cc: <www-rdf-interest@w3.org>
Comments on van Harmelen & Fensel, "Practical KR for the Web"

The paper seems to illustrate a fundamental conceptual difference
between RDFS and object-oriented systems which has been addressed before,
but it's worth going over again, I guess, to make sure we are in synch.

The authors start with a lament that "proposals from the AI community for
Web-based KR languages can hardly expect wide acceptance on the Web". They
seem to take this as an unfortunate but unalterable truth, without giving
the reason. Later, they may unwittingly provide one reason. They note that,

  "in RDFS, properties are defined globally and are not encapsulated as
attributes in class definitions.  Therefore, an anthology expressed in
Ontobroker can  only be expressed in RDFS by reifying the property names
class name suffixes. This is a rather disappointing feature which ignores
all of the lesson from object-oriented modeling in the past decade or

Yes, this is a fundamental difference between RDFS and many systems which
are object-oriented in the sense that the set of properties is defined with
respect to a class. In RDFS, when a car has a color, the concept of car and
the concept of color are independent first-class objects.  This is in fact a
fundamental aspect of the semantic web. The rule that (technically)
"anyone can say anything about anything" means that properties (property
names) must be first-class objects. If the designer of the class "car"
didn't think of color as being an interesting property of cars, then it
becomes more difficult for anyone to make statements about cars being red.
This is a fundamental difference between a system designed in a tree-like
way and one designed in a web-like way. A nested tree-like design with
tree-like scoping has great properties for top-down design. So much of
computer science is based upon it that integrating these systems with
web-like systems could be a challenge, but I think they will operate inside
each other quite well.   The main problem is the conceptual hurdle of
getting out of the ways of working we are used to.
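To make the contrast concrete, here is a minimal sketch in Python (not from the paper; the class, property, and resource names are invented for illustration). An OO class fixes its attribute set at design time, while a web-like triple store lets a third party assert a new property about an existing resource:

```python
# Hypothetical sketch: class-scoped attributes vs. global,
# first-class properties.

class Car:
    """Tree-like, OO design: the class author fixes the attribute set."""
    def __init__(self, make):
        self.make = make  # "color" was never anticipated by the designer

# Web-like design: facts are (subject, property, object) triples; the
# property "color" exists independently of any class definition, so
# anyone can say anything about anything.
triples = {
    ("car42", "rdf:type", "Car"),
    ("car42", "color", "red"),  # a third party added this assertion
}

def properties_of(subject):
    """Collect every (property, object) pair asserted about a subject."""
    return {(p, o) for (s, p, o) in triples if s == subject}

print(properties_of("car42"))
```

The point of the sketch is that adding the "color" fact required no change to any class definition.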

On the Semantic Web, a type is a set of constraints. You say something has
a type, and that is an assertion which (if believed) allows people to deduce
things about it.  "Fred is a man" implies "Fred is a person". The person who
introduces the concept of Fred often does it using a type, to make a set of
generic assertions about Fred. But you can't talk about "the class of Fred".
Other systems may not find the "person" concept useful, but may note that in
their ontology Fred is an employee and hence a human, and so on.
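The deduction step above can be sketched in a few lines of Python (a toy illustration, not anything from RDFS itself; the subclass table is an assumed example ontology):

```python
# Hypothetical sketch: a type assertion plus subclass axioms lets a
# reader deduce further types, as in "Fred is a man" implies
# "Fred is a person".
subclass_of = {"Man": "Person", "Person": "Animal"}  # assumed toy ontology

def inferred_types(asserted_type):
    """Follow subclass links upward from one asserted type."""
    types = [asserted_type]
    while types[-1] in subclass_of:
        types.append(subclass_of[types[-1]])
    return types

print(inferred_types("Man"))  # -> ['Man', 'Person', 'Animal']
```

Note that the assertion is about Fred's type, and the deductions follow from it; nothing here defines "the class of Fred".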

Similarly, a class is not associated with variables.  You can't talk about
Fred "having" an age which has been inherited because he is a human, which
is an animal, which has an age. You can make some statements about "age",
such as its uniqueness (if x has age y and x has age z then y = z). This
allows you to use the phrase "the age of Fred".  However, there is no
variable in the sense that Fred is invalid unless his age is specified (as
in a document type or form), or that any occurrence of Fred must be able to
specify Fred's age (as in a public variable of an object). There is nothing
in the SWeb which talks about reserving "slots" for variables.
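The uniqueness rule, and the absence of a reserved slot, can be sketched like this (again a hypothetical Python illustration; the facts and names are invented):

```python
# Hypothetical sketch: "age" declared unique -- if x has age y and x has
# age z, then y = z -- so "the age of Fred" is well-defined once stated,
# yet no slot is reserved: a resource without an age is not invalid.
facts = [("Fred", "age", 42)]

def the_age(subject):
    """Return the unique age of a subject, or None if none is asserted."""
    ages = {o for (s, p, o) in facts if s == subject and p == "age"}
    if len(ages) > 1:
        raise ValueError("uniqueness of 'age' violated for " + subject)
    return ages.pop() if ages else None

print(the_age("Fred"))   # 42
print(the_age("Alice"))  # None -- no age asserted, and no slot to fill
```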

It is conceptual differences like this which may have prevented AI and OO
stuff from spreading in a web-like way, and may make take-up of the
technology by the web community difficult, which is where we came in.


in no official capacity

Other comments on the paper
((They follow with an analysis of ways of dealing with "semistructured"
information - I am not sure whether they mean partially well defined (i.e.
partly natural language and partly semantic) or web-like rather than
tabular. This is followed by a distinction between declarative
machine-accessible information and procedural information for extracting
data which is not machine-accessible.  This is a bit of a confusion, as of
course both procedural and declarative languages can be used in semantically
well-defined or indeed in fuzzy natural-language situations.  But the
essential point (for me) is that when declarative filters are used, the
rules themselves are part of the semantic web, and open to analysis like
any other semantic data.))

((There is (in 2.1) a frightful suggestion to change the HTML META tag's
meaning so that the CONTENT attribute becomes a (local) URI for the content
rather than the content itself, in an attempt to address the "problem" of
reuse of the same text in human- and machine-destined languages.  I
currently think the way to do this is to make the human-readable page a
function - e.g. in XSLT - of the semantic data.))
Received on Tuesday, 21 December 1999 12:53:37 UTC
