
Re: FAQ: stratified class hierarchies vs. RDFS

From: Tim Berners-Lee <timbl@w3.org>
Date: Thu, 27 Jun 2002 01:02:22 -0400
Cc: Dan Connolly <connolly@w3.org>, www-rdf-comments@w3.org, "R.V. Guha" <guha@guha.com>
To: pat hayes <phayes@mail.coginst.uwf.edu>
Message-Id: <11277EFE-898B-11D6-9F17-000393914268@w3.org>

On Tuesday, June 25, 2002, at 02:20 AM, pat hayes wrote:

>> Guha,
>>
>> Responding completely intuitively without a set of proofs to match,
>>
>> It seems that the fundamental difference between classes and sets
>
> Please, there is no fundamental difference between classes and sets. 
> This is why I reacted to Guha's message. This isn't the right way to 
> talk (or think), because it doesn't make sense. Classes and sets 
> aren't like cows and sheep. Classes *are* sets (well, class extensions 
> are sets, to be exact); the difference is just that not all sets are 
> classes. It's more like cows and Jersey cows.
>

Yes exactly... though I had understood it the other way around: that not 
all classes are sets. Sets are well-behaved things, and classes not?

>> is that in set theory (as PPS said in his recent tower of babel paper) 
>> one expects, of course, a well-defined membership function for any 
>> set.  For any object and any set, the object is or is not in the set.
>
> Right. Similarly for whatever Guha is calling classes, unless he (or 
> somebody) is going to do some remarkable new work in the foundations of 
> meaning. For example, something either is or is not a rock, say, or a 
> cow, or a herbaceous border. (Which is not to say that there isn't room 
> for disagreement in any of these cases, of course, even about what the 
> category names mean. It's just a way of saying that once you have 
> decided what the category names *do* mean, and what the facts *actually 
> are*, that you can express that mutual understanding in terms of what 
> things you count as being in what collections of things.)
>
>> This is a technique we can use so long as we restrict ourselves to 
>> talking about people.
>
> ??It has nothing to do with people. Members of sets can be absolutely 
> anything.

Sorry, when I said "people", I meant what you meant when you said 
"rock, say, or a cow, or a herbaceous border".

(But not classes)

>> We have avoided the self-reference.
>
> ??avoided what self-reference? There isn't any self-reference problem 
> here. The issue that started this thread was about a set containing 
> itself as a member, not about something referring to itself.
>

It seems there is a strong analogy between the Russell paradox for sets 
and the liar paradox with self-reference.

When you say,

"Consider the set of all sets which [are not members of themselves]"

You could be read as saying,

"Consider the set of x such that the following statement is true:
   [ x is not a member of itself]".

I put the [brackets] around the last bits; you don't need to consider 
the paradox case to see the analogy. Assuming that classes have 
well-defined membership functions is the same as assuming that each of a 
parameterized set of statements is true or false.

This is where the paradox breaks down: when you do not assume that the 
class is well defined, or that every sentence must be true or not true.
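To make the analogy vivid, here is a toy sketch (my illustration, not 
anything from the exchange; Python chosen purely for concreteness): 
defining membership in the "Russell class" by self-reference gives a 
membership test that never settles, which is exactly the failure of a 
well-defined membership function described above.

```python
# A toy sketch of Russell's paradox as non-terminating self-reference.
# The class R is "the class of things not in themselves"; asking whether
# R is in R loops forever, so membership is not well defined.
import sys

def member(x, s):
    """Naive membership test; "R" names the self-referential Russell class."""
    if s == "R":
        return not member(x, x)   # x is in R  iff  x is not in x
    return False                  # every other class here is empty

sys.setrecursionlimit(100)
try:
    member("R", "R")              # is R a member of itself?
except RecursionError:
    print("no answer: the membership function does not terminate")
```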

>> In a web-like context, self-reference comes up all the time, as a 
>> direct result of anything being able to refer to anything.
>
> Well, this is another discussion entirely, but I fail to see why 
> anything IS able to refer to anything, on the web or off it. Most 
> things do not refer at all, in fact.

Sigh. We are just not communicating here.

Any time you use a URI twice in different places, a reference can be 
thought of as being made. When you use an HTTP URI then you are using 
one of an unbounded set of URIs which are mentioned in the specification 
of HTTP, which you are required to agree to when you play on the 
semantic web. A lot of those things are documents. I know you have a 
problem with any real-world meaning being carried by the predicates; 
but that is how the layering is done. That was a different thread, 
though. If you don't accept the use of a URI as a reference to that 
which it identifies, then discussion of layering is quite out of the 
question -- we would still not have a basic philosophy of 
specifications nailed down.

>> (so for example any formula being able to cite any other formula or 
>> itself).
>
> ?? How did that happen, again? Any formula OF WHAT can cite any other?  
> And even if you are right, and all formulae can cite all other formulae 
> (I really have no idea what that means, but never mind) that only gives 
> us what might be called universal powers of citation; it doesn't 
> provide anything like universal powers of *reference*.

(If a formula is expressed in RDF/XML and on the web, then you can 
create an XPath expression for it. You can define an RDF property which 
relates an XPath expression to the formula it parses to, within the 
context of the document it came from. So you can construct a reference 
to that formula. People have done it and will do it.)
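As a hedged sketch of how such a reference could be constructed (the 
document and the property names below are invented for illustration, and 
Python's xml.etree supports only a limited XPath subset): a (document, 
path) pair picks out one statement inside an RDF/XML formula.

```python
# Sketch: an XPath-style expression selecting one statement from an
# RDF/XML document. The (document URI, path) pair then serves as a
# reference to that statement. Vocabulary here is made up.
import xml.etree.ElementTree as ET

doc = """<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
                  xmlns:ex="http://example.org/ns#">
  <rdf:Description rdf:about="http://example.org/doc">
    <ex:author>Tim</ex:author>
    <ex:topic>layering</ex:topic>
  </rdf:Description>
</rdf:RDF>"""

root = ET.fromstring(doc)
# the path, relative to the document root, is the reference
path = ("./{http://www.w3.org/1999/02/22-rdf-syntax-ns#}Description"
        "/{http://example.org/ns#}topic")
statement = root.find(path)
print(statement.text)
```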

>> We can't say that every formula is or isn't true.
>
> Yet another discussion entirely. Suppose for the moment that any 
> formula can refer to any other. It doesn't follow that formulae don't 
> have a truth-value. (If any formula can assert the truth or falsity of 
> any formula, then indeed one can reconstruct the liar paradox, by 
> writing a formula which asserts of itself that it is false. That still 
> doesn't imply that formulae don't have truth-values, though it could be 
> taken to be prima facie evidence for that conclusion. But the central 
> problem there is being able to assert that something is false, not 
> the act of reference itself.

I am happy about that.  I am happy with saying something is true.  I 
just haven't yet figured out what it means to say something is false.  
(Except for a wide class of statements which have converses, such as  
"a>b". )

> Truth-predicates are indeed dangerous; the moral is not to confuse 
> truth-values with truth-predicates.)(See PS below)
>
>> So we can't use set theory for the semantic web.
>
> And even if formulae didn't always have a truth-value - which, to 
> repeat, fails to follow from the above line of reasoning about five 
> times over -  that in turn wouldn't have any consequences for set 
> theory. It might have some influence over *which* set theory was 
> suitable, but it wouldn't rule out set theory as such. It's a bit 
> difficult to even know what that would mean: it's a bit like saying that 
> we can't use language. Set theory in one form or another is about as 
> fundamental an idea as possible; it underlies all of language and all 
> of mathematics. It's just the idea of being able to talk about 
> 'collections' in some very generic sense.
>

Ok, so you are saying one can use model theory but avoid assuming that 
for example every class has a complement.

>> [..]
>>  Nevertheless, people have tried it and guess what -- run up against 
>> paradox.
>
> The only paradoxes that have arisen so far in a web context have been a 
> direct consequence of damn silly language engineering decisions based, 
> as far as I can see, on ignorance and incompetence. Layering, for 
> example, raises no paradoxes as such. Trying to do semantic and 
> syntactic layering in the same notation does, but anyone who knows a 
> modicum of logic could have predicted this, and also what to do about 
> it.

What you are saying is clearly (if you don't mind me adopting your tone 
to set a balance) complete nonsense, and obviously so. There are so many 
examples of layering using the same syntax which are actually crucial to 
the engineering you are using to reply to this note. Take the C 
language, and the C language with run-time library: same syntax, extra 
semantics, extra power.

Have you never come across two logics with the same syntax but where one 
has a subset of the other's axioms? Take any logic, subtract one axiom, 
and you have another logic which can be regarded as a layer underneath 
it. So, by example, layering is possible, even though those who know a 
modicum of logic might have predicted that it were not.

It is, in a more general way, layering which allows IP to run over 
ethernet, TCP to run over IP, and HTTP to run over TCP. Does the 
introduction of the TCP spec make the IP spec invalid? It sure 
increases the number of inferences you can make about a packet. But it 
adds axioms; it doesn't delete them.
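The point can be put in a toy sketch (my illustration, not part of the 
email): model each layer's spec as the set of facts it licenses about a 
packet, and observe that adding a layer only ever adds inferences.

```python
# Sketch: layering as monotone axiom addition. Each spec is a set of
# facts one may infer about a packet; the TCP layer extends the IP
# layer's set without removing anything.
ip_axioms = {"has source address",
             "has destination address",
             "may be fragmented"}

tcp_axioms = ip_axioms | {"has port numbers",
                          "is delivered in order",
                          "is acknowledged"}

# every IP-level inference survives at the TCP layer
assert ip_axioms <= tcp_axioms
print(len(tcp_axioms) - len(ip_axioms), "extra inferences, none lost")
```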

So while some may try to blame the concept of layering for the presence 
of a paradox problem in OWL, I would look elsewhere. Specifically, I 
would look to the logic of the language, which seems to have acquired a 
"not" in various forms (such as cardinality and complement) in 
daml+oil. There is a built-in axiom that there exists a class which has 
itself as a member and has no members. You say this is a product of the 
syntax used. Well, what you are saying must make sense in some form.

It seems that daml+oil should be stripped of these, until they can be 
introduced in a guarded fashion.  This leaves us with a positive logic 
except in certain specific cases, where certain predicates have 
converses, and classes have complements within a specific larger set.

(I understand from hallway conversations that such forms of logic are 
rather fringe within the logic community, and may be considered too 
researchy. That would suggest one should split the simplified positive 
logic from the rest at this point, so we can get on with engineering 
things which don't need the "not".)

(It is possible that the web-like constraints are in fact new ones in 
many ways. Here we have all these machines which will go and read 
documents and absorb new knowledge, new axioms, at every stage. Maybe we 
should have said in the first place that "RDF is a framework for an 
infinite set of logics, where for every identifier which can be used as 
a predicate there exists a set of axioms, a subset of which may be 
known by a given agent, and typically a subset of which are expressed 
in RDF".) Maybe the whole chartering of the webont group was just 
ill-specified.

>  Several of us did predict it and tried to do something about it, but 
> were overruled. Do not attribute these elementary problems of bad 
> design to set theory.

So you would call it bad design, I assume.

> Webont is like a group of auto designers who have been told to design a 
> car which is both a two-stroke and a four-stroke. After shaking their 
> heads for a while they have got down to work, and have been arguing 
> about whether to put in two engines, or whether its best to put in an 
> oil filter to prevent the 2-stroke oil from clogging the fuel injection 
> system, or maybe to have two separate fuel systems. It's not easy to 
> decide what is the best or most rational way to proceed. But one 
> shouldn't conclude that there is something fundamentally wrong with 
> auto engineering; the real problem is that the people who wrote the 
> specs didn't really understand what 'stroke' meant. (Under these 
> circumstances, it is particularly irritating to be lectured by the 
> management on the need to be thinking outside the box, and why we need 
> to re-design the suspensions and do away with the steering wheel. )

It seems there must be a lot of communication problems still.

To me it looks as though the group was asked to make a car without an 
engine, and then to make an engine which would fit in the car. "Oh no!" 
you cry, "you asked us to design a car without an engine! We couldn't 
make an engine for it, or it just WOULDN'T BE A CAR WITHOUT AN ENGINE 
any more! Can't you see?!" And here now the "engineers" are coming to 
the "management" and telling them how stupid they are ...

>> Remove the assumption that every class has a complement,
>
> There is no such assumption in set theory. Complementation isn't even 
> mentioned in the ZF set theory axioms.

We have to remove it from DAML.  The PPS problem relies on (among other 
things) the assumption that a class exists for every combination of 
restrictions.

>>  leaving the fact that a B is a complement of A being a statement, not 
>> an axiom.
>
> It isn't an axiom.
>

Good. But the PPS "paradox" isn't a paradox unless each line of his 
ntriples can be derived from, or is, an axiom.

>> [...]

>> seems to need real changes. The "not" has to go. That means 
>> cardinality and complement have to go. Maybe more stuff.
>
> This is dangerous nonsense. Sorry to be so blunt, but you do not seem 
> to know what you are talking about here, nor what the consequences of 
> what you are saying really are. If cardinality has to go, then 
> arithmetic goes with it. Even on the web, it is probably going to be 
> quite useful to know that 1+1=2.

Do you think you could create a system in which 1+1 was 2, and I could 
say that your remarks were true (assuming we end up agreeing), and the 
system would not fall over from the PPS problem?

> Getting rid of complementation in a set theory is not the same as 
> getting rid of propositional negation.

The analogy is the law of the excluded middle. Classically, if "p or 
not p" is an axiom, we have trouble. This is very similar to the 
problem in which anything must either be in a class or not be in a 
class. They may be quite different, but they play the same role in the 
construction of the paradox.
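One concrete way to see what dropping that axiom looks like (a sketch of 
mine, using Kleene's strong three-valued logic as a stand-in for "no 
excluded middle"): a sentence may be True, False, or Undefined, and "p 
or not p" stops being a tautology.

```python
# Sketch: Kleene's strong three-valued connectives. With a third value
# U (undefined), "p or not p" evaluates to U when p is U, so the
# excluded middle fails without negation itself disappearing.
T, F, U = "T", "F", "U"

def k_not(p):
    return {T: F, F: T, U: U}[p]

def k_or(p, q):
    if T in (p, q):
        return T
    if U in (p, q):
        return U
    return F

for p in (T, F, U):
    print(p, "or not", p, "=", k_or(p, k_not(p)))
# the U row is where the excluded middle fails
```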

>  Consider the assertion that I am not a dog. All that says is that it 
> is false to say that I am a dog. The corresponding complementation 
> asserts that a class of all non-dogs exists, and that I am a member of 
> it. The difference between them is that assertion of existence of the 
> class of non-dogs. That is indeed a very odd claim to make: what are 
> the boundaries of non-doggishness? Are black holes non-doggish? Are 
> unicorns non-doggish? The only way to tell seems to be to ask the 
> equivalent questions about being a dog, and then use classical negation 
> to swap the answers. But the bare use of classical (or any other) 
> negation makes no claims about the existence of anything, class or 
> otherwise: it just says that some proposition isn't the case.
>

Isn't the class of non-dogs defined as those x for which "x is a dog" 
is not true? Classically, it must be either true or not true for any x. 
The class has a complement. A black hole is not a dog. A unicorn is not 
a dog. I don't know that "dog" was a good example.

> In any case, languages without classical negation have been 
> investigated thoroughly, and they have no useful properties in this 
> context. They do not avoid the old paradoxes (if you are still worried 
> about them; I would suggest it would be more useful to just stop 
> worrying (**)) and they do not provide any useful increase in 
> expressiveness.
>
> There might be a case for actually *increasing* the expressive power of 
> the logic by introducing a modal necessity operator. That would then 
> enable you to take advantage of a well-known mapping of intuitionistic 
> negation into the modal logic S4, in effect by treating not-P as 
> meaning necessarily-not-P. Intuitionistic negation indeed does not 
> satisfy the excluded middle axiom, ie there are P's for which (P or 
> (I-not-P)) does not hold.

So I won't understand how to build this thing without studying modal 
logic, eh?

>> One can, of course, build up set theory in restricted areas.  For a 
>> huge number of situations, there are limitations which allow it to 
>> work.  This I think of as being another form of closed world 
>> assumption.
>
> I think what you mean is that there are cases where one can identify 
> classical and intuitionist negation. Basically, intuitionist negation 
> is rather like taking not-P to mean not that P is false, but that it 
> can be positively *shown* to be false, using the same techniques that 
> are used to show that something is true. (Those techniques have to not 
> themselves use the law of excluded middle, of course; this can all be 
> axiomatized in various ways.)  The problem that I have with any such 
> suggestion, however, is that it seems quite reasonable to want to be 
> able to state simple negative facts, such as that I have no money, or 
> that my father's name was not Petrovich.

What do you mean, in the web context, by the fact that his name is not 
Petrovich? Your father, to whom I refer here as Petrovich, may have 
many names. We can carefully define your meaning of "name" here so that 
your point stands. We can talk about a specific name given to Petrovich 
when his birth was registered. In fact, we mean that the registration 
document does not say that his name was "Petrovich". Now that is a fact 
we can get from parsing it. That predicate, the inclusion of a phrase 
in a birth registration, has a converse. Like many things which you 
want to be able to talk about -- that you didn't buy an orange 
yesterday, and so on. A whole world in which classical logic works 
well. But it does exclude formulae about formulae. So we should make 
the use of this logic very natural for these cases, but not be able to 
use it on abstract things which will get us into trouble.

>  In making such negative assertions, I am not claiming to be able to 
> *prove* them mathematically: I'm just making a claim about simple 
> facts, about the way the world is.

(If I said that to you I would not get away with it!  That's all RDF 
is ... simple claims about how the world is ;-) )

Your simple facts about "the way the world is" work classically because 
they boil down to physical measurement of a thing, person, and so on. 
These measurement predicates have converses. The moment you start 
getting abstract it fails to be so obvious. "I am a human" is easy. "I 
am an optimist" is getting outside the range. But as a lot of this 
stuff will be simply data about oranges (people, rocks, etc) the 
predicates will work with classical axioms. The metadata about the 
terms "oranges" and so on will not, because it is talking at an 
abstract level. So schemas will have to do without classical logic to 
avoid PPS paradoxes. But that is OK - you don't need to ask whether 
"orange" has any money.

>  That is classical negation, pure and simple. What is wrong with being 
> able to do that? A lot of what we know about the world is based on such 
> negative knowledge, and it is damn useful stuff to have and to be 
> able to use. In particular, it is very useful to be able to conclude Q 
> from (P or Q) and not-P, a point made by many people from Aristotle to 
> Sherlock Holmes.

Aye, and puzzled as many people with the liar paradox.

If I had started so naively you would have shot me down and told me 
things were not so simple, my lad, or words to that effect.


>>  Suppose we subclass classes to classes of individuals which cannot be 
>> classes or properties.
>
> That would be what are known as ur-elements, or individuals in a 
> traditional first-order logic. OK, suppose we do....
>
>> and we subclass properties to those which are not properties of 
>> properties.
>
> Traditional first-order properties. OK, suppose we do. We are now in a 
> traditional first-order stratified logic. Many people would feel more 
> comfortable talking only in this way, but there is no real advantage, 
> since the more liberal framework can be mapped into this one.
>
>> Then, while we can't write the axioms for DAML,
>
> Well, actually, you can. DAML *is* stratified in this way. What you 
> can't write is the axioms for RDF(S).
>
>> we can write a lot about people and telephone numbers and orders of 
>> mild steel.  So within those restricted environments, the folks with 
>> set theory like first order systems can go use them.  These are the 
>> stratified systems, and their kin.
>
> Right (although they aren't really restricted 'environments', more 
> restricted modes of expression) ...

Different ways of looking at it.

>> But the general semantic web logic has to be more general,
>
> .....can you say in what way? If we are more generous and allow classes 
> to contain classes and properties, and properties to apply to 
> properties, what then? That is the state that RDF(S) is in right now, 
> with a semantics (model theory) modelled on the Common Logic (nee KIF) 
> model theory.....
>

Which is broken, according to Peter.

>> and so cannot have 'not'.
>
> Sure we can. CL has 'not', and also has about as unconstrained a syntax 
> as you could possibly want. You can apply anything to anything, as many 
> times as you want. Classes (unary relations in CL) can contain (apply 
> to) themselves, etc. etc. . It has classical negation, and full 
> quantifiers, and even quantification over sequences, can describe its 
> own syntax, etc.; and still it is a classical logic, and still it is 
> consistent and paradox-free. This is OLD NEWS. Wake up and smell the 
> coffee.
>
>> DanC and I have been getting on quite well in practice using 
>> log:notIncludes, something which checks whether a given formula 
>> contains a given statement (or formula).  It is a form of not which is 
>> very clean, formulae being finite things.
>
> Sure, weak negations have their uses, as do strong negations. However, 
> that doesn't enable me to just say simple negative facts like "I don't 
> have any money" and have people draw the right conclusions.
>
> [..]
>> In fact there are a lot of things (like log:includes and 
>> log:notIncludes) which are converses. The arithmetic operators like 
>> greaterThan, for example.  Things typically defined with domains and 
>> ranges which have some sort of finiteness.  So to a certain extent one 
>> can do notty things with such statements.
>
> Right, as long as you can assume that things are finite, you can do a 
> *lot* of things that you can't do without that assumption. That's the 
> recursion theorem, in a nutshell. However it seems to me that the Web 
> is one place where that kind of assumption - what might be called a 
> recursively-closed-world assumption  - *cannot* be expected to hold, in 
> general. Of course, when it can, then we ought to be able to cash in on 
> it, as it were. But we can't expect to build this into the basic SW 
> architecture.

I'm not talking about making the whole world classical, just making 
statements in a given schema for a given ontology which allow classical 
axioms to apply to them.

Just because the web is large and web-like, that doesn't mean that a 
document on it, when it wants to say that you don't have money, isn't 
using restricted expressive power (if you like), talking within a range 
of concrete things like you and your pockets and coins, where you can 
use the law of the excluded middle.

You either have money or you don't. Why? Because there is an algorithm 
for determining the question which terminates with a binary answer. Of 
course, if I wanted to be sneaky, I could give you ten dollars and say 
that you should consider it yours if and only if you have no money. Do 
you have money or not? (Assuming you started with none.) Oops. We 
introduced a recursive function of ownership which complicated the 
algorithm and gave us a paradox again. And just when it seemed we were 
talking about
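The two cases can be sketched in a few lines (my toy model, not anything 
from the email): a direct count of coins terminates with a binary 
answer, while the self-referential gift never settles.

```python
# Sketch: a terminating algorithm for "do you have money?" versus the
# sneaky gift "yours iff you have no money", which is self-referential
# and so never reaches an answer.
def has_money(pockets):
    return sum(pockets) > 0                      # terminates: count coins

def has_money_after_gift(pockets):
    # the ten dollars is yours iff you have no money, counting the gift:
    # the definition refers to itself with no base case
    yours = not has_money_after_gift(pockets)
    return sum(pockets) > 0 or yours

print(has_money([]))                             # a definite answer
try:
    has_money_after_gift([])
except RecursionError:
    print("the gift question has no definite answer")
```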

[...]
>>
>> Tim
>>
>> PS: Union being something only with sets makes sense, as union axioms 
>> have to have a not in them.
>
> ?? Can you run that past us again? Why does union (= logical 'or') 
> necessarily involve negation?
>
> Pat
>
> (**) PS . Here's why you should stop worrying about the classical 
> Russell-type paradoxes. Briefly, they don't arise on the Web, if we 
> interpret things properly.
>
> Suppose for the moment that you are right about universal 'citation', 
> in the sense that any document can point to any other document and 
> endorse or deny it. Then there is no way to prevent the following 
> situation arising: two documents A and B may exist, where A points to B 
> and endorses it, and B points to A and denies it. (Longer such chains 
> can be constructed, obviously, but they all boil down to this.) Agreed 
> that this can happen: but is this a paradoxical situation? The answer 
> to that depends on how you interpret the endorsing and denying.
>
> If we say that this mutual endorsement and denial is done essentially 
> in a metatheory, by referring to the other document and asserting of it 
> that it is true or false, then indeed this situation amounts to a 
> reconstruction of the liar paradox, and that is genuinely paradoxical 
> if we interpret those assertions of truth and falsity in their usual 
> sense, which is hard to avoid if we are indeed claiming to be using the 
> truth predicate, which is kind of required, by definition, to be 
> tightly connected to the notion of truth that is used in the metatheory 
> of the language itself.

You assume a truth predicate with a law of the excluded middle, and with 
no truth predicate you assume no such law.

I would just allow you to use a truth predicate where true(p) <-> p. 
You can do that much without problem. It is when you require that p 
must be true or false that you get a problem. It is not the predicate 
itself - it's what you assume must come with it.
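A minimal sketch of such a guarded, disquotational predicate (my gloss, 
with an invented valuation table): a sentence may simply lack a value, 
and then true(p) lacks one too, with no paradox.

```python
# Sketch: true(p) <-> p as pure disquotation over a partial valuation.
# Nothing forces every sentence to be true or false; an unvalued
# sentence just returns None ("no truth value assigned").
valuation = {"snow is white": True,
             "grass is purple": False}
# "this sentence is false" deliberately gets no entry at all

def true(p):
    return valuation.get(p)       # None means: no value, not a paradox

print(true("snow is white"))
print(true("this sentence is false"))
```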

> However, that is not the usual way of interpreting endorsement or 
> denial, nor the most convenient or natural way. Suppose we use a more 
> natural reading, in which to endorse something is to simply assert it, 
> and to deny it is to endorse its (classical) negation.  One way to 
> think of this is that A's pointing to B amounts to A's 'importing' the 
> content of B into itself, and B's denial of A is an importing of not-A 
> into itself, ie the content of A, but inside a negation. (Another way 
> is to think of it the way that Donald Davidson in his essay "On Saying 
> That" suggested we should think of indirect reference, as a kind of 
> demonstrative, where A says, pointing to B, "I agree with that" and B 
> says, pointing to A, "I deny that". ) Then the situation described is 
> one where the content of A includes B and the content of B includes 
> not-A, ie the content of them taken together can be summed up by the 
> propositional expression  ( (A => B) and (B => notA) ).  Notice that 
> there are no truth-predicates involved in this, and no expressions are 
> mentioned in a metatheory: they are simply used with their ordinary 
> meaning, using the ordinary assumptions of the language.  This is now 
> logically equivalent to A asserting P and B asserting not-P for some P; 
> they simply disagree, is all; so that taken together, the two 
> assertions amount to a contradiction. This is a sign of disharmony - 
> they can't both be right - but it is not even remotely paradoxical. The 
> only way it differs from a simple P-vs-not-P contradiction is that it 
> takes one or two extra inference steps to uncover. (Actually, depending 
> on what else is or isn't in the documents, you could come to the 
> conclusion that it is A that is making the contradictory claim here, 
> and B is just agreeing. But in any case, there is clearly nothing 
> paradoxical involved; and there is no way to prevent people from 
> publishing contradictions in any case, if they can use negation.)
>
> The paradox is averted precisely by avoiding the apparently innocuous, 
> but in fact very dangerous, step of using a truth predicate to make a 
> simple assertion. Asserting P is not the same as asserting true('P'), 
> rather in the same way that a popgun isn't the same as an Uzi. But there 
> is no need to make this move into the meta-language in order to simply 
> assent or deny something that is already in a form which admits 
> assent or denial; and no point in doing so, since one has to 
> immediately get back from the meta-level once again in any case. If you 
> avoid this dangerous and unnecessary maneuver, there is no need to be 
> worried about the liar paradox and no need to feel uncomfortable with 
> classical negation.

You can still say that A says false(B) and B says true(A) and consider 
that we just have disharmony. It wasn't the truth predicate, it was the 
law of the excluded middle which you slipped in, surely? You can take 
Peter's paradoxical 7 statements in RDF and say they are not 
consistent. There is no paradox. That is what I originally thought when 
I looked at it. It is only when you indicate that there is some axiom 
set in OWL which allows you to generate them from nothing that you have 
a problem.
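Pat's propositional reading of the two-document case can be checked 
mechanically (a quick sketch of mine): (A => B) and (B => not A) is 
satisfiable, merely forcing A to be false, so it is disagreement rather 
than paradox.

```python
# Sketch: brute-force truth table for (A => B) and (B => not A).
# The formula has models, so the mutual endorsement/denial situation
# is a mere contradiction between parties, not a liar-style paradox.
from itertools import product

def implies(p, q):
    return (not p) or q

models = [(a, b) for a, b in product([True, False], repeat=2)
          if implies(a, b) and implies(b, not a)]
print(models)   # every surviving model has A = False
```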

(I agree that using two documents didn't change the problem in that way. 
Using two documents prevents people from going down the path of first 
preventing self-reference, and then imposing some stratification regime 
to prevent the loop, which regime then shows up as impossible to impose 
on the web. When you look at two documents, it is obviously not web-like 
to try to stratify them to prevent the situation being cyclic.)

tim
Received on Thursday, 27 June 2002 17:15:30 GMT
