- From: pat hayes <phayes@mail.coginst.uwf.edu>
- Date: Tue, 25 Jun 2002 01:20:51 -0500
- To: Tim Berners-Lee <timbl@w3.org>
- Cc: Dan Connolly <connolly@w3.org>, www-rdf-comments@w3.org, "R.V. Guha" <guha@guha.com>
>Guha,
>
>Responding completely intuitively without a set of proofs to match,
>
>It seems that the fundamental difference between classes and sets
Please, there is no fundamental difference between classes and sets.
This is why I reacted to Guha's message. This isn't the right way to
talk (or think), because it doesn't make sense. Classes and sets
aren't like cows and sheep. Classes *are* sets (well, class
extensions are sets, to be exact); the difference is just that not
all sets are classes. It's more like cows and Jersey cows.
>is that in set theory (as PPS said in his recent tower of babel
>paper) one expects, of course, a well-defined membership function
>for any set. For any object and any set, the object is or is not in
>the set.
Right. Similarly for whatever Guha is calling classes, unless he (or
somebody) is going to do some remarkable new work in the foundations
of meaning. For example, something either is or is not a rock, say,
or a cow, or a herbaceous border. (Which is not to say that there
isn't room for disagreement in any of these cases, of course, even
about what the category names mean. It's just a way of saying that,
once you have decided what the category names *do* mean and what the
facts *actually are*, you can express that mutual understanding
in terms of what things you count as being in what collections of
things.)
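To make the point concrete, here is a minimal sketch (my own toy illustration, not anything from the thread) of treating a class as its extension, i.e. as a plain set, with membership a definite yes/no question:

    # Toy illustration: a class identified with its extension, which is a set.
    cows = frozenset({"Daisy", "Buttercup"})        # extension of the class Cow
    jersey_cows = frozenset({"Buttercup"})          # a subclass: still just a set

    def is_instance_of(thing, class_extension):
        # Membership is always a definite yes or no.
        return thing in class_extension

    assert is_instance_of("Buttercup", jersey_cows)
    assert not is_instance_of("Daisy", jersey_cows)
    assert jersey_cows <= cows                      # every Jersey cow is a cow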
>This is a technique we can use so long as we restrict ourselves to
>talking about people.
??It has nothing to do with people. Members of sets can be absolutely anything.
>We have avoided the self-reference.
??avoided what self-reference? There isn't any self-reference problem
here. The issue that started this thread was about a set containing
itself as a member, not about something referring to itself.
>In a web-like context, self-reference comes up all the time, as a
>direct result of anything being able to refer to anything.
Well, this is another discussion entirely, but I fail to see why
anything IS able to refer to anything, on the web or off it. Most
things do not refer at all, in fact.
>(so for example any formula being able to cite any other formula or itself).
?? How did that happen, again? Any formula OF WHAT can cite any
other? And even if you are right, and all formulae can cite all
other formulae (I really have no idea what that means, but never
mind) that only gives us what might be called universal powers of
citation; it doesn't provide anything like universal powers of
*reference*.
>We can't say that every formula is or isn't true.
Yet another discussion entirely. Suppose for the moment that any
formula can refer to any other. It doesn't follow that formulae don't
have a truth-value. (If any formula can assert the truth or falsity
of any formula, then indeed one can reconstruct the liar paradox, by
writing a formula which asserts of itself that it is false. That
still doesn't imply that formulae don't have truth-values, though it
could be taken to be prima facie evidence for that conclusion. But
the central problem there is being able to assert that something
is false, not the act of reference itself. Truth-predicates are
indeed dangerous; the moral is not to confuse truth-values with
truth-predicates.)(See PS below)
>So we can't use set theory for the semantic web.
And even if formulae didn't always have a truth-value - which, to
repeat, fails to follow from the above line of reasoning about five
times over - that in turn wouldn't have any consequences for set
theory. It might have some influence over *which* set theory was
suitable, but it wouldn't rule out set theory as such. It's a bit
difficult even to know what that would mean: it's a bit like saying
that we can't use language. Set theory in one form or another is
about as fundamental an idea as possible; it underlies all of
language and all of mathematics. It's just the idea of being able to
talk about 'collections' in some very generic sense.
> That is a shame, as model-theoretical analysis, which some people
>feel defines the meaning of languages, rather assumes set theory
>from the get-go.
All of mathematics (and therefore virtually all the exact sciences,
even the social sciences that use statistical methods) and virtually
all of linguistic analysis rather assumes it from the get-go. The
only exceptions I can think of are the ideas of a Polish school of
philosophers from the early part of the 20th century who developed an
alternative called mereology, and the idea of using category theory
rather than set theory as a foundation for mathematics. Both
interesting ideas, but I don't think they will be much more use to
you. Mereology has its own technical weirdnesses, and category theory
is way too mathematical for anything but mathematics.
It really is rather a crazy row to hoe, to try to claim that the web
must abandon set theory as an analytic tool. It's a bit like claiming
that science needs to abandon the idea of energy, or that medicine
should stop worrying about all this chemistry nonsense and return to
the clarity of Paracelsus, or that we should try to communicate
without making marks on anything.
> Nevertheless, people have tried it and guess what -- run up against paradox.
The only paradoxes that have arisen so far in a web context have been
a direct consequence of damn silly language engineering decisions
based, as far as I can see, on ignorance and incompetence. Layering,
for example, raises no paradoxes as such. Trying to do semantic and
syntactic layering in the same notation does, but anyone who knows a
modicum of logic could have predicted this, and also what to do about
it. Several of us did predict it and tried to do something about it,
but were overruled. Do not attribute these elementary problems of bad
design to set theory.
Webont is like a group of auto designers who have been told to design
a car which is both a two-stroke and a four-stroke. After shaking
their heads for a while they have got down to work, and have been
arguing about whether to put in two engines, or whether it's best to
put in an oil filter to prevent the 2-stroke oil from clogging the
fuel injection system, or maybe to have two separate fuel systems.
It's not easy to decide what is the best or most rational way to
proceed. But one shouldn't conclude that there is something
fundamentally wrong with auto engineering; the real problem is that
the people who wrote the specs didn't really understand what 'stroke'
meant. (Under these circumstances, it is particularly irritating to
be lectured by the management on the need to think outside the
box, and on why we need to redesign the suspension and do away with
the steering wheel.)
>But when you think about it, its obvious.
>
>The solution then is to remove the assumptions of set theory.
No, the solution is to attack these issues with a minimal degree of
professional competence. (By the way, which assumptions of set theory
would you suggest removing? The self-application stuff in RDF(S) that
started this thread arises when we throw out the axiom of foundation.
I have no trouble tossing out the axiom of choice, myself. Any other
candidates? For a brief summary of the axioms of ZF, see
http://www.cs.bilkent.edu.tr/~akman/jour-papers/air/node5.html.)
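For what it is worth, here is a small sketch (mine, with made-up URIs) of what dropping foundation amounts to in practice: membership becomes a directed graph that may contain cycles, and a class containing itself is just such a cycle, with membership still a perfectly definite question:

    # Membership modelled as a finite directed graph of (member, class) pairs.
    membership = {
        ("ex:DanC", "ex:Person"),
        ("ex:Person", "rdfs:Class"),
        ("rdfs:Class", "rdfs:Class"),   # self-membership: a cycle, not a paradox
    }

    def is_member(x, c):
        # Still a plain yes/no question, with or without the axiom of foundation.
        return (x, c) in membership

    assert is_member("rdfs:Class", "rdfs:Class")
    assert not is_member("ex:DanC", "rdfs:Class")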
>Remove the assumption that every class has a complement,
There is no such assumption in set theory. Complementation isn't even
mentioned in the ZF set theory axioms.
> leaving the fact that B is a complement of A being a statement,
>not an axiom.
It isn't an axiom.
>This is equivalent in axioms to *not* using classical first order
>logic with a classical NOT.
No, it isn't equivalent to that.
It might be equivalent, roughly, to a kind of negation comprehension
principle, ie that the negation of a property is itself a property.
But in an assertional logic, saying (not (P a)) is not saying that a
property of (not-P)-ness is true of a; it is saying that the property
of P-ness isn't true of a.
>Which is why I like the axiom style of what you are doing, Guha but
>i worry about the classical nature of it.
>
>The solution to the webont mess
There is no webont 'mess' other than that which has been produced by
Webont being required, by W3C charter, to attempt to conform to bad
design decisions that were imposed upon it by fiat, apparently by
people who did not understand the technical issues. The problems are
purely political, not technical. We already have three distinct
technical solutions which could be used, in any case, if we could
only get the committee to agree on one of them. There is nothing
intrinsically difficult or even challenging about designing a
language like OWL successfully, or about specifying a clear
relationship between OWL and RDF(S). The 'mess' arises from a naive
view of the universality of the RDF notation, apparently arising from
a confusion between semantic expressiveness and Turing computability.
>seems to need real changes. The "not" has to go. That means
>cardinality and complement have to go. Maybe more stuff.
This is dangerous nonsense. Sorry to be so blunt, but you do not seem
to know what you are talking about here, nor what the consequences of
what you are saying really are. If cardinality has to go, then
arithmetic goes with it. Even on the web, it is probably going to be
quite useful to know that 1+1=2.
Getting rid of complementation in a set theory is not the same as
getting rid of propositional negation. Consider the assertion that I
am not a dog. All that says is that it is false to say that I am a
dog. The corresponding complementation asserts that a class of all
non-dogs exists, and that I am a member of it. The difference between
them is that assertion of existence of the class of non-dogs. That is
indeed a very odd claim to make: what are the boundaries of
non-doggishness? Are black holes non-doggish? Are unicorns
non-doggish? The only way to tell seems to be to ask the equivalent
questions about being a dog, and then use classical negation to swap
the answers. But the bare use of classical (or any other) negation
makes no claims about the existence of anything, class or otherwise:
it just says that some proposition isn't the case.
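A toy contrast between the two (mine, with an invented universe): the negative assertion needs only the predicate, while the complement class only makes sense relative to some agreed universe of discourse:

    def is_dog(x):
        return x in {"Fido", "Rex"}

    # Propositional negation: "Pat is not a dog" asserts nothing about any class.
    assert not is_dog("Pat")

    # Complementation: a class of *all* non-dogs needs a universe to carve it from.
    universe = {"Fido", "Rex", "Pat", "a black hole", "a unicorn"}
    non_dogs = {x for x in universe if not is_dog(x)}
    assert "Pat" in non_dogs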
In any case, languages without classical negation have been
investigated thoroughly, and they have no useful properties in this
context. They do not avoid the old paradoxes (if you are still
worried about them; I would suggest it would be more useful to just
stop worrying (**)) and they do not provide any useful increase
in expressiveness.
There might be a case for actually *increasing* the expressive power
of the logic by introducing a modal necessity operator. That would
then enable you to take advantage of a well-known mapping of
intuitionistic negation into the modal logic S4, in effect by
treating not-P as meaning necessarily-not-P. Intuitionistic negation
indeed does not satisfy the excluded middle axiom, ie there are P's
for which (P or (I-not-P)) does not hold.
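To illustrate the difference (my own example, in Lean, not anything from the email): constructively one cannot in general assert (P or not-P), but its double negation is still provable, which is one way to see that intuitionistic negation is strictly weaker than classical negation:

    -- Provable without any classical axioms: the double negation of excluded middle.
    theorem not_not_em (P : Prop) : ¬¬(P ∨ ¬P) :=
      fun h => h (Or.inr (fun p => h (Or.inl p)))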
>One can, of course, build up set theory in restricted areas. For a
>huge number of situations, there are limitations which allow it to
>work. This I think of as being another form of closed world
>assumption.
I think what you mean is that there are cases where one can identify
classical and intuitionist negation. Basically, intuitionist negation
is rather like taking not-P to mean not that P is false, but that it
can be positively *shown* to be false, using the same techniques that
are used to show that something is true. (Those techniques have to
not themselves use the law of excluded middle, of course; this can
all be axiomatized in various ways.) The problem that I have with
any such suggestion, however, is that it seems quite reasonable to
want to be able to state simple negative facts, such as that I have
no money, or that my father's name was not Petrovich. In making such
negative assertions, I am not claiming to be able to *prove* them
mathematically: I'm just making a claim about simple facts, about the
way the world is. That is classical negation, pure and simple. What
is wrong with being able to do that? A lot of what we know about the
world is based on such negative knowledge, and it is damn useful
stuff to have and to be able to use. In particular, it is very useful
to be able to conclude Q from (P or Q) and not-P, a point made by
many people from Aristotle to Sherlock Holmes.
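As a trivial sanity check (nothing more than a truth table, and my own toy code), the disjunctive syllogism appealed to here really is classically valid:

    from itertools import product

    # From (P or Q) and (not P), conclude Q -- checked over all truth values.
    for P, Q in product([True, False], repeat=2):
        if (P or Q) and not P:
            assert Q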
> Suppose we subclass classes to classes of individuals which cannot
>be classes or properties.
That would be what are known as ur-elements, or individuals in a
traditional first-order logic. OK, suppose we do....
>and we subclass properties to those which are not properties of properties.
Traditional first-order properties. OK, suppose we do. We are now in
a traditional first-order stratified logic. Many people would feel
more comfortable talking only in this way, but there is no real
advantage, since the more liberal framework can be mapped into this
one.
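One standard way to carry out that mapping (a sketch of the usual 'holds'-style translation, my own toy rendering rather than any actual LBase or DAML axioms) is to demote properties to ordinary individuals and route all application through a single fixed relation, which keeps the logic stratified while still letting properties apply to properties:

    # The only genuine predicate is 'holds'; properties are just individuals,
    # so the logic stays first-order even when "rdf:type" is applied to a class.
    holds = {
        ("rdf:type", "ex:DanC", "ex:Person"),
        ("rdf:type", "ex:Person", "rdfs:Class"),
        ("rdf:type", "rdfs:Class", "rdfs:Class"),
    }

    def applies(prop, subj, obj):
        return (prop, subj, obj) in holds

    assert applies("rdf:type", "rdfs:Class", "rdfs:Class")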
>Then, while we can't write the axioms for DAML,
Well, actually, you can. DAML *is* stratified in this way. What you
can't write is the axioms for RDF(S).
>we can write a lot about people and telephone numbers and orders of
>mild steel. So within those restricted environments, the folks with
>set theory like first order systems can go use them. These are the
>stratified systems, and their kin.
Right (although they aren't really restricted 'environments', more
restricted modes of expression) ...
>But the general semantic web logic has to be more general,
.....can you say in what way? If we are more generous and allow
classes to contain classes and properties, and properties to apply to
properties, what then? That is the state that RDF(S) is in right now,
with a semantics (model theory) modelled on the Common Logic (nee
KIF) model theory.....
>and so cannot have 'not'.
Sure we can. CL has 'not', and also has about as unconstrained a
syntax as you could possibly want. You can apply anything to
anything, as many times as you want. Classes (unary relations in CL)
can contain (apply to) themselves, etc., etc. It has classical
negation, and full quantifiers, and even quantification over
sequences, can describe its own syntax, etc.; and still it is a
classical logic, and still it is consistent and paradox-free. This is
OLD NEWS. Wake up and smell the coffee.
>DanC and i have been getting on quite well in practice using
>log:notIncludes, something which checks whether a given formula
>contains a given statement (or formula). It is a form of not which
>is very clean, formulae being finite things.
Sure, weak negations have their uses, as do strong negations.
However, that doesn't enable me to just say simple negative facts
like "I don't have any money" and have people draw the right
conclusions.
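For concreteness, a rough sketch of the idea behind log:notIncludes (my own toy code and toy triples, not cwm's actual implementation): a formula is a finite collection of statements, so "does not contain this statement" is a purely syntactic, decidable check, but it asserts nothing negative about the world:

    # A formula as a finite set of triples.
    formula = {
        ("ex:Pat", "ex:worksAt", "ex:IHMC"),
        ("ex:Pat", "ex:livesIn", "ex:Pensacola"),
    }

    def not_includes(f, statement):
        # Syntactic check over a finite object: a clean, decidable kind of "not".
        return statement not in f

    # The formula doesn't *say* Pat has money...
    assert not_includes(formula, ("ex:Pat", "ex:has", "ex:Money"))
    # ...but that is not the same as asserting "Pat has no money" about the world.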
> Doesn't lead to the Peter Problem.
The Peter Problem is actually the RDF problem. There isn't any real
problem there, only a technical snag which arises when you try to do
layering wrong. Moral: do it right. There are several ways to do it
right, but we can't do them all at the same time.
>In fact there are a lot of things (like log:includes and
>log:notIncludes) which are converses. The arithmetic operators like
>greaterThan, for example. Things typically defined with domains and
>ranges which have some sort of finiteness. So to a certain extent
>one can do notty things with such statements.
Right, as long as you can assume that things are finite, you can do a
*lot* of things that you can't do without that assumption. That's the
recursion theorem, in a nutshell. However it seems to me that the Web
is one place where that kind of assumption - what might be called a
recursively-closed-world assumption - *cannot* be expected to hold,
in general. Of course, when it can, then we ought to be able to cash
in on it, as it were. But we can't expect to build this into the
basic SW architecture.
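A toy example of cashing in (assumptions entirely mine): once a domain is declared finite and complete, "notty" questions can be settled by simple enumeration, which is exactly the closed-world move that cannot be assumed of the Web at large:

    # Sound only under the assumption that this table is complete (closed world).
    phone_numbers = {"alice": "555-0101", "bob": "555-0102"}

    def has_no_phone_number(person):
        return person not in phone_numbers

    assert has_no_phone_number("carol")   # fine locally; unsafe as a Web-wide inference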
> So there seems to be something to be said for declaring these
>things in schemas. So the question you ask about whether to talk of
>the class of people or the set of people
>might be useful to be answered, where it can be, by using the set.
>This allows more axioms to apply, allows different forms of
>inference.
>
>This is the way I feel it is going, but i don't have the experience
>to write you down the axioms.
>
>Mind you I might be able to guess which ones to remove from lbase.
lbase itself has no axioms. They arise only when you map other
languages into lbase.
>
>Tim
>
>PS: Union being something only with sets makes sense, as union
>axioms have to have a not in them.
?? Can you run that past us again? Why does union (= logical 'or')
necessarily involve negation?
Pat
(**) PS. Here's why you should stop worrying about the classical
Russell-type paradoxes. Briefly, they don't arise on the Web, if we
interpret things properly.
Suppose for the moment that you are right about universal 'citation',
in the sense that any document can point to any other document and
endorse or deny it. Then there is no way to prevent the following
situation arising: two documents A and B may exist, where A points to
B and endorses it, and B points to A and denies it. (Longer such
chains can be constructed, obviously, but they all boil down to
this.) Agreed that this can happen: but is this a paradoxical
situation? The answer to that depends on how you interpret the
endorsing and denying.
If we say that this mutual endorsement and denial is done essentially
in a metatheory, by referring to the other document and asserting of
it that it is true or false, then indeed this situation amounts to a
reconstruction of the liar paradox, and that is genuinely paradoxical
if we interpret those assertions of truth and falsity in their usual
sense, which is hard to avoid if we are indeed claiming to be using
the truth predicate, which is kind of required, by definition, to be
tightly connected to the notion of truth that is used in the
metatheory of the language itself.
However, that is not the usual way of interpreting endorsement or
denial, nor the most convenient or natural way. Suppose we use a more
natural reading, in which to endorse something is to simply assert
it, and to deny it is to endorse its (classical) negation. One way
to think of this is that A's pointing to B amounts to A's 'importing'
the content of B into itself, and B's denial of A is an importing of
not-A into itself, ie the content of A, but inside a negation.
(Another way is to think of it the way that Donald Davidson in his
essay "On Saying That" suggested we should think of indirect
reference, as a kind of demonstrative, where A says, pointing to B,
"I agree with that" and B says, pointing to A, "I deny that". ) Then
the situation described is one where the content of A includes B and
the content of B includes not-A, ie the content of them taken
together can be summed up by the propositional expression ( (A => B)
and (B => notA) ). Notice that there are no truth-predicates
involved in this, and no expressions are mentioned in a metatheory:
they are simply used with their ordinary meaning, using the ordinary
assumptions of the language. This is now logically equivalent to A
asserting P and B asserting not-P for some P; they simply disagree,
is all; so that taken together, the two assertions amount to a
contradiction. This is a sign of disharmony - they can't both be
right - but it is not even remotely paradoxical. The only way it
differs from a simple P-vs-not-P contradiction is that it takes one
or two extra inference steps to uncover. (Actually, depending on what
else is or isn't in the documents, you could come to the conclusion
that it is A that is making the contradictory claim here, and B is
just agreeing. But in any case, there is clearly nothing paradoxical
involved; and there is no way to prevent people from publishing
contradictions in any case, if they can use negation.)
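To spell out the propositional reading above (a small check of my own): once both documents are asserted and the import relations are added, no assignment of truth values works, i.e. the situation is garden-variety inconsistency, detectable by a truth table, with no truth predicate anywhere in sight:

    from itertools import product

    def implies(p, q):
        return (not p) or q

    # A endorses (imports) B; B denies (imports the negation of) A.
    # Asserting both documents leaves no satisfying truth-value assignment:
    # an ordinary contradiction, not a paradox.
    assert not any(a and b and implies(a, b) and implies(b, not a)
                   for a, b in product([True, False], repeat=2))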
The paradox is averted precisely by avoiding the apparently
innocuous, but in fact very dangerous, step of using a truth
predicate to make a simple assertion. Asserting P is not the same as
asserting true('P'), rather in the same way that a popgun isn't the
same as an Uzi. But there is no need to make this move into the
meta-language in order to simply assent or deny something that is
already in a form which admits assent or denial; and no point in
doing so, since one has to immediately get back from the meta-level
once again in any case. If you avoid this dangerous and unnecessary
maneuver, there is no need to be worried about the liar paradox and
no need to feel uncomfortable with classical negation.
>
>On Friday, June 21, 2002, at 02:31 PM, R.V.Guha wrote:
>
>>Dan,
>>
>> There is a rather fundamental difference in the intended meaning
>>of rdfs:Class vs Sets. One comes from cognitive science and the
>>other from Math. rdfs:Class is intended to capture the concept of
>>"category" or "kind" as that term is used in cog-sci and not the
>>concept of Set. In Cyc, for example, we had different nodes
>>corresponding to Class and Set. (Quine wrote a very nice essay on
>>this topic [1]).
>>
>>Here are some examples that illustrate the difference. (V is the
>>union operator).
>>a) Both perspectives would include the concepts of Person and
>>Table. The rdfs:Class perspective would not include (Person V
>>Table). The Set perspective would.
>>b) DanC is an instanceOf Person. In the set perspective, he would
>>also be an instanceof (Person V SquareTriangle) and (Person V
>>SuperNovasOnEarth) ...
>>
>>Saying that rdfs:Class is the rdfs:Class of all rdfs:Classes does
>>not cause problems because we do not and cannot have a theory of
>>rdfs:Classes such as ZF set theory.
>>
>>Both these concepts are very useful and we need them both. But it
>>is important not to mix up the two. Both approaches are relatively
>>common, with the rdfs:Class approach being more commonly used in
>>large scruffy implementations and the set oriented approach being
>>more common in formalizations such as DLs.
>>
>>The important question is, which one do we use to describe concepts
>>like "Person"? My personal preference is for the cog-sci approach.
>>It is more pliable and fairly immune to logical nastinesses like
>>paradoxes. I would also argue that this robustness also makes it a
>>better choice for the SW.
>>
>>Guha
>>
>>[1] Quine, W. V. O. (1969). Natural kinds. In Ontological
>>relativity and other essays, pages 114--138. Columbia University
>>Press, New York, NY.
--
---------------------------------------------------------------------
IHMC (850)434 8903 home
40 South Alcaniz St. (850)202 4416 office
Pensacola, FL 32501 (850)202 4440 fax
phayes@ai.uwf.edu
http://www.coginst.uwf.edu/~phayes