Re: FAQ: stratified class hierarchies vs. RDFS

>  On Tuesday, June 25, 2002, at 02:20 AM, pat hayes wrote:
>
>>>Guha,
>>>
>>>Responding completely intuitively without a set of proofs to match,
>>>
>>>It seems that the fundamental difference between classes and sets
>>
>>Please, there is no fundamental difference between classes and 
>>sets. This is why I reacted to Guha's message. This isn't the right 
>>way to talk (or think), because it doesn't make sense. Classes and 
>>sets aren't like cows and sheep. Classes *are* sets (well, class 
>>extensions are sets, to be exact); the difference is just that not 
>>all sets are classes. It's more like cows and Jersey cows.
>>
>
>Yes exactly... though I understood that it was that not all classes 
>are sets. Sets are well-behaved things, and classes not?

Well, I honestly do not quite know what Guha was meaning in his 
message which started this. I don't know of any notion of 'class' in 
which a class isn't a set (or has an extension which is a set, or 
some other technical variation). (Well, except the Bernays notion of 
a 'proper class', but I'm sure that isn't what Guha meant.)  So if 
'classes' here aren't well-behaved, I really do not know what Guha 
meant by that. Whatever it was, I don't see how it relates to 
whatever Quine was talking about. (Since Guha is off-web at present, 
this isn't quite fair as he isn't able to respond. Sorry.)

>>>is that in set theory (as PPS said in his recent tower of babel 
>>>paper) one expects, of course, a well-defined membership 
>>>function for any set.  For any object and any set, the object is 
>>>or is not in the set.
>>
>>Right. Similarly for whatever Guha is calling classes, unless he 
>>(or somebody) is going to do some remarkable new work in the 
>>foundations of meaning. For example, something either is or is not 
>>a rock, say, or a cow, or a herbaceous border. (Which is not to say 
>>that there isn't room for disagreement in any of these cases, of 
>>course, even about what the category names mean. It's just a way of 
>>saying that once you have decided what the category names *do* 
>>mean, and what the facts *actually are*, that you can express that 
>>mutual understanding in terms of what things you count as being in 
>>what collections of things.)
>>
>>>This is a technique we can use so long as we restrict ourselves to 
>>>talking about people.
>>
>>??It has nothing to do with people. Members of sets can be 
>>absolutely anything.
>
>Sorry, when I said  "people", I meant what you meant when you said 
>"rock, say, or a cow, or a herbaceous borders".
>
>(But not classes)

No, if classes exist then they can be in sets. Z-F set theory is 
about as generous and unrestrictive as any theory can possibly be: it 
allows sets to be sets of ANYTHING.

>
>>>We have to avoid the self-reference.
>>
>>??avoided what self-reference? There isn't any self-reference 
>>problem here. The issue that started this thread was about a set 
>>containing itself as a member, not about something referring to 
>>itself.
>>
>
>It seems there is a strong analogy between the Russell paradox for 
>sets and the liar paradox with self-reference.

Indeed, Russell used the liar paradox as an inspiration. All these 
tricky results, including Goedel's theorem, are variations on the 
liar. But there are devils in the details, see below.

>
>When you say,
>
>"Consider the set of all sets which [are not members of themselves]"
>
>You could be read as saying,
>
>"Consider the set of x such the following statements are true:
>   [ x is not a member of itself]".
>
>I put the [brackets] around the last bits; you don't need to consider 
>the paradox case to see the analogy.    Assuming that classes have 
>well-defined membership functions is the same as assuming that each 
>of a parameterized set of statements is true or false.

It's analogous to it, but it's not the same. It becomes the same if we 
assume that every open sentence (ie with one variable) defines a set. 
That is the comprehension principle which Frege had in his original 
1895 set theory and which Russell used to explode it. All set 
theories since then have been very, very careful to NOT have an 
unrestricted comprehension principle.
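
In case a concrete rendering helps, here is a minimal sketch (Python, 
purely as an illustration of the comprehension point, not anything 
from Frege or KIF): read a class as its membership predicate, and 
unrestricted comprehension as the licence to turn any one-variable 
test into such a predicate.

def is_russellian(x):
    # "x is not a member of itself", with membership read as
    # predicate application
    return not x(x)

# Asking whether the Russell 'class' contains itself would need
#    is_russellian(is_russellian) == not is_russellian(is_russellian)
# and evaluating that just recurses without terminating (Python
# eventually raises RecursionError) - the computational shadow of
# the contradiction.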

>This is where the paradox breaks down, when you do not assume that 
>the class is well defined, or that every sentence must be true or 
>not true.

That route has been tried, and it doesn't get rid of the paradox, 
only delays it. (You just have to define the class of classes that 
either do not contain themselves or are undefined, basically. No matter 
how you wing it, there's going to be some notion of 'un-true' which 
can be used to re-create the paradoxical case. Quine says somewhere 
that you can't get rid of classical negation, really, you can just 
re-name it. If you reject the excluded middle then you don't change 
classical negation, you are just talking about something else.)

However, there are routes that DO get rid of the paradox, and the 
best one seems to be to give up on the idea that any open sentence 
defines a set(/class), i.e. give up on the comprehension principle. 
That leaves your logic undamaged; it only affects your set/class 
theory (which most people don't particularly want to use in any case) 
and it seems to make intuitive sense. And you only have to give it up 
a leeetle bit, eg the KIF 'wtr' trick allows you to make almost any 
sentence define a set; you just have to be careful about sentences 
that contain 'not a member of'. The world can get along pretty well 
without ever talking about the set of things that aren't members of 
something, it seems.
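
(A loose analogy, mine rather than KIF's: Python's comprehension 
syntax is already 'restricted' in roughly the ZF separation sense - 
you may only collect those members of some set z you already have 
that pass a test, never 'all x whatsoever such that phi(x)' - and 
that is the kind of restriction that blocks the Russell construction.)

z = {1, 2, "cow", 3.5}
phi = lambda x: isinstance(x, int)

y = {x for x in z if phi(x)}   # separation-style comprehension: fine
print(y)                       # {1, 2}
# There is no syntax for collecting *every* x with phi(x) over the
# whole universe; that missing form is unrestricted comprehension.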

>>>In a web-like context, self-reference comes up all the time, as a 
>>>direct result of anything being able to refer to anything.
>>
>>Well, this is another discussion entirely, but I fail to see why 
>>anything IS able to refer to anything, on the web or off it. Most 
>>things do not refer at all, in fact.
>
>Sigh. We are just not communicating here.
>Any time you use a URI twice in different places, a reference can be 
>thought of as being made. When you use an HTTP URI then you are 
>using one of an unbounded set of URIs which are mentioned in the 
>specification of HTTP, which you are required to agree to when you 
>play on the semantic web. A lot of those things are documents.

OK, sure. If you had said that any *document* can refer to anything, 
I wouldn't have squawked.

>  I know you have a problem with any real-world meaning being carried by the
>predicates. But that is how the layering is done.  But that was a 
>different thread.

Right, that is orthogonal. And in any case I think I now get your 
point on that thread and kind of agree (more on that later.)

>If you don't accept the use of a URI as a reference to that which it 
>identifies,

Well, I do have (and always have had) a serious problem with the 
conflation of URL-type 'reference', ie basically following a file 
path to something in a computer, and URN-style 'reference', which is 
just naming or denotation. These seem quite different to me, and it 
seems silly to get them muddled when they weren't muddled in the 
first place. I can use names to refer to things like planets or 
galaxies or people or integers, none of which can possibly have URLs. 
But this whole issue may be at a tangent to the present conversation.

>then discussion of layering is quite out of the question -- we would 
>still not have a basic philosophy of specifications nailed.
>
>>>(so for example any formula being able to cite any other formula or itself).
>>
>>?? How did that happen, again? Any formula OF WHAT can cite any 
>>other?  And even if you are right, and all formulae can cite all 
>>other formulae (I really have no idea what that means, but never 
>>mind) that only gives us what might be called universal powers of 
>>citation; it doesn't provide anything like universal powers of 
>>*reference*.
>
>(If a formula is expressed in RDF/XML and on the web, then you can
>create an Xpath expression for it.
>You can define an RDF property which relates an xpath expression
>to the formula it parses to within the context of the document it came from.
>So you can construct a reference to that formula.  People have done it and
>will do it.)

I'm sure people do this kind of thing with Xpaths, don't get me wrong. 
But it isn't obvious that this is best called a *reference*. Or at 
any rate, we need to be careful about what exactly that means, to say 
that an Xpath expression expressed in RDF (btw, how did *that* 
happen, exactly? Wouldn't the engine need to use some extra-RDF 
knowledge to make the connection?) is a 'reference to' the formula. 
Sure it's a way of accessing or reconstructing the formula, but it 
still may not be best considered as *naming* the formula. And OK, I'm 
being a bit scholastic here; but really, we have to be a bit careful 
with this kind of terminology, since the distinction between 
contradiction and paradox turns on exactly this kind of care.
I'm not being pedantic for pedantry's sake: it's more like wanting to 
keep hold of the intellectual handrails, because I'm scared of 
falling.

>
>>>We can't say that every formula is or isn't true.
>>
>>Yet another discussion entirely. Suppose for the moment that any 
>>formula can refer to any other. It doesn't follow that formulae 
>>don't have a truth-value. (If any formula can assert the truth or 
>>falsity of any formula, then indeed one can reconstruct the liar 
>>paradox, by writing a formula which asserts of itself that it is 
>>false. That still doesn't imply that formulae don't have 
>>truth-values, though it could be taken to be prima facie evidence 
>>for that conclusion. But the central problem there is being able to 
>>assert that something is false, not the act of reference itself.
>
>I am happy about that.  I am happy with saying something is true.  I 
>just haven't yet figured out what it means to say something is false.

It means it isn't true. :-)

>  (Except for a wide class of statements which have converses, such 
>as  "a>b". )
>
>>Truth-predicates are indeed dangerous; the moral is not to confuse 
>>truth-values with truth-predicates.) (See PS below)
>>
>>>So we can't use set theory for the semantic web.
>>
>>And even if formulae didn't always have a truth-value - which, to 
>>repeat, fails to follow from the above line of reasoning about five 
>>times over -  that in turn wouldn't have any consequences for set 
>>theory. It might have some influence over *which* set theory was 
>>suitable, but it wouldn't rule out set theory as such. It's a bit 
>>difficult to even know what that would mean: it's a bit like saying 
>>that we can't use language. Set theory in one form or another is 
>>about as fundamental an idea as possible; it underlies all of 
>>language and all of mathematics. It's just the idea of being able to 
>>talk about 'collections' in some very generic sense.
>>
>
>Ok, so you are saying one can use model theory but avoid assuming 
>that for example every class has a complement.

Right. But classes not having complements isn't the same as sentences 
not having truthvalues.
>
>>>[..]
>>>  Nevertheless, people have tried it and guess what -- run up 
>>>against paradox.
>>
>>The only paradoxes that have arisen so far in a web context have 
>>been a direct consequence of damn silly language engineering 
>>decisions based, as far as I can see, on ignorance and 
>>incompetence. Layering, for example, raises no paradoxes as such. 
>>Trying to do semantic and syntactic layering in the same notation 
>>does, but anyone who knows a modicum of logic could have predicted 
>>this, and also what to do about it.
>
>What you are saying is clearly (if you don't mind me adopting your 
>tone to set a balance

Fair enough. (While we are on the topic, I ought to have already 
apologised for the, er, tone. Sorry, I get heated about this stuff.)

>) complete nonsense and obviously so.  There are so many examples of 
>layering using the same syntax which actually are crucial to the 
>engineering you are using to reply to this note.
>The C language, and the C language with run-time library. Same 
>syntax, extra semantics, extra power.

No no, wait. Layering one programming language on another is one 
thing (actually several things.) Layering one logical (descriptive, 
assertional) language on another is a different thing. That is the 
RDF/OWL layering snafu, getting those different things muddled up. It 
would be easy to make OWL a programming-style extra layer on RDF. It 
would be easy to make it a logical extension. It's close to impossible 
to have it be both at the same time, without some kind of fudging.

>Have you never come across two logics with the same syntax but where 
>one has a subset of the other's axioms?

Sure, of course, though we don't usually call that a different logic. 
But we can prove that  OWL (or DAML) can't be layered onto RDF in 
*that* way: that follows from Herbrand's theorem (or from the 
completeness theorem, if you prefer.)

>  Take any logic, subtract one axiom, and you have another logic 
>which can be regarded as a layer underneath it. So, by example, 
>layering is possible, even though those who know a modicum of logic 
>might have predicted that it were not.

Axiom-extending layering in this sense is trivial. Layering in the 
sense in which FOL is layered on propositional logic, where the upper 
layer extends the *grammar* of the lower layer, is pretty easy. 
Layering where the lower layer defines a virtual machine which 
interprets the upper layer is routine, and the C/C-library kind of 
layering is easy. But these aren't all the same notion; and that is 
the RDF layering problem, because some folk want it to be one or the 
other, and some want it to be all of these at the same time.

>It is, in a more general way, layering which allows IP to run over 
>ethernet, TCP to run over IP, and HTTP to run over TCP.  Does the 
>introduction of the TCP spec make the IP spec invalid?  It sure 
>increases the number of inferences you can make about a packet.  but 
>it adds axioms, not deletes them.

It doesn't add axioms because IP isn't an assertional language. There 
are a whole lot of different notions of layering all getting muddled 
up here.

>So while some may try to blame the concept of layering for the 
>presence of a paradox problem in OWL, I would look elsewhere.

Well, first, there isn't a paradox in OWL itself; but in any case, 
the question is, WHICH concept of layering? That is the issue. There 
isn't any difficult technical layering problem as such. We can do it 
any number of ways, but we have to choose which way to do it. We just 
can't do it all ways at once.

>Specifically, I would look to the logic of the language, which seems 
>to have acquired  a "not"  in various forms (such as cardinality and 
>complement) in daml+oil.   There is a built-in axiom that the class 
>exists which has itself as a member and has no members.

(Did you mean no *other* members?). But DAML itself is perfectly 
consistent, and so is OWL, and so is RDF. The snag arises when we 
have to *implement* the others in RDF and also have them 
*semantically extend* RDF at the same time. To repeat, either is 
easy, but it's very hard (impossible?) to do them both.

>You say this is a product of the syntax used. Well, what you are 
>saying must make sense in some form.

That's very gracious of you.

>It seems that daml+oil should be stripped of these, until they can 
>be introduced in a guarded fashion.  This leaves us with a positive 
>logic except in certain specific cases, where certain predicates 
>have converses, and classes have complements within a specific 
>larger set.

I'll agree with you there. I would prefer DAML to have a relative 
complement rather than a simple complement operator. Still, it would 
make the language more complicated, and it's complicated enough 
already, one could argue. But look, all this has to do with 
*classes*, not with the logic itself. There's no need to restrict 
ourselves to a positive logic: that would rule out things like 
log:implies, in any case.

>(I understand from hallway conversations that such forms of logic are 
>rather fringe for the logic community, and may be considered too 
>researchy,

Well, it's more that they are too boring to want to do any further 
work on. Multivalued logics were all the rage in the 1930s. (I once 
started to write a PhD thesis on 3-valued logic but kept falling 
asleep.)

>That would suggest one should split the simplified positive logic 
>from the rest at this point, so we can get on with engineering 
>things which don't need the "not".)

Really, "not" is harmless. There is NO PROBLEM with "not". In fact, 
it is very hard to do without "not" in one form or another.

>(It is possible that the web-like constraints are in fact new ones 
>in many ways.

I agree there are new aspects, but I don't think they are likely to 
require new logics. At any rate, that should be an option of last 
resort. At one time people thought that quantum theory needed a new 
logic, but that turned out to be a mistake. It is really *very* hard 
to alter basic logic without getting into a hell of a, er, logical 
mess.

>  here we have all these machines which will go and read documents 
>and absorb new knowledge, new axioms, at every stage.

Well, isn't that what people have been doing now for a long time, 
after all? And the logics that we are talking about are all distilled 
from what human beings find so reasonable that they can't think of 
any counter-examples.

>Maybe we should have said in the first place that "RDF is a 
>framework for an infinite set of logics, where for every identifier 
>which can be used as a predicate there exists a set of axioms, a 
>subset of which may be known by a given agent, and of which 
>typically a subset is expressed in RDF".)  Maybe the whole 
>starting of the webont group was just ill-specified.
>
>>  Several of us did predict it and tried to do something about it, 
>>but were overruled. Do not attribute these elementary problems of 
>>bad design to set theory.
>
>So you would call it bad design, I assume.

Well, maybe I got a little testy there. But I think the Webont 
charter has some decisions incorporated into its wording that are bad 
design, yes, and those have been giving us many (all?) of the 
problems.

>
>>Webont is like a group of auto designers who have been told to 
>>design a car which is both a two-stroke and a four-stroke. After 
>>shaking their heads for a while they have got down to work, and 
>>have been arguing about whether to put in two engines, or whether 
>>it's best to put in an oil filter to prevent the 2-stroke oil from 
>>clogging the fuel injection system, or maybe to have two separate 
>>fuel systems. It's not easy to decide what is the best or most 
>>rational way to proceed. But one shouldn't conclude that there is 
>>something fundamentally wrong with auto engineering; the real 
>>problem is that the people who wrote the specs didn't really 
>>understand what 'stroke' meant. (Under these circumstances, it is 
>>particularly irritating to be lectured by the management on the 
>>need to be thinking outside the box, and why we need to re-design 
>>the suspensions and do away with the steering wheel. )
>
>It seems there must be a lot of communication problems still.
>
>To me it looks as though the group was asked to make a car without 
>an engine, and then to make an engine, which would fit in the car. 
>"Oh no!" you cry, you asked us to design a car without an engine! 
>We couldn't make an engine for it or it just WOULDN'T BE A CAR 
>WITHOUT AN ENGINE any more! Can't you see???!!!"    And here now the 
>"engineers" are coming to the "management" and telling them how 
>stupid they are ...

Well, maybe my metaphor was rather colorful, but I honestly cannot 
follow this version of it. The two engines were the two ways of 
fitting OWL together with RDF, and so "no engine" would be OWL's 
being free to invent its own syntax and ignore RDF, which is 
explicitly forbidden by the Webont charter.

>>>Remove the assumption that every class has a complement,
>>
>>There is no such assumption in set theory. Complementation isn't 
>>even mentioned in the ZF set theory axioms.
>
>We have to remove it from DAML.  The PPS problem relies on (among 
>other things) the assumption that a class exists for every 
>combination of restrictions.

Yes, but that isn't there because of DAML's language constructs: it's 
there because there doesn't seem to be any other way to arrange that 
the RDF(S) entailments that are required in order to support one 
notion of layering (in particular, the use of daml:list style 
encodings of DAML syntax) are in fact valid entailments (as required 
by the other notion of layering). That is where the trouble arises, 
and it would happen with any higher language that uses any syntactic 
construct larger than a single triple.

>
>>>  leaving the fact that a B is a complement of A being a statement, 
>>>not an axiom.
>>
>>It isn't an axiom.
>>
>
>Good.  But the PPS "paradox" isn't a paradox unless each line of his 
>ntriples can be derived from, or is, an axiom.

The paradox comes when he invokes a rather generous comprehension 
principle to make the syntactic layering semantically correct. If he 
didn't do that, the paradox would vanish immediately. He only does 
that because the Webont charter is worded in a way that seems to 
force him into doing it. Peter knows this is a kind of reductio ad 
absurdum: he isn't seriously suggesting that we do layering in a way 
that produces paradoxes: he's just saying that the *way* of doing 
layering that seems to be forced on us by a direct reading of our 
charter will force us to use techniques which produce paradoxes. 
There is no fundamental problem with layering itself, eg if OWL's 
syntax could extend RDF syntax there would be no problem, or if OWL 
were free to re-write RDF meanings like N3 does, there would be no 
problem.

>
>>>[...]
>
>>>seems to need real changes. The "not" has to go. That means 
>>>cardinality and complement have to go. Maybe more stuff.
>>
>>This is dangerous nonsense. Sorry to be so blunt, but you do not 
>>seem to know what you are talking about here, nor what the 
>>consequences of what you are saying really are. If cardinality has 
>>to go, then arithmetic goes with it. Even on the web, it is 
>>probably going to be quite useful to know that 1+1=2.
>
>Do you think you could create a system in which 1+1 was two, and I 
>could say that your remarks were true (assuming we end up agreeing

(Your diplomacy, in reply to my rudeness, makes me blush.)

>) and the system will not fall over by the PPS problem?

Sure. All we need to do is to find some way to arrange things so that 
when some RDF triples are used to encode some OWL syntax, those 
triples are not required to be asserted in the same way as other RDF 
triples are. We need to somehow free up part of an RDF graph to be 
used to do implementation-style encoding, and not have it, as a 
side-effect, make assertions that get in the way of the OWL 
interpretation of the syntax. There are many ways to do this (all 
with snags): using 'contexts' of one kind or another to hide them in 
(but RDF doesn't have contexts); use uriref pointers to other RDF 
documents and dereference them cleverly, as Jos does in Euler (my 
personal favorite; but that would mean that an OWL kb was a set of 
RDF documents, which  many folk dislike); use some kind of syntactic 
marker to 'switch off' the RDF content (the dark-triples 
front-runner); or just, as Drew McDermott suggests, ignore it 
altogether and admit that OWL/DAML isn't, strictly, RDF (which is 
widely thought to violate our charter.)

>>Getting rid of complementation in a set theory is not the same as 
>>getting rid of propositional negation.
>
>The analogy is the law of the excluded middle.  Classically,  if p 
>or not p is an axiom, we have trouble.

Well, I don't see any trouble from that. On the contrary, I'd say: 
NOT having that gives rise to all kinds of trouble. For example, 
almost every inference rule ever invented boils down to the idea that 
if you know (not P) and you also know (P or Q), then you can infer Q. 
And without something like a principle of excluded middle, this is a 
non-starter.
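
(If you want it spelled out mechanically: here is a two-line 
brute-force check - just an illustration - that the inference is 
classically valid, letting P and Q range over the two classical 
truth-values.)

from itertools import product

# Whenever (not P) and (P or Q) both hold, Q holds.
valid = all(Q for P, Q in product([True, False], repeat=2)
            if (not P) and (P or Q))
print(valid)   # True: disjunctive syllogism is classically valid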

>This is very similar to the problem in which anything must either be 
>in a class or not be in a class.  They may be quite different, but 
>they play the same role in the construction of the paradox.

No, really, they don't. Excluded middle isn't at all paradoxical. The 
thing in sentences that corresponds to the Russell case is more like 
self-application where you say things like P(P), ie P is true of P, 
and then you negate. And even *that* isn't paradoxical, it turns out, 
if you are a bit careful about what counts as a predicate.

>
>>  Consider the assertion that I am not a dog. All that says is that 
>>it is false to say that I am a dog. The corresponding 
>>complementation asserts that a class of all non-dogs exists, and 
>>that I am a member of it. The difference between them is that 
>>assertion of existence of the class of non-dogs. That is indeed a 
>>very odd claim to make: what are the boundaries of non-doggishness? 
>>Are black holes non-doggish? Are unicorns non-doggish? The only way 
>>to tell seems to be to ask the equivalent questions about being a 
>>dog, and then use classical negation to swap the answers. But the 
>>bare use of classical (or any other) negation makes no claims about 
>>the existence of anything, class or otherwise: it just says that 
>>some proposition isn't the case.
>>
>
>Isn't the class of non-dogs defined as those x for which "x is a 
>dog" is not true?

Sure, but the question would be, do you want to allow such a 
definition to in fact define a class? The point I was making was that 
you aren't forced into admitting that everything you can say must be 
a definition of a class; in fact, you probably DON'T want to say that.

(BTW, the reason I am so confident here is that the restrictions that 
guarantee consistency are needed anyway for pragmatic and 
computational reasons. Unrestricted comprehension is a computational 
nightmare, since almost any expression can match almost any predicate 
or relation variable. Suppose for example you were trying to prove 
that Joe and Bill had some property in common, which sounds like a 
sensible question to ask. But if we have unrestricted comprehension, 
then the property of either being Bill or being Joe:
(lambda (?x)( ?x= Bill or ?x=Joe))
will do the job. And there's no way of saying that that is a 
ridiculous property, since obviously the expression it is made from 
is a perfectly good sentence. In more complicated reasoning, you find 
yourself searching through properties like having more hairs than the 
oldest plumber in Ohio, or weighing more than a Chinese carpet.)
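
(Rendered directly, purely as an illustration: under unrestricted 
comprehension the throwaway disjunction below counts as a 'property' 
that Bill and Joe share, and it is exactly this kind of junk that the 
search gets swamped by.)

has_it = lambda x: x == "Bill" or x == "Joe"
print(has_it("Bill"), has_it("Joe"))   # True True - a vacuous 'common property'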

>Classically, it must be either true or not true for any x. The Class 
>has a complement. A black hole is not a dog. A unicorn is not a dog. 
>I don't know that "dog" was  a good example.

Sure, non-dogs is probably a pretty harmless set, in practice. But 
non-anythings is always a bit tricky as a set specification, since it 
seems so obviously open-ended. It's a bit like using a selection tool 
in Photoshop that means 'all the rest of the picture' when you don't 
really know where the edges of the picture are.

>
>>In any case, languages without classical negation have been 
>>investigated thoroughly, and they have no useful properties in this 
>>context. They do not avoid the old paradoxes (if you are still 
>>worried about them; I would suggest it would be more useful to just 
>>stop worrying (**)) and they do not provide any useful increase 
>>in expressiveness.
>>
>>There might be a case for actually *increasing* the expressive 
>>power of the logic by introducing a modal necessity operator. That 
>>would then enable you to take advantage of a well-known mapping of 
>>intuitionistic negation into the modal logic S4, in effect by 
>>treating not-P as meaning necessarily-not-P. Intuitionistic 
>>negation indeed does not satisfy the excluded middle axiom, ie 
>>there are P's for which (P or (I-not-P)) does not hold.
>
>So I won't understand how to build this thing without studying modal 
>logic, eh?

Well, it's easier than intuitionistic logic, believe me. And there are 
independent reasons for getting involved with modalities (tenses, A 
said that B, rigid names, etc.) in any case.

>>>One can, of course, build up set theory in restricted areas.  For 
>>>a huge number of situations, there are limitations which allow it 
>>>to work.  This I think of as being another form of closed world 
>>>assumption.
>>
>>I think what you mean is that there are cases where one can 
>>identify classical and intuitionist negation. Basically, 
>>intuitionist negation is rather like taking not-P to mean not that 
>>P is false, but that it can be positively *shown* to be false, 
>>using the same techniques that are used to show that something is 
>>true. (Those techniques have to not themselves use the law of 
>>excluded middle, of course; this can all be axiomatized in various 
>>ways.)  The problem that I have with any such suggestion, however, 
>>is that it seems quite reasonable to want to be able to state 
>>simple negative facts, such as that I have no money, or that my 
>>father's name was not Petrovich.
>
>What do you mean, in the web context, by the fact that his name is 
>not Petrovitch?

The same as it means in any other context. Facts don't stop being 
facts just because they are on the Web.  The Web is just a lot of 
interconnected documents, and documents have been around for a very 
long time.

>  Your father, to whom I refer here as Petrovitch, may have many 
>names.  We can carefully define your meaning of "name" here so that 
>your point stands.  We can talk about a specific name given to 
>Petrovitch when his birth was registered.

Right, I meant name in the ordinary sense. I didn't mean that it was 
impossible to refer to my Dad as "Petrovich",

>  In fact, we mean that the registration document does not say that 
>his name was "Petrovitch".

Well, it means a bit more than that. For example, my birth 
certificate says that my name is "Patrick John Hayes" , but the only 
person who calls me that is my mother. It would be more correct to 
say that my name is "Pat".

>  Now that is a fact we can get from parsing it.  That predicate, the 
>inclusion of a phrase in a birth registration, has a converse.  Like 
>many things which you want to be able to talk about -- that you 
>didn't buy an orange yesterday, and so on.  A whole world in which 
>classical logic works well. But it does exclude formulae about 
>formulae.

No, it really doesn't. Classical logic applies throughout all of 
mathematics, on the most abstract things you could imagine. Formulae 
are not all that recondite, in any case.

>So we should make the use of this logic very natural for these cases, 
>but not be able to use it on abstract things which will get us into 
>trouble.
>
>>  In making such negative assertions, I am not claiming to be able 
>>to *prove* them mathematically: I'm just making a claim about 
>>simple facts, about the way the world is.
>
>(If I said that to you I would not get away with it!  That's all RDF 
>is ... simple claims about how the world is ;-) )

Well, OK, once the meanings of the nonlogical symbols are somehow 
specified.  So, wouldn't it be nice to have 'not' in RDF-2 ??

>Your simple facts about "the way the world is" work classically 
>because they boil down to physical measurement of a thing, person, 
>and so on.

No, it works because the opposite of 'true' is 'false'. That's nothing 
really to do with how the truths are *decided*.

>These measurement predicates  have converses.  The moment you start 
>getting abstract it fails to be so obvious. "I am a human" is easy. 
>"I am an optimist" is getting outside the range.

Well, the point is that whatever 'optimist' means, it's still true 
that either Joe is an optimist or he isn't. Either he is a foodle or 
he isn't, and I don't need to know what 'foodle' means. All the 
apparently 'grey' cases turn out to be things like: Joe is an 
optimist *some of the time*, which is just a sign that the original 
statement was underspecified, or things like "I'm not sure what 
'optimist' means exactly so I can't say if Joe is one or not", which 
still satisfies the excluded middle (it means something like exists P 
. P(Joe) and similar(P, Optimist), which is vague about P but doesn't 
involve giving up classical logic) and so on. There are some 
troublesome cases, I will admit, to do with how to express genuinely 
vague concepts, eg what counts as being 'in' a mountain range, or 
where 'the outback' starts exactly. But the central logical point is 
that whatever these things mean in any given context, the excluded 
middle - classical negation - still holds for that particular 
meaning. In fact, it holds because of the meaning of 'not', 
independently of the meanings of anything else.

>But as a lot of this stuff will be simply data about oranges 
>(people, rocks, etc) the predicates will work with classical axioms. 
>The metadata about the terms "oranges" and so on will not, because 
>it is talking at an abstract level.

Hey, all of mathematics uses classical logic, and you can't get much 
more abstract than, say, category theory. It certainly applies to 
things like RDF schema.

>So schemas will have to do without classical logic to avoid PPS 
>paradoxes. But that is OK - you don't need to ask whether "orange" 
>has any money.
>
>>  That is classical negation, pure and simple. What is wrong with 
>>being able to do that? A lot of what we know about the world is 
>>based on such negative knowledge, and it is damn useful stuff to 
>>have and to be able to use. In particular, it is very useful to be 
>>able to conclude Q from (P or Q) and not-P, a point made by many 
>>people from Aristotle to Sherlock Holmes.
>
>Aye, and puzzled as many people with the liar paradox.

The liar paradox isn't a product of 2-valued logic; it's a product of 
self-reference and talking about truth. And what Goedel showed is 
that it's the talking about truth that is really the key element 
(because you can get rid of the self-reference by using arithmetic: 
Goedel numbering). The moral is to be rather careful when you find 
yourself talking about truth. Obvious, when you think about it, when 
you can so easily say shoot-yourself-in-the-foot things like 'this is 
not true'.

>
>If I had started so naively you would have shot me down and told me 
>things were not so simple, my lad, or words to that effect.
>
>
>>>  Suppose we subclass classes to classes of individuals which 
>>>cannot be classes or properties.
>>
>>That would be what are known as ur-elements, or individuals in a 
>>traditional first-order logic. OK, suppose we do....
>>
>>>and we subclass properties to those which are not properties of properties.
>>
>>Traditional first-order properties. OK, suppose we do. We are now 
>>in a traditional first-order stratified logic. Many people would 
>>feel more comfortable talking only in this way, but there is no 
>>real advantage, since the more liberal framework can be mapped into 
>>this one.
>>
>>>Then, while we can't write the axioms for DAML,
>>
>>Well, actually, you can. DAML *is* stratified in this way. What you 
>>can't write is the axioms for RDF(S).
>>
>>>we can write a lot about people and telephone numbers and orders 
>>>of mild steel.  So within those restricted environments, the folks 
>>>with set theory like first order systems can go use them.  These 
>>>are the stratified systems, and their kin.
>>
>>Right (although they aren't really restricted 'environments', more 
>>restricted modes of expression) ...
>
>Different ways of looking at it.
>
>>>But the general semantic web logic has to be more general,
>>
>>.....can you say in what way? If we are more generous and allow 
>>classes to contain classes and properties, and properties to apply 
>>to properties, what then? That is the state that RDF(S) is in right 
>>now, with a semantics (model theory) modelled on the Common Logic 
>>(nee KIF) model theory.....
>>
>
>Which is broken, according to Peter.

Really? I haven't heard him say that, and I challenge him (or anyone) 
to justify such a claim. There's a published model theory for it, and 
an easy recursive reduction to what might be called a textbook 
version of conventional FOL. (It's true, Peter doesn't *like* the 
syntactic freedom that CL provides: like a lot of people, he is more 
at home in a more layered kind of world, for essentially 
computational reasons. But that is an aesthetic judgement, not a 
technical one.)
(BTW, just for the record: this freedom that Peter dislikes isn't 
imposed on RDFS by the model theory: it's been there from day one; it's 
in the old M&S quite explicitly. All the MT does is give it a 
mathematical description.)

>
>>>and so cannot have 'not'.
>>
>>Sure we can. CL has 'not', and also has about as unconstrained a 
>>syntax as you could possibly want. You can apply anything to 
>>anything, as many times as you want. Classes (unary relations in 
>>CL) can contain (apply to) themselves, etc. etc. . It has classical 
>>negation, and full quantifiers, and even quantification over 
>>sequences, can describe its own syntax, etc.; and still it is a 
>>classical logic, and still it is consistent and paradox-free. This 
>>is OLD NEWS. Wake up and smell the coffee.
>>
>>>DanC and i have been getting on quite well in practice using 
>>>log:notIncludes, something which checks whether a given formula 
>>>contains a given statement (or formula).  It is a form of not 
>>>which is very clean, formulae being finite things.
>>
>>Sure, weak negations have their uses, as do strong negations. 
>>However, that doesn't enable me to just say simple negative facts 
>>like "I don't have any money" and have people draw the right 
>>conclusions.
>>
>>[..]
>>>In fact there are a lot of things (like log:includes and 
>>>log:notIncludes) which are converses. The arithmetic operators like 
>>>greaterThan, for example.  Things typically defined with domains 
>>>and ranges which have some sort of finiteness.  So to a certain 
>>>extent one can do notty things with such statements.
>>
>>Right, as long as you can assume that things are finite, you can do 
>>a *lot* of things that you can't do without that assumption. That's 
>>the recursion theorem, in a nutshell. However it seems to me that 
>>the Web is one place where that kind of assumption - what might be 
>>called a recursively-closed-world assumption  - *cannot* be 
>>expected to hold, in general. Of course, when it can, then we ought 
>>to be able to cash in on it, as it were. But we can't expect to 
>>build this into the basic SW architecture.
>
>I'm not talking about making the whole world classical, just making 
>statements in  a given schema for a given ontology which allow 
>classical axioms to apply to it.
>
>Just because the web is large and web-like, that doesn't mean that a 
>document on it, when it wants to say that you don't have money, 
>isn't using restricted expressive power (if you like) or talking 
>within a range of concrete things like you and your pockets and 
>coins, or that you can't use the law of the excluded middle in that 
>context.
>
>You either have money or you don't.  Why? Because there is an 
>algorithm for determining the question which terminates with a 
>binary answer.

No, it's nothing to do with how the truth can be *determined*. It's 
because of what 'not' means.

However, this suggests to me that you indeed are thinking in terms of 
a constructive (intuitionist) notion of truth. For you, a sentence 
being true means that some algorithm can *determine* that it is true, 
and being false means that you can *determine* that it is false; is 
that right? Then indeed it might be the case that you couldn't 
determine the answer either way, and so you couldn't determine the 
truth of either P or of not-P. Now, let me ask you this question: 
don't you think it is reasonable to say that you could still 
determine that (P or not-P) was true, even if you didn't know whether 
P was true or not? Can't you tell that formula is true just by 
looking at it? I'm not saying, notice, that you could detect which of 
the alternatives was the true one, only that one or the other of them 
must be. You could toss a coin to decide P, and it wouldn't matter 
because you'd get the same answer for (P or not-P). Right?

If you find this argument unpersuasive, then you need to re-think not 
just negation but also disjunction. I don't mean to imply that this 
is impossible: there are constructive logics of this kind. But it 
involves a much more thorough overhaul of logic than just weakening 
classical negation. And I don't think there is any great utility in 
doing this, since you can get the same effect in a classical 
framework just by being explicit about that 'determines' idea: that's 
the modal logic trick I mentioned earlier. Introduce an explicit 'can 
be determined that' operator as a prefix, say D- . D-P means that you 
can *determine* that P is true; it says more than just that P is 
true.  Here are the basic axioms you need:

P implies not D-(not P)
D-P implies P
D-P implies D-(D-P)
(D-P and (P entails Q)) implies D-Q

That last one is really a rule or an axiom schema.
Now, while ordinary classical logic still applies here, the following 
is NOT always true:

???    D-P or D-(not P)    ???

which I think is close to what you want. (BTW, you get intuitionist 
logic from this by erasing the D- prefix from only the positive 
assertions, and incorporating it into the negation of the negative 
assertions, which is kind of crafty; then that *looks* like a denial 
of the excluded middle.) (You also have to be a bit careful with 
quantifiers, and there are several ways to do that. Intuitionistic 
logic is asymmetric again: it treats existentials as meaning 
D-exists, but universals are left alone.)
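
If it helps to see the failure concretely, here is a toy Kripke-style 
model (my own sketch, nothing official): read D-P as 'P holds at every 
accessible world'. With two mutually accessible worlds that disagree 
about P, the classical (P or not P) holds at both worlds, while 
(D-P or D-(not P)) holds at neither.

worlds = {"w1": {"P": True}, "w2": {"P": False}}
access = {"w1": ["w1", "w2"], "w2": ["w1", "w2"]}  # reflexive, as S4 needs

def holds(prop, w):
    return worlds[w][prop]

def D(prop, w):                 # "it can be determined that prop"
    return all(holds(prop, v) for v in access[w])

def D_not(prop, w):             # "it can be determined that not prop"
    return all(not holds(prop, v) for v in access[w])

for w in worlds:
    print(w,
          holds("P", w) or not holds("P", w),  # True at every world
          D("P", w) or D_not("P", w))          # False at every world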

>  Of course, if we could be sneaky, I could  give you ten dollars and 
>say that you should consider it yours if and only if you have no 
>money.   Do you have money or not?  (assuming you stared with none). 
>Oops. We introduced a recursive function of ownership which 
>complicated the algorithm and gave
>us a paradox again.

Clever, but not a paradox. You just have to be precise about *when* I 
have no money. If I can take it while I have no money then I can take 
it. If I have to have no money *after* I take it, then I can't take 
it.

>  And just when it seemed we were talking about
>
>[...]
>>>
>>>Tim
>>>
>>>PS: Union being something only with sets makes sense, as union 
>>>axioms have to have a not in them.
>>
>>?? Can you run that past us again? Why does union (= logical 'or') 
>>necessarily involve negation?
>>
>>Pat
>>
>>(**) PS . Here's why you should stop worrying about the classical 
>>Russell-type paradoxes. Briefly, they don't arise on the Web, if we 
>>interpret things properly.
>>
>>Suppose for the moment that you are right about universal 
>>'citation', in the sense that any document can point to any other 
>>document and endorse or deny it. Then there is no way to prevent 
>>the following situation arising: two documents A and B may exist, 
>>where A points to B and endorses it, and B points to A and denies 
>>it. (Longer such chains can be constructed, obviously, but they all 
>>boil down to this.) Agreed that this can happen: but is this a 
>>paradoxical situation? The answer to that depends on how you 
>>interpret the endorsing and denying.
>>
>>If we say that this mutual endorsement and denial is done 
>>essentially in a metatheory, by referring to the other document and 
>>asserting of it that it is true or false, then indeed this 
>>situation amounts to a reconstruction of the liar paradox, and that 
>>is genuinely paradoxical if we interpret those assertions of truth 
>>and falsity in their usual sense, which is hard to avoid if we are 
>>indeed claiming to be using the truth predicate, which is kind of 
>>required, by definition, to be tightly connected to the notion of 
>>truth that is used in the metatheory of the language itself.
>
>You assume a truth predicate with a law of the excluded middle, and 
>with no truth predicate you assume no such law.

The truth predicate is just required to correspond to the actual 
truth conditions of the language, whatever they are. You get the same 
basic problems no matter how many truthvalues you have, though it 
takes more work to derive them.

(To be very exact: it is indeed possible to move to a very 'weak' 
3-valued logic - Kleene's logic - which can in a sense express its 
own truth-conditions. It has the characteristic that if any 
subexpression af anything is 'undefined' then the whole expression is 
'undefined'. The trouble with this is that the logic is so weak that 
it can hardly draw any useful inferences at all; its like classical 
logic with a hole in the middle, and everything falls down the hole. 
And as soon as you strengthen it enough to be useful, the problems 
reappear. This idea was popular for a while in the 1970s, but 
Feferman wrote a devastating critical analysis "Towards Useful 
Type-Free Theories", (JSL 1982, reprinted in a collection edited by 
R. L. Martin on 'Truth and the Liar Paradox', Oxford UP 1984) which 
pointed out the problems, and also by the way pointed out that you 
get the liar-paradox-style problems with truth-predicates even in 
constructive and intuitionist reasoning.)
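
(To make the 'hole in the middle' vivid, here is a rough rendering of 
the infectious-undefined tables just described - my own sketch, with 
None playing the third value.)

def k_not(p):
    return None if p is None else (not p)

def k_or(p, q):
    if p is None or q is None:   # any undefined part infects the whole
        return None
    return p or q

print(k_or(True, None))   # None: even "true or undefined" is undefined,
                          # which is why so few useful inferences survive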

>I would just allow you to use a truth predicate  where  true(p)  <-> p .

That doesn't make sense as written, but I think I know what you mean: 
if square brackets are quasi-quotes, then its: true([p]) <-> p, like 
Tarski's famous example, " 'Snow is white' is true iff snow is 
white", right? Sure, that is exactly what any truth predicate has to 
be like.

>You can do that much without problem.

No, you cannot! That is what Tarski and Goedel were fussing about. In 
fact, you can't do it consistently in any reasonably expressive 
language: the metalanguage must be more expressive than the language 
it is talking about. Adding a truth-predicate bumps the language up a 
metalevel and increases its expressive power. The paradoxes arise 
when you try to force this hierarchy into a single level. (Well, 
that's the 'standard' view, in any case. Several people have tried to 
find coherent frameworks which allow one to do without the strict 
Tarski hierarchy, eg Feferman. But it's not something you can just do 
casually, you have to pick your way through the minefield.)
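
(The same trap in two lines, illustration only: take 'true' to be 
evaluation and build a sentence that asserts its own untruth.)

true = lambda s: s()
liar = lambda: not true(liar)
# true(liar) would have to equal (not true(liar)); evaluating it just
# recurses without end, which is the operational face of the
# inconsistency being fussed about here.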

>It is when you have that p must be true or false you get a problem.

No, that is NOT the central problem. If you don't believe me, read 
Feferman (and Montague and Aczel).

>It is not the predicate itself - it's what you assume must come with it.
>
>>However, that is not the usual way of interpreting endorsement or 
>>denial, nor the most convenient or natural way. Suppose we use a 
>>more natural reading, in which to endorse something is to simply 
>>assert it, and to deny it is to endorse its (classical) negation. 
>>One way to think of this is that A's pointing to B amounts to A's 
>>'importing' the content of B into itself, and B's denial of A is an 
>>importing of not-A into itself, ie the content of A, but inside a 
>>negation. (Another way is to think of it the way that Donald 
>>Davidson in his essay "On Saying That" suggested we should think of 
>>indirect reference, as a kind of demonstrative, where A says, 
>>pointing to B, "I agree with that" and B says, pointing to A, "I 
>>deny that". ) Then the situation described is one where the content 
>>of A includes B and the content of B includes not-A, ie the content 
>>of them taken together can be summed up by the propositional 
>>expression  ( (A => B) and (B => notA) ).  Notice that there are no 
>>truth-predicates involved in this, and no expressions are mentioned 
>>in a metatheory: they are simply used with their ordinary meaning, 
>>using the ordinary assumptions of the language.  This is now 
>>logically equivalent to A asserting P and B asserting not-P for 
>>some P; they simply disagree, is all; so that taken together, the 
>>two assertions amount to a contradiction. This is a sign of 
>>disharmony - they can't both be right - but it is not even remotely 
>>paradoxical. The only way it differs from a simple P-vs-not-P 
>>contradiction is that it takes one or two extra inference steps to 
>>uncover. (Actually, depending on what else is or isn't in the 
>>documents, you could come to the conclusion that it is A that is 
>>making the contradictory claim here, and B is just agreeing. But in 
>>any case, there is clearly nothing paradoxical involved; and there 
>>is no way to prevent people from publishing contradictions in any 
>>case, if they can use negation.)
>>
>>The paradox is averted precisely by avoiding the apparently 
>>innocuous, but in fact very dangerous, step of using a truth 
>>predicate to make a simple assertion. Asserting P is not the same 
>>as asserting true('P'), rather in the same way that a popgun isn't 
>>the same as a Uzi. But there is no need to make this move into the 
>>meta-language in order to simply assent or deny something that is 
>>already in a form which admits assent or denial; and no point in 
>>doing so, since one has to immediately get back from the meta-level 
>>once again in any case. If you avoid this dangerous and unnecessary 
>>maneuver, there is no need to be worried about the liar paradox and 
>>no need to feel uncomfortable with classical negation.
>
>You can still say that A says false(B) and  B says true(A) and 
>consider that we just have disharmony.  It wasn't the truth 
>predicate, it was the law of the excluded  which you slipped in, 
>surely?

No. The problem comes from mixing up assertions, which just plain 
*are* true or false but don't talk about it,  with meta-assertions 
(which talk *about* truth). The point being that the truth-predicate 
isn't just any old predicate; it is required *as part of the very 
definition of the language itself* to faithfully reflect both the 
truths and the non-truths exactly (the <-> in your formula). And it's 
that exact correspondence that provides the rigidity which forces the 
situation into paradox. If we are just making simple assertions, they 
can just disagree. But that truth-predicate twists a not-true into a 
true-not, and then we are left chasing the truthvalues around in 
never-ending circles. Here's an analogy that just occurred to me: if 
you twist a piece of paper, the ends might not be aligned, but it's 
still just a piece of twisted paper with two sides. But if you glue 
the ends together, it becomes a Mobius strip, and then its back *is* 
its front. The truth predicate is the glue.

And I didn't "slip" that law in, by the way; it is just there: its a 
consequence of what 'not' *means*.

>You can take Peter's paradoxical 7 statements in RDF and say they 
>are not consistent.

Right, they are not consistent. What makes it paradoxical is that in 
order to get the layering to work in all ways at once, that 
contradictory formula has to be *derivable* (or so Peter claims: more 
exactly, he says that the same principles that would be needed to 
derive the cases we do want, such as his intersection inference, will 
also produce things like his 7-triple paradox).

>There is no paradox.  That is what I originally thought when I 
>looked at it.   It is only when you indicate that there is some 
>axiom set in OWL which allows you to generate them from nothing that 
>you have a problem.

Well, the problem is how to generate the ones you must have, without 
also generating the ones you definitely do not want to have (like 
Peter's example); and that, in turn, is a problem in that context 
because the rules that specify which ones you get have to be 
stateable in RDF (not OWL), but there isn't enough context available 
in an RDF graph to state them coherently. You can't just say, infer 
all the container triples you need but don't infer any bad ones; you 
have to actually give some rules. That is a snag for Webont, but it's 
nothing basic or fundamental, and it's nothing to do with these 
weighty issues of truth or redesigning the foundations of logic. It's 
a nasty little technical problem caused by a weirdness in the RDF/OWL 
specification.

>(I agree that using two documents didn't change the problem in that way. 
>Using two documents prevents people from going down the path of 
>first preventing self-reference, and then imposing some 
>stratification regime to prevent the loop, which regime then shows 
>up as impossible to impose on the web. When you look at two 
>documents, it is obviously not web-like to try to stratify it to 
>prevent it being cyclic)

I agree with you there. These loops will happen. My point was that 
they aren't anything to worry over too much. That is, they might pose 
a practical problem and require some new engineering ideas, but they 
don't really bring up these deep issues of truth and self-reference, 
and in particular they are not the liar paradox and do not require us 
to abandon classical logic.

Pat

PS. I owe you a response on the other thread.


-- 
---------------------------------------------------------------------
IHMC					(850)434 8903   home
40 South Alcaniz St.			(850)202 4416   office
Pensacola,  FL 32501			(850)202 4440   fax
phayes@ai.uwf.edu 
http://www.coginst.uwf.edu/~phayes
