Re: What do the ontologists want

> >For an example, let me introduce a propositional logic and provide a
> >rule R which says that given triple <a,b,c> anyone may infer triple
> ><d,e,f>.  This logic is not very expressive; it does not even allow
> >conjunction in the premise:
> >
> >    <R, premise, RP>
> >    <RP, subject, a>
> >    <RP, predicate, b>
> >    <RP, object, c>
> >    <R, conclusion, RC>
> >    <RC, subject, d>
> >    <RC, predicate, e>
> >    <RC, object, f>
> >
> >Each of these triples is true itself, while also building a structure
> >for us.
> 
> How does this convey the meaning that you indicate, ie that <d,e,f> 
> can be inferred from <a,b,c> ? It simply says that some things exist 
> called 'R', 'RP' and 'RC', which stand in some undefined relationship 
> to a, b, c, and so on. The RDF data model provides no further 
> meaning, and the model theory for RDF provides no further meaning. So 
> no inferences are sanctioned.

Somehow we're not understanding each other about layering.  I don't
want Layer 1 (the RDF model) to provide semantics for
premise/conclusion/subject/predicate/object.  Layer 2 (this
propositional logic) defines those five terms (binds those symbols to
their meaning).  I don't understand model theory well enough to say
how it does that formally, but I do know how to do it among people
implementing computer systems.  You write a specification and see that
at least two interoperable implementations are created.  I'm intrigued
by the idea that model theory can help achieve the necessary common
understanding (among the developers implementing the systems which use
those five terms).

(Actually, I expect Layer 2 to be in two modules, with
subject/predicate/object in one and premise/conclusion in the other.)

> 'conclusion'.  (You will also need to relate <a,b,c> to the three 
> triples with 'RP' in the subject, but I presume that this will be done 
> by reification, so I won't dwell on it.) 

Reification of Layer 1 (RDF) statements in a Layer 2 module is
something you said this morning you had "NOOOOO problem with", I
believe.   Is it still okay with you?
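
Just so we're picturing the same thing, here's a rough sketch (Python
tuples standing in for triples; nothing normative about the names) of
how I see the Layer 2 description of the premise lining up with plain
RDF reification:

    # The Layer 2 module's description of the premise <a,b,c> ...
    layer2_premise = [
        ("RP", "subject",   "a"),
        ("RP", "predicate", "b"),
        ("RP", "object",    "c"),
    ]

    # ... has the same shape as an RDF 1.0 reification of that triple,
    # just with the module's own property names.
    rdf_reification = [
        ("RP", "rdf:type",      "rdf:Statement"),
        ("RP", "rdf:subject",   "a"),
        ("RP", "rdf:predicate", "b"),
        ("RP", "rdf:object",    "c"),
    ]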

> In other words, you have now 
> given those symbols a *logical* meaning: they have become part of the 
> logical syntax. This isn't RDF any more: it is something else, 
> implemented in RDF.

Yes.   If Layer 1 is the C language, Layer 2 is various C libraries.
Some people call printf part of C; that's like calling rdf:subject
part of RDF as far as I'm concerned.   It's perfectly okay from a
distance, but sometimes you need to say no -- that's really in a
higher layer.

> > I have a working system that uses a more complex version of
> >this, with datalog rules.
> 
> I'm sure you have. I do not mean to say this can't be done. My point 
> is only that your working system must embody some assumptions that go 
> well beyond the RDF model. So someone who wants to be interoperable 
> with you at the semantic level - someone who wants to exchange 
> content with you, share ontologies with you, whatever - had better 
> know those conventions, because just being RDF-compatible isn't going 
> to cut the mustard. If you use 'premise' and 'conclusion' and I use 
> 'implies', we aren't going to understand one another. The fact that we 
> both use triples isn't going to be of any use at all, since the 
> meanings that matter at the content level aren't encoded in the 
> structure of the triples.

I agree with the facts you state, but not their pessimistic tone.  On
the specific issue you raise, I believe I can write something using
premise/conclusion which tells a system that only recognizes
premise/conclusion how to understand 'implies' (and vice versa).
[ Well, I need something more expressive than this logic, but you know
what I mean. ]  It's not an interlingua, but a straightforward
translation mechanism.
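
To be concrete, here's a rough sketch of the kind of translation I
mean, assuming (just for illustration -- you haven't committed to any
such encoding) that 'implies' holds directly between two
statement-describing nodes like RP and RC:

    def implies_to_premise_conclusion(triples):
        """Rewrite (RP, implies, RC)-style triples into the
        premise/conclusion vocabulary.  The minted rule node names
        ("R1", "R2", ...) are arbitrary."""
        out, n = [], 0
        for (s, p, o) in triples:
            if p == "implies":
                n += 1
                rule = "R%d" % n            # a node for the rule itself
                out.append((rule, "premise", s))
                out.append((rule, "conclusion", o))
            else:
                out.append((s, p, o))
        return out

The reverse direction is just as mechanical, which is why I call it a
translation mechanism rather than an interlingua.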

> >The big issue I see, and maybe this is what you're getting at, is
> >whether an agent is licensed to apply this rule simply by knowing
> >these 8 facts (no matter what we put in the place I put "R"), or if
> >some additional inference license is required.  But I think this is an
> >engineering problem, not terribly fundamental.
> 
> I think it is absolutely central. The point is that if these really 
> are 8 facts, then they don't actually say what you want them to say. 
> The only way they can say that <abc> implies <def> is by being taken 
> *together* as a larger entity that itself is a fact: the fact that 
> (<a b c> implies <d e f>). This is the fact that is actually 'known' 
> here.  Those 8 triples just say, at best, that some structure 
> exists. They say nothing about inferrability or implication.

I don't see why you say "at best".  They say exactly that some
structure exists, which is important from the perspective of the
engine looking for rules it can use.  If it knows the denotation of
the five symbols I used to define the structure, it can make the
inference.
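
Here's roughly the engine I have in mind, boiled way down from my real
datalog system (the names are illustrative; the point is just that the
meanings of the five terms live in the engine, not in Layer 1):

    def apply_rules(store):
        """One forward-chaining pass over a set of triples: whenever the
        store holds a rule structure whose premise triple is present,
        add its conclusion triple."""
        def described(node):
            # Reassemble the triple that a premise/conclusion node describes.
            d = {p: o for (s, p, o) in store if s == node}
            return (d["subject"], d["predicate"], d["object"])

        new = set()
        for (r, p, rp) in list(store):
            if p != "premise":
                continue
            for (r2, p2, rc) in store:
                if r2 == r and p2 == "conclusion" and described(rp) in store:
                    new.add(described(rc))
        store |= new
        return new

    store = {
        ("R", "premise", "RP"),
        ("RP", "subject", "a"), ("RP", "predicate", "b"), ("RP", "object", "c"),
        ("R", "conclusion", "RC"),
        ("RC", "subject", "d"), ("RC", "predicate", "e"), ("RC", "object", "f"),
        ("a", "b", "c"),                    # the premise happens to hold
    }
    apply_rules(store)                      # adds ("d", "e", "f")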

> This distinction is exactly what I meant by implementing one 
> structure in another. If you are here implementing a datastructure 
> which represents an implication, then these triples are not facts 
> about your domain (maybe they are facts about the structure; but you 
> weren't intending to make an assertion *about the structure*, but 
> about some logical relationship between <a b c> and <d e f>, right?). 
> As I said before, you can't have it both ways. If we are going to 
> give this language a semantics *which reflects the intended logical 
> meanings*, and those meanings are anything more complicated than 
> ground atomic binary relationships, then we must somehow have a more 
> complex syntax (encoded somehow) to hang those more complex meanings 
> onto; and the conventions that define that syntax then ARE the formal 
> language which carries meaning and content across the Semantic Web.

Yeah, you're right.

Step 1: Settle on ground atomic binary relationships with some
        protocol for assigning symbols and some syntax (the RDF
        Model).

Step 2: While people who are thrilled to have any kind of structuring
        with clear semantics are using that, settle on some vocabulary
        for a more expressive logic built on top of the RDF model
        (e.g. Horn logic; there's a rough sketch after this list).
        Some tools from Step 1 will still work well;
        others will seem silly.  This is kind of like adding (views,
        triggers, rules, constraints) into SQL; some people really
        like it, and it begins to obsolete some application-level code
        -- others never touch it.  Many developers use it without
        knowing it.  (Oracle developers don't care which system tables
        happen to be views.)

Step 3: Keep trying to come up with better logic vocabularies if we
        can, or something.  Maybe their semantics can even be
        described as axioms in Step 2 logic, so we don't need to
        deploy new engines (except to get better performance,
        perhaps).
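
Here's the rough sketch promised above of what a Step 2 (Horn-ish)
rule might look like -- the vocabulary and the "?x" variable
convention are both made up for illustration, nothing is settled:

    # A rule with a conjunctive premise, still encoded entirely in the
    # Layer 1 triple model: if <a,b,?x> and <?x,e,f> then <a,g,?x>.
    horn_rule = [
        ("R2", "premise", "P1"),
        ("P1", "subject", "a"),  ("P1", "predicate", "b"), ("P1", "object", "?x"),
        ("R2", "premise", "P2"),
        ("P2", "subject", "?x"), ("P2", "predicate", "e"), ("P2", "object", "f"),
        ("R2", "conclusion", "C1"),
        ("C1", "subject", "a"),  ("C1", "predicate", "g"), ("C1", "object", "?x"),
    ]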

I personally wouldn't go public until after step 2, to avoid the first
migration problem, but "release early, release often" does seem to work
awfully well.

> >The idea (I think) is that the languages are themselves just
> >ontologies.
> 
> That doesn't make sense. Really, it doesn't. What you could do, is to 
> use an ontology to describe the SYNTAX of another language, which is 
> in fact what you are doing in your example. But then you need to 
> somehow USE that syntax to say what you need to say, not just REFER 
> to it.

What is the difference between an ontology and a language?  They both
involve parties communicating using some shared notions about the
structure of their domain of discourse and some symbols with shared
denotation, as far as I can tell.  I think of an ontology as mapping
structured symbolic data to knowledge about objects in the domain,
which is about the same as a language if we get general enough.

I see both terms as generalizations of such terms as "database
schema", "file format", "network protocol".  Maybe I should stick with
terms like that until I get more face-to-face time with KR folk.
(However people establish shared meanings, e-mail doesn't seem to work
nearly as well as face-to-face.  But conversational e-mail works a lot
better than just reading.)

> >Since we're trying to develop an infrastructure which supports a
> >process of world-wide ontology development, where new ones can be
> >easily created, promoted, and compared against each other in any given
> >domain (presumably leading to new ones being created and widely
> >deployed) we'll be able to have our logic systems (such as DAML)
> >evolve and converge (to some degree) as well.
> 
> Sigh. You know, y'all at W3C really ought to find out something about 
> actual ontology development practice before saying things like this. 
> Have you got any idea how very NOT-easy it is to "create, promote and 
> compare" ontologies? And how having an underdefined, impoverished, 
> syntax makes this all so much HARDER, rather than easier?

How is an ontology different from any other idea or product?  There
are things which make markets efficient, like low barriers-to-entry
and cheap & accurate information, which just come naturally with the
goals of the Semantic Web (and to some extent the HTML Web).  I wasn't
thinking that RDF+(Higher Layers) would be a KR language that
intrinsically made ontologies easier to create (although I hope it is,
of course); just that the Semantic Web could in general facilitate the
human process.  Sorry for the confusion.

> KIF was an early attempt to be such an interlingua 'standard'. You 
> know what I think of that idea.

I just read your DAML IOW section about IEEE-KIF, and I think I
understand a lot better now.  Is there any current version of that
work available?

      -- sandro

Received on Saturday, 19 May 2001 00:32:04 UTC