Re: FOAF OWL DL

On 16 Jun 2008, at 17:10, Alan Ruttenberg wrote:

>
>
> On Jun 16, 2008, at 4:21 AM, Story Henry wrote:
>
>>
>> On 15 Jun 2008, at 22:31, Alan Ruttenberg wrote:
>>> I didn't suggest removing it. I suggested modularizing the
>>> ontology so that the portion that is OWL DL can easily be used
>>> without having to hack anything. I suggested doing that in a way
>>> that the OWL Full version remained the same, without making it more
>>> difficult to keep the two versions in sync, by using owl:imports to
>>> have the Full version include the portion that is OWL DL.
>>>
>>> In other situations, I have made suggestions, on the FOAF side of  
>>> things, for how to improve it, and on the OWL side of things on  
>>> how to make it possible for OWL2 to work with FOAF as is (or with  
>>> minor changes)
>>>
>>> What makes you think I want to harm FOAF?
>>
>> I don't see Tim suggesting you wanted to harm foaf anywhere in  
>> what he wrote.
>
> Here is what I read as such:
>
>>> there is no reason to remove this valuable (essential)  
>>> information from the ontology
>
> I took this as saying that I suggested removing essential
> information from the ontology (a harm). If I've misinterpreted that,
> please excuse me.

FWIW, that's how I read it too.

Of course, I see it neither as valuable (essential) nor as sensible
information, but that's a totally different issue :)

> To be clear, I made no such suggestion. What I did suggest was that
> there is more than one community of users on the Semantic Web, and
> that there was a solution that could work for all. The suggestion
> you and Tim make does not work for one of those communities.
[snip]

And it doesn't seem like generally good advice.
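
FWIW, the modularization Alan suggested upthread is also about as
cheap as these things get. Roughly (a sketch only, with made-up URIs
and the usual prefixes assumed, not a proposal for actual namespaces):

    # foaf-dl: just the axioms that stay within OWL DL
    <http://example.org/foaf-dl> a owl:Ontology .
    #   ... DL-safe declarations and axioms here ...

    # foaf-full: the vocabulary as it is today, unchanged except that
    # it pulls the DL module back in
    <http://example.org/foaf-full> a owl:Ontology ;
        owl:imports <http://example.org/foaf-dl> .
    #   ... the remaining OWL Full axioms here ...

DL-only tools load the first document; everyone else keeps loading the
second and sees exactly what they see now.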

>> He is just suggesting that your reasoner do the selection between  
>> owl-full and owl-dl itself. That is how I do things with the  
>> Semantic Address Book at
>> https://sommer.dev.java.net/AddressBook.html
>>
>> I don't even use owl-light, but some subset of that that I feel  
>> comfortable with.

I wonder how you evaluate whether that comfort is warranted. I find
that even RDF leaves most users quite *un*comfortable when it comes to
predicting the results of a query (much less determining what the
query *should* be).

>> The rules I use currently are here:
>>
>> http://tinyurl.com/5gl8dl
>
>> (I explain in more detail here
>> http://blogs.sun.com/bblfish/entry/opening_sesame_with_networked_graphs
>> how I use this)

There's quite a bit known about the relationship between OWL and
various flavors of Datalog. I see none of that informing your rules,
so I'd be pretty concerned about using them. (After all, it is often
hard to predict both the interactions and the missing interactions.)
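
To give a made-up example of the kind of gap I mean (mine, not taken
from your rules; usual prefixes assumed):

    _:a foaf:mbox_sha1sum "1234abcd..." ;
        foaf:knows <http://example.org/carol> .

    _:b foaf:mbox_sha1sum "1234abcd..." ;
        foaf:name "Bob" .

With foaf:mbox_sha1sum declared inverse functional, OWL says that _:a
and _:b denote the same individual, so "_:b foaf:knows
<http://example.org/carol>" follows. A rule set that derives
owl:sameAs from the IFP but never propagates sameAs through other
properties silently misses that conclusion, and nothing in the rules
themselves warns you.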

""Anyway the above illustrates just how simple it is to write some  
very clear inferencing rules. Those are just the simplest that I have  
bothered to write at present.""

This is very misleading, IMHO. First, it conflates rules of inference
with conditionals (of a very particular sort) *in* a formalism.
Second, it's not at all clear to me that these rules are clear. For
example, I thought I knew what the filter in the second rule was for
(actually, now I don't again... is it there merely for optimization?
Or is it necessary to stop a recursion? I don't see how it would
trigger an endless loop).
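
(For concreteness, the sort of rule I have in mind, reconstructed
rather than quoted, is the usual IFP pattern written as a SPARQL
CONSTRUCT, prefixes omitted:

    CONSTRUCT { ?a owl:sameAs ?b }
    WHERE {
      ?a foaf:mbox_sha1sum ?sum .
      ?b foaf:mbox_sha1sum ?sum .
      FILTER ( ?a != ?b )
    }

If that's roughly right, the FILTER only suppresses the trivial
?x owl:sameAs ?x conclusions; I still don't see it guarding against
any loop.)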


>> I will be adding more rules and so use more of owl as I go along.

Alan has already pointed out that this is, at best, very much a
minority perspective.

>> Do as much reasoning as you have time for and as your inference  
>> engine is capable of. That is what we humans do all the time.
[snip]

Of course, this is already done. However, the big point of standards
is to try to ensure a base level of common functionality.

BTW, your sort of approach has been tried before and found wanting
(e.g., by many people working with LOOM, where MacGregor, I'm given
to understand, tried really hard to be principled about how he bolted
on additional functionality). That's not to say it won't work for you
or your application, but it does suggest that we ought to be a bit
less cavalier about recommending it without a deeper understanding of
the potential pitfalls.

In general, we qua community need to become much more sophisticated  
in our analysis of problems and proposed solutions and in our *forms*  
of analysis and proposed solutions. I remember when "put everything  
in RDF" was the solution to everything, and I find that tone cropping  
up time and again. "Just throw some more rules on to hack your
reasoner" is another (and one of long standing).

Approximating theories from a more expressive formalism to a less
expressive one (which is exactly what's being proposed) is a pretty
active topic of research (and of practical effort; see DOLCE). It's
not particularly easy or obvious. It is pretty clearly the case,
though, that it's much better handled at the ontology level than at
the reasoner level.

For example, given that most FOAF tools do bnode smushing (often
violating the semantics) with custom code, what exactly is the value
of having that property declared inverse functional? I'm not saying
that it's valueless, just questioning its overriding value. It might
be that documenting a smushing algorithm would be a better idea.

(That all being said, I do hope that EasyKeys will work for bnode  
smushing, but you really have no idea how difficult it is to make  
such keys work out. It definitely requires some tricky stuff about  
bnodes but also care with datatypes:
	http://www.w3.org/2007/OWL/wiki/Easy_Keys
)
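
(For the curious, the key declaration for that case would presumably
end up looking something like

    foaf:Person owl:hasKey ( foaf:mbox_sha1sum ) .

though the concrete syntax is still being settled, and the hard part
is pinning down what it means when the keyed individuals are bnodes
or the key values are datatyped literals.)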

Cheers,
Bijan.

Received on Monday, 16 June 2008 16:47:41 UTC