- From: Drew McDermott <drew.mcdermott@yale.edu>
- Date: Wed, 24 Dec 2003 20:48:39 -0500 (EST)
- To: www-rdf-rules@w3.org
[Bijan Parsia]
> Sure, lots of semweb reasoning will be done by random Perl and Python
> scripts (or fairly cheap prolog hacks). But there's some sort of
> difference between acknowledging that and wanting to privilege "certain
> kinds of processing mechanisms" for interoperability purposes. And
> interop is somewhat the name of the game, I'm pretty sure.

I'm not thinking of little scripts and stuff. I'm thinking of big
black-box algorithms, such as heuristic programs for bidding in
combinatorial auctions.

> [snip nice paragraph that seems straight out of Critique of Pure Reason]
>
> So, Drew, is there any evolution in your position in CoPR and that
> paragraph? In your experience?

Not much evolution, if any.

[me]
> > If you really stand by this, then there really is no difference in our
> > positions. The assumption set, in this case, will include an
> > assumption that "the algorithm did not err on this occasion." How
> > would one check that without reopening the original question?

[snip]

> Er...isn't the difference that you think the assumption isn't checkable
> (in fact) whereas Pat thinks that it is?

Pat's paragraph is subject to multiple interpretations. If he means that
an algorithm might cut all sorts of corners, but must in the end produce
a proof of its conclusions to accompany those conclusions, then the
assumptions would be checkable. I've read it a couple of times, and I
can't tell whether he meant that or not, especially given this sentence:
"Nothing in the semantic specification of the language requires that all
reasoners only perform valid inferences." Maybe Pat himself will tell us.

                                             -- Drew

--
Drew McDermott
Yale Computer Science Department
Received on Wednesday, 24 December 2003 20:48:41 UTC