- From: Jeff Dalton <jeff@inf.ed.ac.uk>
- Date: Thu, 22 Apr 2004 19:14:25 +0100 (BST)
- To: public-sws-ig@w3.org
Quoting Drew McDermott <drew.mcdermott@yale.edu>:

> Well, sure. I didn't mean to imply that steps could be skipped under
> some circumstances.

Ok, good. It's just that in Craig Schlenoff's message asking about
processes that must occur, one of his concerns was that an OWL-S
implementation might "skip to the next step" if a condition isn't met:

    I have a process in my OWL-S representation that must occur.
    However, it has a condition associated with it. Somehow I want to
    tell the system to wait until the condition is true so that the
    process will occur. The process is in a sequence list, and I am
    afraid that an implementation of the OWL-S specification will skip
    to the next step in the sequence if the condition is not met at
    that point in the execution, which is not what I want to occur.

So I wanted it pinned down.

> > > There are other occasions when it might make sense to insert plan
> > > steps to verify that a condition really holds. But an agent
> > > obviously can't do this for every condition, or we'd have an
> > > infinite regress of steps inserted to verify the preconditions
> > > of previously inserted steps.
> >
> > Not necessarily. Perhaps a checking regress would terminate in
> > steps that lack preconditions. :)
>
> Even if that were possible, we'd still have situations like this:
> Suppose step A has two preconditions P1 and P2. We insert C1 and C2
> to check them, yielding, perhaps
>
>     sequence(C1, C2, A)
>
> Isn't it possible that P1 became false during the execution of C2?

Isn't that a problem anyway? Even with one condition, it might become
false again after the check. The example I mentioned earlier was a
check that a light's on (yes, it is), then the bulb blows.

Which is why programming languages have locks and synchronization, and
database managers have transactions.

> My only point is that the theory of precondition checking is
> nontrivial. Intuitions to the contrary are usually due to
> aftereffects of AI courses in which it's assumed that P is true if
> and only if it's recorded in the world model.

Not robotics courses, I hope. :)

-- Jeff
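
P.S. To make the wait-vs-skip distinction concrete, here's a rough
Python sketch of an executor with the semantics Craig wants. All of
the names (Step, run_sequence, the polling loop) are made up for
illustration; nothing here is prescribed by OWL-S:

    from dataclasses import dataclass
    from typing import Callable, Optional
    import time

    @dataclass
    class Step:
        name: str
        precondition: Callable[[], bool]
        execute: Callable[[], None]

    def run_sequence(steps, poll_interval=1.0,
                     timeout: Optional[float] = None):
        for step in steps:
            deadline = (None if timeout is None
                        else time.monotonic() + timeout)
            # Wait-until semantics: a false precondition means
            # "not yet", not "skip to the next step in the sequence".
            while not step.precondition():
                if deadline is not None and time.monotonic() > deadline:
                    raise RuntimeError(
                        "precondition for %s never held" % step.name)
                time.sleep(poll_interval)
            step.execute()

A skip-on-false implementation would replace the while loop with a
plain "if not step.precondition(): continue", which is exactly the
behaviour Craig is afraid of.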
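
And the check-then-act race above is the same one that locks exist
for. A minimal sketch (again Python, with invented names; note that a
lock only helps against other code that takes the same lock before
changing the state, so nothing here saves you from a bulb that
physically blows):

    import threading

    lock = threading.Lock()
    world = {"light_is_on": True}  # shared state others may change

    def use_light_racy():
        # Unsynchronized check-then-act: "light_is_on" can become
        # false between the check and the action.
        if world["light_is_on"]:
            act_needing_light()

    def use_light_locked():
        # If every writer of world["light_is_on"] also holds the
        # lock, the check and the action become atomic with respect
        # to them; that's the locks-and-transactions point.
        with lock:
            if world["light_is_on"]:
                act_needing_light()

    def act_needing_light():
        pass  # stand-in for the real action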
Received on Thursday, 22 April 2004 14:14:56 UTC