Re: SEM: semantics for current proposal (why R disjoint V?) (fwd)

On April 7, Pat Hayes writes:
> >------- start of forwarded message -------
> >From: Ian Horrocks <horrocks@cs.man.ac.uk>
> >To: Dan Connolly <connolly@w3.org>
> >Date: Fri, 5 Apr 2002 18:33:48 +0100
> >Subject: Re: SEM: semantics for current proposal (why R disjoint V?)
> >Reply-To: Ian Horrocks <horrocks@cs.man.ac.uk>
> >
> >On April 5, Dan Connolly writes:
> >>  On Fri, 2002-04-05 at 07:08, Ian Horrocks wrote:
> >>  > On March 21, Dan Connolly writes:
> >>  > > On Thu, 2002-03-21 at 14:28, Ian Horrocks wrote:
> >>  > > > On March 21, Libby Miller writes:
> >>  > > > > >
> >>  > > > > > As noted in the design discussions for DAML+OIL, I don't
> >>  > > > > > see sufficient justification for making V disjoint
> >>  > > > > > from R.
> >>  > > > > >
> >>  > > > > > It seems silly not to be able to talk about the intersection
> >>  > > > > > of two sets of strings, or UniqueProperty's whose
> >>  > > > > > range is dates, or whatever.
> >>  > > >
> >>  > > > This means that any OWL reasoner has to take on responsibility for
> >>  > > > reasoning about types
> >>  > >
> >>  > > I gather when you say "OWL reasoner" you mean a complete
> >>  > > reasoner.
> >>  > >
> >>  > > I'm not very interested in such a thing.
> >>  > >
> >>  > > Regular old horn-clause/datalog reasoners
> >>  > > (with some built-in predicates like
> >>  > > string:lessThan and such) seem
> >>  > > to get me what I need pretty well.
> >>  >
> >>  > Dan,
> >>  >
> >>  > It seems that, on the basis of a few toy examples where using ad-hoc
> >>  > reasoning seems to give the results you want/expect, you conclude that
> >>  > this will be appropriate/adequate for all applications.
> >>
> >>  No, just for an interesting class of applications.
> >>
> >>  By the way, if you consider
> >>  formalizing the operations of W3C[1]
> >>  to be a toy example, I'm interested to know what
> >>  sort of applications you would take seriously.
> >
> >I didn't intend to be pejorative - I was only referring to the examples
> >I have seen in email. It wasn't completely clear to me from [1] where
> >the ontology comes in or what kind of reasoning is being performed,
> >but I am guessing that you are not using a very large or complex
> >ontology.
> >
> >>
> >>  >  I don't find
> >>  > this argument very convincing.
> >>
> >>  As I say, I didn't make that argument.
> >>
> >>  I'm arguing that we can advance the state of the art
> >>  without a completeness requirement.
> >>
> >>  > Even w.r.t. ontology level reasoning I expect things to rapidly get
> >>  > large and complex enough that humans won't be able to check all
> >>  > inferences - we will just have to trust that the reasoner got it
> >>  > right. Soundness is therefore essential, and completeness highly
> >>  > desirable.
> >>
> >>  Yes, soundness is essential.
> >>
> >>  I don't see why completeness is all that interesting
> >>  in the general case. I expect various reasoners
> >>  to be complete for various classes of problems.
> >
> >This is one of the problems with incompleteness - it is notoriously
> >difficult to characterise "classes of problem" for which such a
> >reasoner is complete. See [1] for a discussion of this issue.
> >
> >>  > For example, when multiple processes are interacting, some
> >>  > action may be taken by one process on the basis of a non-inference by
> >>  > another process,
> >>
> >>  That's non-monotonic reasoning. Part of life in the semantic
> >>  web is: don't do that (without explicit license).
> >
> >I don't see that this is non-monotonic. I'm not even talking about
> >changing any facts. I'm talking about the problems that can arise when
> >a "negation" is inserted by a process that uses the result of a query
> >to another process.
> 
> If that query was not made with an explicit reference to a closed 
> world assumption as part of the query, then the assumption of the 
> truth of the negation from the failure of the query is a nonmonotonic 
> inference step.
> 
> >E.g., a missile defense system might be programmed
> >to fire at any incoming aircraft not identified as a friendly.
> 
> Good example of a nonmonotonic inference which illustrates its 
> dangers. There is a recent true-life example, by the way, involving 
> an unexpected default assumption (see end of this message).

I agree that the system as described is inherently nonmon, but the
problem in this particular case is nothing to do with non-monotonicity
as I am not considering what may happen when new information is
added. In this case I am assuming that the information we have is
already sufficient to determine that the missile should NOT be
fired. If all the reasoners are sound AND complete, then it won't be
fired. If they are incomplete, then it may be fired.

Obviously this kind of inference is "dangerous", but it is likely that
such systems are being (or will be) used. It is probably difficult or
impossible to definitively identify an aircraft as an enemy; the only
thing you can do is check that it isn't a known friendly. It would be
nice to at least guarantee that such checking is as thorough as
possible, i.e., that if the information available at any given moment
supports the inference that an aircraft is a friendly, then it will
definitely be identified as such. 

One could imagine many other applications where this kind of reasoning
would be important. E.g., when screening for some medical condition,
a person is likely to be deemed not to have the condition precisely
when you fail to find evidence that they do have it. It would be
nice to be able to guarantee that you will never send someone home
without treatment/further tests when the evidence from the screening
is sufficient to infer that they have the condition.
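The danger in both examples can be made concrete with a toy sketch (all names and rules here are invented for illustration, not taken from any real system): the action is licensed by the *failure* to derive "friendly", so an incomplete reasoner that misses a derivation produces exactly the unsound behaviour described above, while a complete one does not.

```python
# Hypothetical knowledge base: ground facts as (predicate, subject, value).
facts = {("squadron", "a2", "blue")}

# Intended rules: an aircraft is friendly if it squawks code-F
# OR if it belongs to squadron blue.
def complete_friendly(aircraft):
    """Checks both rules: derives everything the facts support."""
    return (("transponder", aircraft, "code-F") in facts
            or ("squadron", aircraft, "blue") in facts)

def incomplete_friendly(aircraft):
    """Incomplete reasoner: only checks the transponder rule,
    so it misses friendliness derivable via the squadron rule."""
    return ("transponder", aircraft, "code-F") in facts

def should_fire(friendly_check, aircraft):
    # The closed-world step: fire iff "friendly" was NOT derived.
    # Any incompleteness in friendly_check becomes unsoundness here.
    return not friendly_check(aircraft)
```

With these facts, `should_fire(complete_friendly, "a2")` is `False` but `should_fire(incomplete_friendly, "a2")` is `True`: the information available already supports the inference that a2 is a friendly, yet the incomplete reasoner fires anyway.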

> >Firing
> >at a friendly aircraft due incompleteness in the identification
> >process might reasonably be considered as unsound behaviour on the
> >part of the overall system.
> >
> >>  > so incompleteness can easily lead to "unsoundness".
> >>
> >>  Unsoundness can result from all sorts of bugs; this
> >>  is just one of them.
> >>
> >>  Actually, unsound/heuristic reasoning can be pretty interesting,
> >>  as long as it's not confused with formal reasoning; e.g.
> >>
> >>	I conclude based on your recent buying patterns
> >>	that the following products are likely to be
> >>	interesting to you: X, Y, Z.
> >>
> >>	I didn't arrive at this conclusion based on
> >>	sound reasoning, so take the recommendations
> >>	with a grain of salt.
> >>
> >>  or
> >>
> >>	I conclude, based on a search of my extensive
> >>	holdings, that there are no court cases
> >>	in that jurisdiction involving chimpanzees and volkswagens.
> >>
> >>	Digitally signed,
> >>	The BigLaw online service.
> >
> >This might be true, but I fail to see the relevance. The question I am
> >addressing is, should we design the language in such a way that it is
> >possible to build sound and complete reasoners for use in
> >applications where this is an important issue?
> 
> No, that's not the question, because that is trivial: there have been 
> sound and complete reasoners for full first-order logic available for 
> the last 30-odd years.

But not terminating. I am suggesting that we design the language so
that it is possible to build sound, complete AND TERMINATING reasoners
for applications where this is an important issue.
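The point can be illustrated with a toy example: restrict the language enough (here, propositional Horn rules over finitely many atoms, in the spirit of the datalog reasoners mentioned earlier in the thread) and naive forward chaining is sound, complete, AND guaranteed to terminate, because the set of derived atoms only ever grows and is bounded by the finite set of atoms mentioned. This is a sketch of the general idea, not any particular system.

```python
def closure(facts, rules):
    """Compute all derivable atoms by forward chaining to a fixpoint.

    facts: set of atoms (strings) known to hold.
    rules: list of (body, head) pairs, where body is a tuple of atoms
           and head is a single atom ("if all of body, then head").
    """
    derived = set(facts)
    changed = True
    while changed:              # terminates: derived only grows, and is
        changed = False         # bounded by the finite set of all atoms
        for body, head in rules:
            if head not in derived and all(a in derived for a in body):
                derived.add(head)
                changed = True
    return derived
```

For example, `closure({"bird"}, [(("bird",), "flies")])` returns `{"bird", "flies"}`. For full first-order logic no such terminating fixpoint exists, which is exactly the trade-off at issue in designing the language.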

> Some of them are even quite efficient, these 
> days, on almost all inputs.

Sorry Pat, but this is nonsense. They can handle some kinds of problem
quite well, but they are brittle and prone to fail (i.e., not
terminate) unpredictably. They are also known to perform very poorly
with some of the constructs we have in FOWL, e.g., cardinality
restrictions. Moreover, the efficiency of state-of-the-art systems
often depends on the setting of very large numbers of parameters
controlling the choice of heuristics etc. The setting of these
parameters is well known to be something of a black art only
understood by system designers (and not even very well understood by
them - in a recent experiment the Vampire system was able to solve
some previously unsolved problems using randomly selected settings
that would never have been considered by the system designers).

Regards, Ian

> 
> Pat
> 
> PS. the example mentioned is described here:
> Washington Post  March 24, 2002 Pg. 21
> 
> >'Friendly Fire' Deaths Traced To Dead Battery
> Taliban Targeted, but U.S. Forces Killed
> 
> >By Vernon Loeb, Washington Post Staff Writer
> 
> 
> >The deadliest "friendly fire" incident of the war in Afghanistan was
> >triggered in December by the simple act of a U.S. Special Forces air
> >controller changing the battery on a Global Positioning System device he was
> >using to target a Taliban outpost north of Kandahar, a senior defense
> >official said yesterday.
> >Three Special Forces soldiers were killed and 20 were injured when a
> >2,000-pound, satellite-guided bomb landed, not on the Taliban outpost, but
> >on a battalion command post occupied by American forces and a group of
> >Afghan allies, including Hamid Karzai, now the interim prime minister.
> >The U.S. Central Command, which runs the Afghan war, has never explained how
> >the coordinates got mixed up or who was responsible for relaying the U.S.
> >position to a B-52 bomber, which fired a Joint Direct Attack Munition (JDAM)
> >at the Americans.
> >But the senior defense official explained yesterday that the Air Force
> >combat controller was using a Precision Lightweight GPS Receiver, known to
> >soldiers as a "plugger," to calculate the Taliban's coordinates for a B-52
> >attack. The controller did not realize that after he changed the device's
> >battery, the machine was programmed to automatically come back on displaying
> >coordinates for its own location, the official said. Minutes before 
> >the fatal B-52 strike, which also killed five Afghan
> >opposition soldiers and injured 18 others, the controller had used the GPS
> >receiver to calculate the latitude and longitude of the Taliban position in
> >minutes and seconds for an airstrike by a Navy F/A-18, the official said.
> >Then, with the B-52 approaching the target, the air controller did a second
> >calculation in "degree decimals" required by the bomber crew. The controller
> >had performed the calculation and recorded the position, the official said,
> >when the receiver battery died.
> >Without realizing the machine was programmed to come back on showing the
> >coordinates of its own location, the controller mistakenly called in the
> >American position to the B-52. The JDAM landed with devastating precision.
> >The official said he did not know how the Air Force would treat the incident
> >and whether disciplinary action would be taken. But the official, a combat
> >veteran, said he considered the incident "an understandable mistake under
> >the stress of operations."
> >"I don't think they've made any judgments yet, but the way I would react to
> >something like that -- it is not a flagrant error, a violation of a
> >procedure," the official said. "Stuff like that, truth be known, happens to
> >all of us every day -- it's just that the stakes in battle are so enormously
> >high."
> >Nonetheless, the official said the incident shows that the Air Force and
> >Army have a serious training problem that needs to be corrected. "We need to
> >know how our equipment works; when the battery is changed, it defaults to
> >his own location," the official said. "We've got to make sure our people
> >understand this."
> 
> That last is a wonderful example of bad system thinking, by the way.
> 
> 
> -- 
> ---------------------------------------------------------------------
> IHMC					(850)434 8903   home
> 40 South Alcaniz St.			(850)202 4416   office
> Pensacola,  FL 32501			(850)202 4440   fax
> phayes@ai.uwf.edu 
> http://www.coginst.uwf.edu/~phayes
> 

Received on Tuesday, 23 April 2002 08:11:18 UTC