RE: Proposed issue: What does using an URI require of me and my software?

>  > -----Original Message-----
>>  From: Bijan Parsia [mailto:bparsia@isr.umd.edu]
>>  Sent: Friday, October 03, 2003 10:09 AM
>>  To: Tim Berners-Lee
>>  Cc: public-sw-meaning@w3.org
>>  Subject: Re: Proposed issue: What does using an URI require
>>  of me and my
>>  software?
>>
>>
>>
>>  Sorry for the delay in replying.
>>
>>  On Friday, September 26, 2003, at 10:42  AM, Tim Berners-Lee wrote:
>>
>>  >
>>  > In-Reply-to Bijan's original  
>>  > <http://lists.w3.org/Archives/Public/public-sw-meaning/2003Sep/
>>  > 0054.html> , clarifications.
>>  >
>>  > - Use of an HTTP URI as a symbol in an RDF statement
>>  >  refers to one thing which the URI owner intended.
>>
>>  So, this is broken out of the gate, right? I mean, why
>>  "intended"? Did you mean, "What the current URI owner currently
>>  intends"? And what if they intended it to refer to more than one
>>  thing? Why is the one thing important, anyway? (I tend to strongly
>>  agree with Pat on this. And to go further, I'd rather that the use
>>  of a URI mean what *I*, the document author, intended it to mean.)
>>
>>  > - The URI owner puts true, consistent, human &/or machine
>>  > readable information in the document that you get should you
>>  > choose to dereference the URI.
>>
>>  And this seems compatible with "And I assert other things about
>>  the 'one thing' that may or may not be consistent with what the
>>  URI owner, fool that they may be, sez about it. And I'm taking my
>>  'may' rights very seriously and not going to choose to dereference
>>  that URI."
>>
>
>I'm curious about exactly what each of you, or anyone else, envisions as the
>problem scenarios here. Is it
>
>1. An owner, A, defines a URI to have a meaning

That isn't a well-enough-defined notion for this 
to be an actual scenario. Let's agree to talk in 
terms of things that really do have a crisp 
operational meaning, like publishing an ontology.

A publishes an ontology AO which uses a URI owned by A.....

>  which some other author, B,
>redefines or misinterprets to have a different meaning in his document,

.... and B publishes another ontology BO which 
uses A's URI ...to do what, exactly?

1. To say something that could not be inferred 
from AO alone? But that is going to happen all 
the time: when I order a book, I say something 
about the book that the bookstore didn't say, 
viz. that I ordered it.

2. Or, did B say something in BO that A feels 
does not reflect A's intentions when A published 
AO? Well, that's an interesting case, but surely 
we cannot expect B to be telepathic; B could 
argue that A should have made A's intentions more 
explicit when writing AO. In any case, this seems 
clearly to require that A and B communicate in 
ways that go outside the SW framework, so I 
suggest that we just don't say much about this 
other than maybe acknowledge that it could 
possibly happen.

3. Or, did BO actually contradict AO (possibly, 
when taken in conjunction with some background 
assumptions mutually agreed by A and B, or by the 
general assumptions of the culture, or whatever, 
to be correct)? Now, I think, we have a case 
where we can get down to some details. If B 
publishes something that contradicts what A 
publishes, and if it uses A's vocabulary, then A 
should feel entitled to claim that if C draws 
some conclusions from BO, then C is 
misunderstanding the intended meaning of A's URIs 
in a way that might be called SW-egregious, and 
we could reasonably require that such uses are 
naughty. And there is some wriggle room here to 
talk about assumed shared background assumptions, 
and so on, which might in turn give some real 
bite to this word "social":  for example, any 
imported ontologies would of course be required 
to be relevant to the contradiction-detection 
issue.
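
To make case 3 concrete, here is a minimal sketch 
in Python using rdflib. The URIs, the 
Planet/DwarfPlanet classes, and the Pluto example 
are hypothetical inventions of mine, not anything 
from this thread, and the check is a hand-rolled 
test for one kind of contradiction (a violated 
owl:disjointWith axiom) rather than what a real 
reasoner would do:

from rdflib import Graph
from rdflib.namespace import OWL, RDF

# AO: A's ontology, declaring A's vocabulary (all URIs hypothetical).
AO = """
@prefix :    <http://a.example/ont#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
:Planet      a owl:Class ; owl:disjointWith :DwarfPlanet .
:DwarfPlanet a owl:Class .
:Pluto       a :Planet .
"""

# BO: B's ontology, reusing A's URIs to say something AO rules out.
BO = """
@prefix aont: <http://a.example/ont#> .
aont:Pluto a aont:DwarfPlanet .
"""

merged = Graph()                        # C's merged view of both sources
merged.parse(data=AO, format="turtle")
merged.parse(data=BO, format="turtle")

# Look for an individual typed into two classes declared disjoint.
for c1, _, c2 in merged.triples((None, OWL.disjointWith, None)):
    clash = set(merged.subjects(RDF.type, c1)) & set(merged.subjects(RDF.type, c2))
    for thing in clash:
        print(f"Contradiction: {thing} is in disjoint classes {c1} and {c2}")

C here has made no mistake about anyone's 
intentions; the contradiction is simply there in 
the merged content, and software can find it.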

>and
>then a consumer, C, reading B's document mistakenly assumes that A's meaning
>is intended.

Draws a conclusion (validly, using extant SW 
semantic specifications) which A does not 
like.... or didn't think of .... or something (?)

>
>2. An owner, A, defines a URI to have some meaning, and then B takes some
>liberty with the use of the URI; maybe something along the lines of the use
>of the word 'infinite' in the phrase 'infinite wisdom'. A mathematician
>might argue that this is an improper use of the word, but most of us don't
>see it as a problem.

I agree, but a liberty on the SW might be a 
rather different category from a liberty in 
English metaphors or poesy.

>
>The reason I digress to these somewhat simple examples is that it seems to
>me that once the tools evolve, these judgement calls will be made not by
>computer scientists, but by businessmen, supervisors, etc. I suppose we
>could build some kind of semantic consistency checker into the tools; MS
>Word RDF/OWL Check?
>
>Naively curious,
>James

Not naive at all, right on the button. Like, what 
problem are we setting out to solve here? What 
might go wrong that our declarations of Policy 
and Correct Architecture and so on are aiming to 
prevent? I for one am completely unclear what the 
issues are supposed to be that so concern us 
here, and I am extremely worried that we will 
make declarations based on mistaken ideas about 
meaning rather than on any actual problems.

Rather than quarrel over the meaning of words 
that have no exact meaning (like "meaning" for a 
start) or that some of us think have exact 
meanings but others think are meaningless (like 
"resource"), why don't we try to get a bit more 
precise about why we feel that something - 
ANYTHING - needs to be said about this issue?  If 
nobody can point to anything that is likely to 
break if we say nothing, then the best thing to 
do is to agree to say nothing.  And if they can, 
then at least we will have some example scenarios 
to help us focus discussion.

Right now, the only machine-detectable symptom 
that something is wrong seems to be that an SW 
reasoning engine might detect a contradiction, 
perhaps using information which comes from 
non-SW-ontology sources, perhaps using many other 
kinds of background assumptions such as widely 
used standard ontologies, whatever: but somehow a 
contradiction is detected by a piece of software. 
That is definitely a sign that something is 
screwed up somewhere, or that two sources of SW 
content disagree with one another. So maybe we 
could restrict the discussion to the question: 
what should an SW agent do when it finds a 
contradiction? What protocols or guidelines can 
we suggest for how to handle that situation? 
Because as long as none of them do find any 
contradictions, I think the SW will just kind of 
work by itself, and what we say about "meanings" 
will have about as much relevance to the actual 
operation of the SW as farting.
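
Purely as a strawman for that protocol question, 
here is a rough sketch in Python with rdflib. The 
function names, the dict-of-sources shape, and 
the "report the disagreement, don't draw 
conclusions from the union" policy are my own 
inventions for illustration, and the check again 
covers only violated owl:disjointWith axioms, not 
everything a reasoner could find:

from rdflib import Graph
from rdflib.namespace import OWL, RDF

def disjointness_conflicts(sources):
    """sources: dict mapping a source label (say, the ontology's URL)
    to the rdflib Graph retrieved from it."""
    merged = Graph()
    for g in sources.values():
        merged += g                    # union of all SW content seen so far
    conflicts = []
    for c1, _, c2 in merged.triples((None, OWL.disjointWith, None)):
        both = set(merged.subjects(RDF.type, c1)) & set(merged.subjects(RDF.type, c2))
        for thing in both:
            # Record which sources asserted the clashing class memberships.
            blame = [label for label, src in sources.items()
                     if (thing, RDF.type, c1) in src or (thing, RDF.type, c2) in src]
            conflicts.append((thing, c1, c2, blame))
    return conflicts

def report(conflicts):
    # Placeholder policy: flag the disagreement and who said what, rather
    # than quietly drawing further conclusions from a contradictory union.
    for thing, c1, c2, blame in conflicts:
        print(f"Sources {blame} disagree about {thing}: "
              f"{c1} and {c2} are declared disjoint")

The point of the blame list is just that "a 
contradiction was found" is only useful to an 
agent if it can also say which sources are in 
conflict; what it then does about them is exactly 
the policy question left open above.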

Pat Hayes



-- 
---------------------------------------------------------------------
IHMC	(850)434 8903 or (650)494 3973   home
40 South Alcaniz St.	(850)202 4416   office
Pensacola			(850)202 4440   fax
FL 32501			(850)291 0667    cell
phayes@ihmc.us       http://www.ihmc.us/users/phayes

Received on Friday, 3 October 2003 13:51:03 UTC