RE: What is truth anyways? was: [...]

> From: R.V.Guha [mailto:guha@guha.com] 
>  You have got to be kidding. Are you implying that with the SW we are 
> going to be able to replace all special purpose code, including those 
> running the banking networks, air-traffic control, nuclear missile 
> monitoring, etc. with general purpose inference engines?

I hope not - they work, they're stable, and they do things significantly
beyond the bounds of the layers of the SW that are presently under
consideration [unless you count some of Tim's and Sandro's comments about
creating a universal computing language as part of the SW, which I don't].

But that's not the intended thrust of my argument; I apologise if I've
communicated poorly and it has been taken as such.  If you look at how this
thread developed, what I was responding to was the following sequence:

-- Jonathan Borden --
Perhaps it actually would be
better to let everyone interpret triples as they please -- I mean N3/CWM
appears honestly useful, so why not allow
http://www.w3.org/2000/10/swap/log#Truth to be a _Truth_ in the same sense
that a truth defined by the RDF model theory is a truth (assertion)? What
is the harm in not being so draconian in how we define truth?

Isn't that how the internet works ... let a thousand flowers bloom ... and
so why not allow a thousand truths?
-- end --

-- Pat Hayes --
The issue isn't the notion of truth. The issue is whether this 
notion, whatever it is, is part of the RDF spec or not. If it is, 
then RDF engines should be required to respect it. If it isn't, then 
I don't give a damn what it is, because it's irrelevant to what an RDF 
engine does.
[...]
OK, provided you agree that when the ATM talks to the bank and credit 
union computers, and those computers talk to the IRS computers, and 
they all use their own notions of truth, that you are happy with what 
happens to your bank account. And then of course there are the FBI 
computers and the NIMA computers....
-- end --

-- Jim Hendler --
Interesting Pat, so you're saying that when I stick my little plastic 
card into the Automated teller in Italy, and it hands me Euros 
charging an appropriate exchange rate against my machine in the US, 
that they are using a formal model theory to make it work -- can you 
show it to me??    err, perhaps sometimes you underestimate what can 
be done with "social agreements" instead of pure logic...
-- end --

-- Peter Crowther --
In this case, those 'social agreements' are specs of various banking
interchange formats plus places to download exchange rates and maps of card
number prefixes to issuers.  These are all written by humans, interpreted by
humans, and turned into (often buggy) special-purpose code and text files by
humans.
-- end --
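
(To make concrete the kind of special-purpose code I mean, here is a rough
sketch.  The card prefixes, issuer names and exchange rate are invented for
illustration; real systems use interchange standards such as ISO 8583 and
downloaded rate feeds, and are considerably messier.)

    # Hand-maintained "map of card number prefixes to issuers".
    # Prefixes and issuers here are made up for illustration.
    ISSUER_BY_PREFIX = {
        "4": "ExampleBank Visa",
        "51": "ExampleBank MasterCard",
    }

    # In a real system this would come from a downloaded rate file.
    EUR_PER_USD = 1.06

    def issuer_for(card_number):
        # Longest-prefix match over the hand-maintained table.
        for length in range(len(card_number), 0, -1):
            issuer = ISSUER_BY_PREFIX.get(card_number[:length])
            if issuer is not None:
                return issuer
        raise ValueError("unknown issuer prefix")

    def euros_dispensed(usd_amount):
        return round(usd_amount * EUR_PER_USD, 2)

    print(issuer_for("4111111111111111"), euros_dispensed(100.0))

Every line of that has to be written, checked and kept up to date by a
person.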

I'd like to reduce the amount of special-purpose code that needs to be
written by humans.  I can't reduce it to zero, much as I would like to.  It
is easier for me to reduce it if I have tools at my disposal that
communicate, use and interpret whatever data interchange format they share
in a consistent way.  In the case of data that is communicated over
the semantic web, I want to know that the tools I have at my disposal talk
the same semantic web standards as the tools you have at your disposal to
create and consume that particular interchange format.  If there are
proprietary extensions, I want to know what they are so that I can avoid
them or treat them with caution.

'A thousand truths' falls into my category of 'proprietary extensions'; even
if I know about the thousand, I may not know about the thousand-and-first
that's buried in the content I just got.

'Social agreements' will, indeed, rule the semantic web.  They'll have to,
as there can't be any central authority for defining what terms correspond
to what URIs; and until the agreements start to bite and the semantic web
starts to standardise, through use, on particular identifiers to represent
particular terms, each one will also be a proprietary extension of the data
on the semantic web.  This is good, and the only way the web can grow.  But
I'd rather seed those proprietary extensions and social agreements by
sharing the highest possible level of common understanding: a logical
language rather than raw triples.
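
To show the sort of thing I mean by a level higher than raw triples, here
is a toy example (the names are invented; the rule is just the familiar
RDFS subclass idea).  Raw triples only let me state facts; a shared logical
layer also lets me state rules whose consequences every conformant consumer
will draw:

    # One fact and one rule.  Names invented for illustration.
    facts = {("ex:milan_atm_1", "rdf:type", "ex:ATM")}
    rule  = ("ex:ATM", "rdfs:subClassOf", "ex:CashDispenser")

    # The single inference step the subclass rule licenses.
    derived = {(s, "rdf:type", rule[2])
               for (s, p, o) in facts
               if p == "rdf:type" and o == rule[0]}

    print(derived)   # {('ex:milan_atm_1', 'rdf:type', 'ex:CashDispenser')}

If we only share the raw triples, whether anyone draws that conclusion is
anybody's guess.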

> Surely you don't mean that we can actually formally specify the 
> meaning of terms like "ATM" to ensure that the term means what you 
> intend it to, with the model theoretic tools we have today.

Correct.  I can't.  But I'd like to be able to use some appropriate
identifier (leaving aside, for the moment, the question of whether URIs are
appropriate identifiers) to represent the term "ATM"; and I'd like to be
able to describe as much as I can about it in a form that can be interpreted
in a consistent way by the largest possible set of reasoners out there.
There are two problems with this:

1) If we have a semantic web where everyone is free to interpret any triple
in any way they wish, my ability to describe my terms in such a way will be
limited.

We can leave the interpretation completely unconstrained, as the original
RDF spec did until RDFS came along.  Great.  So I can exchange RDF triples
with any other system, but I have no idea whatsoever whether the developer
of the system providing those triples intended them to be interpreted in the
way I have chosen for my system to interpret them; we have the whole XML
problem again, where pieces of RDF become standardised by human-readable
specifications such as Dublin Core and foaf.  At some level this happens
anyway, as we presently have no way of grounding the Semantic Web, but I'd
rather it happened at an appropriate level; I happen to think that level is
somewhat higher than raw triples.
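
As a toy illustration of the problem (the URIs and the two consumers below
are invented), nothing in a triple itself tells a consumer which reading
its producer intended:

    # One triple, two consumers.  The triple alone does not say which
    # reading the producer had in mind.
    triple = ("http://example.org/doc",
              "http://example.org/terms#creator",
              "Peter")

    def consumer_a(s, p, o):
        # Reads #creator as "the person who wrote the document".
        return {"author": o} if p.endswith("#creator") else {}

    def consumer_b(s, p, o):
        # Reads #creator as "the software that generated the file".
        return {"generator": o} if p.endswith("#creator") else {}

    print(consumer_a(*triple))   # {'author': 'Peter'}
    print(consumer_b(*triple))   # {'generator': 'Peter'}

Both consumers are consistent with the triple; only a human-readable
specification such as Dublin Core's tells their authors which reading was
meant.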

2) If we have a semantic web where the specifications are vague or
underspecified, implementations may interpret triples in unpredictable ways
and my ability to describe my terms in such a way will also be limited.

So how do we constrain implementors' interpretation of the spec?  We can
attempt to describe constraints in formal or informal English; I think the
variety of behaviours of the current RDF toolkits demonstrates the problems
here.  Or we can describe constraints in a more formal language, for example
by providing an appropriate model-theoretic semantics; I regard this as a
way of reducing the chances of miscommunicating details of the specification
to its implementors, and as a Good Thing.
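
As a sketch of what such a semantics pins down (the graph and
interpretation below are invented, and a real model theory such as RDF's
also has to deal with literals, blank nodes and so on), it states,
precisely enough to be machine-checkable, when an interpretation satisfies
a set of triples:

    # Toy version of the central constraint: an interpretation satisfies
    # a graph iff, for every triple (s, p, o), the pair (I(s), I(o)) is
    # in the extension that I assigns to I(p).
    graph = [("ex:a", "ex:knows", "ex:b")]

    interpretation = {
        "denotes":   {"ex:a": "Alice", "ex:b": "Bob", "ex:knows": "knows"},
        "extension": {"knows": {("Alice", "Bob")}},
    }

    def satisfies(interp, triples):
        denotes, ext = interp["denotes"], interp["extension"]
        return all((denotes[s], denotes[o]) in ext.get(denotes[p], set())
                   for s, p, o in triples)

    print(satisfies(interpretation, graph))   # True

Entailment is then defined on top of that: graph A entails graph B iff
every interpretation satisfying A also satisfies B.  That is the sort of
statement an implementor can actually test an engine against.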

		- Peter

Received on Wednesday, 12 June 2002 11:50:22 UTC