Re: Dublin Core, the Primer and the Model Theory

At 12:44 AM 5/17/02 +0200, Jos De_Roo wrote:

>[...]
> > My view is that adopting a datatyping proposal that accommodates the ways
> > that application designers feel comfortable with will have a big effect on
> > RDF's eventual fate.  I have not personally found the arguments that lead
> > us to require tidy literal interpretations to be compelling.  That this
> > approach leads to characterizations of the Dublin Core approach as
> > "nonsense" is indicative (to me) that it's out of step with thinking of
> > application designers in the large.
> >
> > #g
> > --
> >
> > [1] http://www.coginst.uwf.edu/users/phayes/simpledatatype2.html
>
>
>I feel *very* concerned when reading
>
>1)[[
>Neither of these forms, by themselves, fixes the value of the
>literal. However, applications are of course free to use 'bare'
>literals, and to rely on string-matching to resolve questions of
>identity. Such use amounts to a decision to understand a bare
>literal as denoting its own label (and to understand rdfs:dlex
>as identity). It would be risky to rely on such a convention to
>perform extensive RDFS inferences, however, as this assumption
>can be overridden by other datatyping information, in general,
>so any inferences based on this assumption would need to be
>re-checked and perhaps revised if datatype information were
>added to the RDFS graph. Applications that do not make extensive
>inferences about identity should function in this way without
>meeting serious problems.
>]]
>
>2)[[
>BTW, this assumes untidy literal nodes. With a few deft tweaks
>to the MT we could manage with tidy literals, in fact: but if they
>were ever allowed to be subjects of triples, that would completely
>kill the tweaks and we would have to allow untidy literals again,
>so I wonder if it is worth it.
>]]
>
>and I remain confident with the current MT

I share your concern (2), and would not advocate trying to fit the scheme 
to tidy literal nodes.

Regarding (1), I think this reflects the "looseness" with which information 
designers may start out defining their data.  I don't support the idea of 
adopting a risky convention for performing general inferences (though 
specific applications may use additional "implicit" knowledge of their 
own vocabularies).  The advantage I see in this approach is that it's 
often possible to add further information (as RDF statements) so that 
the desired inferences can be drawn by a generic reasoner.

Finally, I don't lack confidence in the current MT - I believe it is sound 
and "does what it says on the box" - but rather find myself in agreement 
with Jeremy that it doesn't match much current use of RDF as closely as I 
would have hoped.

#g


-------------------
Graham Klyne
<GK@NineByNine.org>

Received on Friday, 17 May 2002 04:58:16 UTC