
Re: xmlsch-02

From: Graham Klyne <gk@ninebynine.org>
Date: Sun, 31 Aug 2003 09:23:43 +0100
Message-Id: <>
To: Brian McBride <bwm@hplb.hpl.hp.com>
Cc: Patrick.Stickler@nokia.com, w3c-rdfcore-wg@w3.org, "Peter F. Patel-Schneider" <pfps@research.bell-labs.com>

Hmmm...  Brian, I take your point.

The forms in question are clearly *syntactically* well-formed RDF.  (I 
think that trying to limit the valid syntax for lexical forms based on the 
datatype is a nightmare path to follow.)

If I were to follow my own approach (of not specifying behaviour for input 
that is in some sense ill-formed) then I'd argue that the denotation of 
"foo"^^some:datatype is undefined if "foo" is not in the lexical space of 
some:datatype.  That is, there are no dependable entailments or 
non-entailments involving such forms.  This stops short of saying that 
RDF applications based on XML parsers such as Xerces are wrong; when 
such parsers are used, if the lexical forms are datatype-consistent then 
the results are interoperable.
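To make "not in the lexical space" concrete, here is a minimal sketch in Python.  The helper name and regex are mine, assuming xsd:integer's lexical space of an optional sign followed by decimal digits (XML Schema Part 2); this is illustrative, not any RDF library's API:

```python
import re

# Illustrative check of xsd:integer's lexical space: an optional sign
# followed by one or more decimal digits (per XML Schema Part 2).
# Name and regex are assumptions for this sketch, not from any spec API.
XSD_INTEGER_LEXICAL = re.compile(r"[+-]?[0-9]+")

def in_lexical_space(lexical_form):
    """True iff lexical_form is in the lexical space of xsd:integer."""
    return XSD_INTEGER_LEXICAL.fullmatch(lexical_form) is not None

# "10"^^xsd:integer has a dependable denotation;
# "ten"^^xsd:integer and " 10 "^^xsd:integer do not.
```

On the view above, a form for which `in_lexical_space` returns False is still syntactically well-formed RDF; its denotation is simply not something any entailment can depend on.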

Thus, any RDF that contains such mismatched lexical forms cannot be used 
dependably in any inference process.  I think this is less problematic than 
making normative statements of the form MAY or SHOULD [NOT] (which I see as 
the "fudge" in the current proposal), since such statements effectively try 
to define how such forms are treated, and that leads us into the 
interoperability problems that have been noted.  (By comparison, the 
approach I suggest is clear:  typed literals whose lexical form is not in 
the lexical space of the datatype are not dependably interoperable.)

The IETF has a famous mantra, attributed to Jon Postel:  be liberal in what 
you accept and conservative in what you send.  As an overriding principle, 
it's not without problems, but as a guide to matching the requirements of 
strict interoperability with the realities of implementation variations it 
has some value.  In this case, defining semantics in the absence of a 
processing model, it's not directly applicable; but I think that 
explicitly recognizing corner cases that may be treated differently by 
different applications, and, above all, being sufficiently clear about the 
core language that MUST NOT be susceptible to such variability, is to 
proceed in a similar spirit.


At 09:50 29/08/03 +0100, Brian McBride wrote:
>Hi Graham,
>It's good that you are able to keep at least partially in touch with things.
>Graham Klyne wrote:
>>While (reasonably, IMO) staying silent about what applications should do 
>>if faced with text that is not well-formed RDF.
>What I write here is not advocacy, but I thought it might be helpful to 
>pass on something I learned in a recent conversation with pfps.
>A point made by pfps is that the object of the triple represented in:
>   _:a eg:prop "ten"^^xsd:integer .
>is well formed RDF according to the specs.
>   "ten"^^xsd:integer
>does not denote a literal, but it does denote something that is not a 
>literal (if I've understood the semantics properly).  Similarly,
>   " 10 "^^xsd:integer
>does not denote a literal according to the current specs (given an 
>xsd:integer aware interpretation).
>Peter's point to me was that if it were an error, he would have no problem 
>with suggesting automatic correction, but he does have a problem with 
>correcting something that is not an error.
>By analogy, imagine you are a comms driver and you get data with a Hamming 
>code for error correction.  If you get data that fails parity, it's ok 
>to correct it, but if you get data that passes the integrity check, it's 
>not ok to say you don't like it and correct it anyway.

Graham Klyne
Received on Sunday, 31 August 2003 10:35:37 UTC
