From: Sandro Hawke <sandro@w3.org>
Date: Sun, 17 Aug 2003 23:44:02 -0400
To: "Peter F. Patel-Schneider" <pfps@research.bell-labs.com>
cc: www-archive@w3.org
> PS: Note that this [1] can be used to defeat just about any scheme for special
> syntactic processing of XML literals in RDF/XML.
Jeremy's "Option 3" [2] gives XML literals a purely syntactic
treatment, xml:lang included, so that by the time they reach
N-Triples they look like perfectly normal (string^^datatype) pairs.
That wouldn't get tripped up here, as far as I can tell.
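For instance (my own illustration of the general shape, not
necessarily Option 3's exact canonical form), the language might get
folded into the lexical form, leaving an ordinary typed literal:

    _:x <http://example.org/p> "<span xml:lang=\"en-US\">chat</span>"^^<http://www.w3.org/1999/02/22-rdf-syntax-ns#XMLLiteral> .

Nothing there needs special treatment; it's just a string paired with
a datatype URI.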
More deeply, do you have a simple explanation of why it doesn't work
to have two kinds of datatypes -- language-sensitive and
language-insensitive ones? I'm not sure of the simplest way to
arrange it, but maybe:
- the lexical space of each datatype is either a set of Unicode
  strings (exclusive) OR a set of pairs of <Unicode string,
  language string>. (That is, the range of L is the union of the
  set of Unicode strings and the set of string/string pairs. The
  domain of L2V(xsd:int) is the set of Unicode strings like "0",
  "1", etc. The domain of L2V(rdf:XMLLiteral) is the set of pairs
  of Unicode strings like <"<a></a>", "en-US">, <"<b></b>",
  "en-US">, etc. See the first sketch after this list.)
or
- the lexical space of each datatype is a set of pairs (as above);
  for many datatypes the second item in the pair does not play a
  role in L2V(d); for all x,y,z: L2V(xsd:int)(x,y)=L2V(xsd:int)(x,z)
  (see the second sketch below).
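To make the first arrangement concrete, here is a minimal sketch in
Haskell (entirely my own modelling; Lex, Plain, and Tagged are names
invented for illustration, and xsd:int is cut down to unsigned
digits for brevity):

    import Data.Char (isDigit)

    -- A lexical form is either a bare Unicode string or a
    -- string/language pair; the range of L is the union of the two.
    data Lex = Plain String
             | Tagged String String   -- literal text, language tag
      deriving (Eq, Show)

    -- L2V(xsd:int) is defined only on plain strings.
    l2vXsdInt :: Lex -> Maybe Integer
    l2vXsdInt (Plain s)
      | not (null s) && all isDigit s = Just (read s)
    l2vXsdInt _ = Nothing   -- pairs fall outside this lexical space

    -- L2V(rdf:XMLLiteral) is defined only on pairs.
    l2vXMLLiteral :: Lex -> Maybe (String, String)
    l2vXMLLiteral (Tagged xml lang) = Just (xml, lang)
    l2vXMLLiteral _                 = Nothing

So l2vXsdInt (Tagged "1" "en-US") is Nothing, while
l2vXsdInt (Plain "1") is Just 1: the two kinds of lexical space
never overlap.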
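And the second arrangement, under the same caveats: every lexical
form is a pair, and a language-insensitive datatype simply ignores
the second component, so L2V(xsd:int)(x,y) = L2V(xsd:int)(x,z) for
all x, y, z:

    import Data.Char (isDigit)

    type Lex2 = (String, String)     -- (literal text, language tag)

    -- xsd:int ignores the language component entirely, so the
    -- result is the same for every language tag.
    l2vXsdInt2 :: Lex2 -> Maybe Integer
    l2vXsdInt2 (s, _lang)
      | not (null s) && all isDigit s = Just (read s)
      | otherwise                     = Nothing

    -- rdf:XMLLiteral uses both components.
    l2vXMLLiteral2 :: Lex2 -> Maybe (String, String)
    l2vXMLLiteral2 (xml, lang) = Just (xml, lang)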
Both arrangements are certainly more complicated, and perhaps
offensively so, but they hardly seem impossible or even impractical.
I'm no expert in this kind of abstraction; am I missing something
important?
-- sandro
[1] http://lists.w3.org/Archives/Public/www-webont-wg/2003Aug/0084
[2] http://lists.w3.org/Archives/Public/w3c-rdfcore-wg/2003May/0016