Re: rdf-related built-ins (and a few others)

Sandro Hawke wrote:
> Dave Reynolds <der@hplb.hpl.hp.com> writes:
>> ** RDF manipulation
>>
>> We intend to include a means for representing RDF data in Core based on 
>> the frame syntax and some additional datatypes. We will need a 
>> corresponding set of builtin functions.
>>
>> I suggest a minimum would be to support the SPARQL [2] functions:
>>     isIRI
>>     isBlank
>>     isLiteral
>>     lang
>>     datatype
>>     langMatches
>>     str
>>
>> Plus we would need constructors for our mapped version of the RDF 
>> datatypes - iri, blankNode, plain-literal-with-lang-tag.
>> We need to complete the RDF embedding proposal before we can properly 
>> define this latter group.
> 
> I agree we need the complete RDF embedding proposal before we can
> properly define this latter group, but let me raise a basic concern with
> this group.   As I understand it, these functions operate on the 
> syntactic structures of RDF graphs, rather than on the knowledge
> expressed about the domain of discourse.  

True, but surely that's what a lot of the builtins do (e.g. the ones 
for exploding dateTime values)?

> They're a bit like prolog's
> var/1 -- very useful, but not really a part of the logic.

Disagree. var/1 is not part of the logic because it exposes the 
operational semantics. That is like SPARQL's "bound", which I left off 
the list.

Whereas all the above operations are declarative.
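
For example (a plain SPARQL sketch over an arbitrary rdfs:label 
pattern, not a proposed RIF syntax), a filter like the one below has a 
fixed, order-independent meaning given only the terms the variables are 
bound to:

     PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
     SELECT ?label
     WHERE {
       ?x rdfs:label ?label .
       # keep only literal labels carrying an English language tag;
       # the outcome depends only on the term ?label is bound to
       FILTER ( isLiteral(?label) && langMatches(lang(?label), "en") )
     }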

> To illustrate the difference, as I understand it, one might have a
> funtion like "odd" which takes an integer and tells you if it's odd or
> not.  That is, you could syntactically apply "odd" to a literal data
> value which denoted an integer, or to a bound variable, or another
> function term which returns an integer, or to a URI-term which denotes
> an integer, etc.  By contrast, the 'datatype' function doesn't "take" an
> integer; it operates more directly on a literal data value term or a
> variable bound to one.  The number one, itself, is odd -- but the number
> one does not have a datatype.  It can be expressed in several different
> literal data value terms, including "1"^^xsd:nonNegativeInteger,
> "1"^^xsd:int, and "1"^^xsd:integer.   

I don't fully follow the distinction.

Yes, these operations are data-structure operations, but so what?

Isn't it just as true to say that representing a person's name by a 
function:
     nameOf(personID, "Dave", "Reynolds")

or one of these proposed frame things:
     personID[firstName->"Dave", lastName->"Reynolds"]

is a data-structure operation? We have to be able to construct and pull 
apart these data structures, and the modelling choice is exposed to the 
rules even though it is not that significant in knowledge-representation 
terms.

If we use any of the integer types below xsd:integer, we are making 
computational rather than purely domain-modelling choices, yet we have 
xsd:long in the core.
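
To make the value/term distinction concrete (purely illustrative 
SPARQL, with eg:count a made-up property): the first filter below 
compares values, the second inspects the term, and both are ordinary 
declarative tests:

     PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
     PREFIX eg:  <http://example.org/ns#>   # made-up namespace
     SELECT ?s
     WHERE {
       ?s eg:count ?n .
       # Sandro's spellings of the number one all denote the same value,
       # so they all satisfy the first, value-level filter; only
       # "1"^^xsd:integer satisfies the second, term-level one
       FILTER ( ?n = 1 )
       FILTER ( datatype(?n) = xsd:integer )
     }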

> From a software engineering perspective, these functions seem to violate
> modularity, in a sense poking through the logical abstraction of the
> language.

Disagree. The existence of different types is already explicit and 
exposed; I don't see the loss of an abstraction barrier.

I see this more as a dynamic/weak typing v. static/strong typing 
issue. RIF has to be able to support dynamic typing in order to handle 
RDF and many object models.
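
For instance (again only a SPARQL sketch, with eg:hasValue a made-up 
property), RDF data routinely mixes IRIs and literals in the same 
position, and a rule or query has to be able to dispatch on which kind 
of term it has been handed:

     PREFIX eg: <http://example.org/ns#>   # made-up namespace
     SELECT ?s ?v
     WHERE {
       ?s eg:hasValue ?v .
       # eg:hasValue may point at a resource or at a literal; isIRI
       # picks out the resource-valued cases, i.e. dispatches on the
       # kind of term ?v is bound to
       FILTER ( isIRI(?v) )
     }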

>> ** Output
>>
>> Gary has suggested [3] that a minimal feature to enable test cases to be 
>> written in RIF would be the ability to output a data value (the output 
>> then being used to indicate success of the test case). Thus we would need:
>>
>>      rif:print
>>
>> [3] http://www.w3.org/2005/rules/wg/wiki/Arch/Test_Cases
> 
> I wouldn't expect this to be in Core.  I'd expect it to be in an
> extension which the test cases use.

So the test cases for Core would not be expressible in Core?
Perhaps that's OK.

Dave
-- 
Hewlett-Packard Limited
Registered Office: Cain Road, Bracknell, Berks RG12 1HN
Registered No: 690597 England

Received on Friday, 22 June 2007 15:42:28 UTC