
Re: types of conformance

From: Bijan Parsia <bparsia@cs.man.ac.uk>
Date: Thu, 4 Jan 2007 00:35:23 +0000
Message-Id: <CA1B2C95-A232-4525-B152-BE4EADD6AF2B@cs.man.ac.uk>
Cc: W3C RIF WG <public-rif-wg@w3.org>
To: edbark@nist.gov

On Jan 3, 2007, at 6:23 PM, Ed Barkmeyer wrote:
[snip]

> This is not at all what I meant.  My expectation was that we would  
> define specific sets of features as well-defined "sublanguages" of  
> RIF that have known well-defined semantics.  E.g., a "core RIF",  
> and "core + features X and Y" and "core + feature Z"  and  
> "everything" = core + W, X, Y, and Z.  A tool can conform to one or  
> more of the standard sublanguages, but not to an arbitrary  
> collection of RIF features.

Do you mean that a tool can "implement" such a sublanguage (in  
Michael's terms)?

Presumably, a document could "conform" to such simply by using those  
features. You mean we should call out specific combinations as "key"?

> This is related to what Christian said:
>> We also discussed avoiding trivial compliance: is requiring that all
>> compliant implementations list the feature they "opted out" enough  
>> for
>> that purpose, or do we require that the specification of a dialect  
>> lists
>> a limited number of optional features?
>> Wouldn't that bring us back to the discussion of what is the  
>> bottom line?
>
> The "bottom line" is what I think of as the "core" sublanguage --  
> minimum compliance.  If we have just the "core" and 12  
> independently optional features, we will have in fact defined 2^12  
> distinct sublanguages, and in all likelihood they will have no  
> consistently definable semantics.

I don't understand this. In most cases, if core + all 12 options  
forms a coherent language, which doesn't seem hard to do, then all  
the subsets are coherent as well. (Assuming some sane notion of  
"feature" of course :))

>   So I reject this approach.

I see many reasons why one might reject this approach, but the  
likelihood of a lack of "consistently definable semantics" eludes me  
still. Clarify?

> If we have just 4 independently optional features, we will have 16  
> distinct sublanguages, and even that is probably too many.  I would  
> prefer that we create a kind of partial ordering among the  
> sublanguages, in which any ordered sequence has a consistent  
> semantics, while incomparable branches needn't.

Lostville here. Mixing model theoretic and e.g., operational  
semantics is, of course, tricky (at best) but I thought they were all  
going to have model theoretic semantics.

And aren't we, in core, talking about bog standard features? Consider  
several of the things mentioned: pure relational, with non-recursive  
rules, with recursive rules, and each of those with or without  
function symbols. What's the problem *semantically*?
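Ed's proposed partial ordering can be sketched over exactly these sublanguages: order them by feature-set inclusion, so any chain (ordered sequence) shares semantics by restriction, while incomparable branches need not. A minimal sketch, with illustrative names (not RIF's):

```python
# Sublanguages from the discussion, modeled as feature sets
# (names are hypothetical; the ordering is subset inclusion).
sublangs = {
    "relational":             frozenset(),
    "nonrecursive":           frozenset({"rules"}),
    "recursive":              frozenset({"rules", "recursion"}),
    "nonrecursive+functions": frozenset({"rules", "functions"}),
    "recursive+functions":    frozenset({"rules", "recursion", "functions"}),
}

def comparable(a, b):
    """Two sublanguages are ordered iff one's feature set contains the other's."""
    return a <= b or b <= a

# Incomparable branches: neither extends the other.
print(comparable(sublangs["nonrecursive+functions"],
                 sublangs["recursive"]))            # False
# A chain: relational sits below every other sublanguage.
print(comparable(sublangs["relational"],
                 sublangs["recursive"]))            # True
```

Under this reading, Ed's constraint is that semantics need only be consistent along chains, which is weaker than requiring a single coherent top language covering all branches.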

[snip]
> The greater problem is in trying to define the behaviours of a  
> conforming tool.  We think of a conforming tool as a reasoning  
> engine that implements a useful algorithm that covers and is  
> consistent with the standard semantics for a specified  
> sublanguage.  But what of a tool that simply reads a conforming  
> document and outputs the ruleset in the Jena form?

Document conformance seems to be enough for that, IMHO.

>   Similarly, what of a tool that has some user-friendly browser- 
> based interface that deals with ontology matching or XML message  
> equivalence and spits out transformation rules in RIF format?

I don't know what this is. But document conformance seems all we need  
to say there too.

> These things aren't reasoning engines, but surely they are in some  
> sense conforming.

Er...I'd strongly suggest being *very restrictive* in the sorts of  
tool behavior you strive to regulate. OWL has species validation and  
consistency checking and I don't think you really needed more.  
Document conformance is grand. Marking that a reasoner is sound,  
complete and/or, if possible, a decision procedure is also grand. If  
not implementing a decision procedure, it could be helpful to define  
certain equivalence classes for effective computation (see  
<http://www.cs.mu.oz.au/research/mercury/information/doc-release/mercury_ref/Semantics.html#Semantics>  
for an example of doing this).

I can see wanting to go even further and say something about the set  
of answers (e.g.) under certain general resource bounds, but that is  
an even trickier area to get into so I'd hold off.

> That is, the ability to perform reasoning from a RIF ruleset is a  
> "feature", and one I might add that we may find it difficult to  
> define.

Er...I'd be minimal and do the standard thing. Dialect detection and  
entailment/answers/reasoning seem more than enough. (I.e., (document)  
conformance and implements in Michael's terms; producing and  
consuming (for reasoning) in my terms).

Those seem useful enough and, done right, not too burdensome. These  
don't really distinguish between mix-and-match and named points, afaict.

Cheers,
Bijan.
Received on Thursday, 4 January 2007 00:35:50 GMT
