Re: completeness

I think this isn't a responsive answer, but perhaps the question  
isn't stated clearly enough.

One important point of having a standard is to support as much  
interoperability as is possible, reasonable, and helpful. It's  
particularly important that the standard be clear on what's required  
so that if a system deviates, customers and competitors have a  
reasonably objective basis for discussion. I refer everyone to the  
excellent and amusing essay "Why specs matter" by Mark Pilgrim:
	http://diveintomark.org/archives/2004/08/16/specs

This is why grammars are generally preferred to a pile of prose and  
why, at the W3C, formal methods have gotten a lot of purchase (there  
are formal semantics for XQuery and XPath, for example). (HTML5  
is an interesting spec not based on formal methods, but there the  
spec is given in terms of canonical algorithms exhaustively reverse  
engineered from existing browser behavior. Also, each particular  
algorithm admits of relatively few necessary degrees of freedom,  
unlike, say, a query language.) Yes, there will be bugs throughout  
any system at all levels, and no one claims or believes otherwise: no  
formality or exhaustiveness of the spec will change this fact. But  
just considering the experience of RDF Model & Syntax should make us  
all very interested in getting as precise a spec as we possibly can.

In this case, there have been claims of trading completeness for  
scalability. Let's grant that this is what has been done for the sake  
of argument. After all, with a bit of work, one can make, e.g., a DL  
Lite implementation serve as an incomplete (but scalable) reasoner  
for OWL DL (you have to explain how you ignore the non-DL Lite bits,  
of course). But in this case, the incompleteness is very clearly  
specified, and specified in a manner that is neutral with regard to  
implementation. (Which is another  
important point: The W3C, at least some of the time, tries to be fair  
between big vendors and small vendors and everyone else, although you  
can read complaints about it being in the thrall of the big vendors.  
Requiring people to reverse engineer *anyone's* implementation in  
order to be compatible defeats the point of specification.)
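
To make the kind of cleanly specified incompleteness I have in mind  
concrete, here is a toy illustration of my own (not drawn from any  
particular spec or system). Take the axioms

	A \sqsubseteq B \sqcup C,   B \sqsubseteq D,   C \sqsubseteq D

A complete OWL DL reasoner must conclude A \sqsubseteq D (by cases on  
the disjunction), while a DL Lite reasoner cannot even represent the  
first axiom. If the spec says exactly what happens to such axioms  
(say, that they are ignored), users can predict that this entailment  
will be missed; if it doesn't, they can only find out by probing a  
particular implementation.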

Finally, it's easy to be (formally) incomplete and *not* scalable or  
not robustly scalable or not easily robustly scalable. After all,  
this is basically what happened with OWL Lite. For OWL Lite, to  
program reasonably interoperable systems *independently* pretty much  
requires implementing SHIF, which is not easy to make scalable. Of  
course, I could throw out parts of SHIF and you could throw out  
parts of SHIF, but unless we throw out the *same bits* we won't be  
reasonably interoperable and users will have difficulty assessing the  
systems.
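
For a toy illustration of this (again mine, not a claim about any  
actual system): suppose I throw out inverse roles and you throw out  
transitive roles. Given

	hasChild \equiv hasParent^{-},   hasChild(alice, bob)

your system infers hasParent(bob, alice) and mine doesn't; on an  
ontology that leans on transitive roles instead, the gap is reversed.  
Each of us is "incomplete for SHIF", but our users get different  
answers and have no spec-level basis for comparing us.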

Note, I'm not saying that showing formal properties provides  
total certainty. But I think it's reasonable for the working group to  
ask for more than mere assertion. And people *do* spend a lot of time  
and money on interop (e.g., WS-I, RDF Core, HTML5 etc. etc.).

OWL processors are *components* of other systems, including web apps.  
So it's a bit apples and oranges to point out that people don't make  
completeness or correctness demands on web apps. They do on  
databases. (And XSLT processors... imagine if Saxon gave a *different*  
result than libxslt!) While we can't get perfection, we should strive  
to do well.

Cheers,
Bijan.

Received on Thursday, 21 February 2008 17:30:12 UTC