Re: comments on current version BLD document: symbols, datatypes, semantics

> OK.  So, the only remaining open issue is the question whether the set
> of support for datatypes is fixed or not, which is discussed in the
> thread following [1].
> 
> [1] http://lists.w3.org/Archives/Public/public-rif-wg/2007Oct/0063.html
> 
> 
> Just one more remark:

Could not resist the urge to reply, below :-)


> >>> The standard way is to first define an alphabet (which includes all the
> >>> symbols, connectives, etc.) and then define the rules for putting the
> >>> alphabet symbols together into formulas.  This is not explicitly mentioned
> >>> -- an omission. It is mentioned now. In fact, there was a bad typo, which
> >>> said "The language of RIF ..." while it should have been "The alphabet of
> >>> RIF ...".
> >> As I understand, the standard way is to let the alphabet vary; the user
> >> can choose an alphabet, and the logic defines which formulas can be
> >> obtained from this alphabet and the logical connectives (i.e. the
> >> language). in my example above, the alphabet A is chosen by the user,
> >> and LA is the language obtained from A and the logical connectives and
> >> syntax formation rules in logic.
> > 
> > Nope. When you define a logic, you would normally say that you have a set
> > Const, Var, etc., without giving out many details.
> > 
> > But when you are specifying a ***concrete language*** then you must state
> > what your alphabet is, and it is fixed. An analogy is to say that Java
> > should not have a fixed alphabet and each user should be able to decide
> > which sequences of characters are to be allowed as variables, integers, etc.
> > 
> > We are defining a concrete language, not just a logic.
> 
> I would argue that the RDF and OWL are concrete languages.  In both
> languages, the alphabet is not fixed; the symbols simply have to be of a
> specific shape (e.g. URI or literal).
> However, as I said, I gave up my resistance to fixing the alphabet :-)

I claim that the OWL alphabet does include all symbols in the lexical spaces
of its datatypes, since one can write any literal of the right "shape", and
it is supposed to be accepted by OWL.

Now, if you look at their semantics, they define it with respect to an
unspecified "vocabulary", which is probably the basis for your claim.  In
the grand scheme of things it really does not matter, because one could
always say it is a matter of style.  But I would argue that this was
actually a mistake.

An interpretation is supposed to be a structure in which one should be able
to interpret any statement, S, of a given concrete language (here, OWL).
But the way it is defined in OWL
(http://www.w3.org/TR/owl-semantics/direct.html), if S contains a literal
that is not in the vocabulary of one particular interpretation, I, then S
has no meaning with respect to I.

The normal way to define things is to say that a logic language has an
alphabet (vocabulary + a small set of special symbols, like parentheses)
and then go from there. By this measure, OWL does not define a language but
rather a group of languages (I am not talking about Lite vs DL, etc. -- OWL
DL can be said to be a group of languages according to this view). Given
two OWL statements, S1 & S2, there is an OWL language that contains S1 but
not S2, one that contains S2 but not S1, and one that contains both.
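To make the point concrete, here is a small illustrative sketch (plain
Python, not OWL tooling; the statements, literals, and vocabularies are
hypothetical examples, and "meaningful" stands in for the vocabulary
condition in the OWL direct semantics):

```python
# Sketch: a vocabulary-relative notion of meaning.  A statement is
# modeled as the set of literals it mentions; an interpretation's
# vocabulary is a set of literals.  All names below are made up.

def interpretable(statement_literals, vocabulary):
    """A statement has a meaning w.r.t. an interpretation only if every
    literal it mentions belongs to that interpretation's vocabulary."""
    return statement_literals <= vocabulary

S1 = {'"abc"'}            # a statement mentioning the plain literal "abc"
S2 = {'"42"^^xsd:int'}    # a statement mentioning a typed literal

V1 = {'"abc"'}                    # a vocabulary with only the plain literal
V2 = {'"42"^^xsd:int'}            # ... with only the typed literal
V3 = {'"abc"', '"42"^^xsd:int'}   # ... with both

# Each choice of vocabulary induces a different "language":
print(interpretable(S1, V1), interpretable(S2, V1))  # True False
print(interpretable(S1, V2), interpretable(S2, V2))  # False True
print(interpretable(S1, V3), interpretable(S2, V3))  # True True
```

With a fixed alphabet, by contrast, every well-shaped literal would be in
every interpretation's domain of discourse, and the last two lines could
never print False.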

Another analogy is to view C as a group of languages that vary in their
alphabets.

As I said, this all can be seen as a matter of style, but I'd rather stick
with the standard way of doing things in logic and CS. In any case, your
claim that this is "highly undesirable" is a slight exaggeration :-)


	--michael  

Received on Friday, 19 October 2007 21:56:34 UTC