model theory of error

Last Tuesday we were unable to resolve the issue of builtin functions
because we could not decide on their semantics. The problem is that builtin
functions are partial and are supposed to return an error if given
arguments of the wrong type.

One approach we considered was to simulate them using predicates by
translating functional expressions into formulas that involve builtin
predicates. Unfortunately, this approach has the problem that occurrences of
such functions in rule conclusions must be translated differently from their
occurrences in rule premises (an example follows). This was perceived as a
problem if we are to view BLD as a subset of the (future) first-order dialect,
since the first-order dialect will not have such a distinction.
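
For illustration, here is one natural flattening (the exact scheme we
discussed may differ in details; add is a hypothetical builtin function and
add# a hypothetical builtin predicate holding its graph). An occurrence of
p(add(?x,1)) in a rule premise can be read existentially,

   Exists ?v ( add#(?x,1,?v) /\ p(?v) )

so the premise simply fails whenever add is undefined on the given ?x. In a
rule conclusion the same existential reading would force the rule to be
violated whenever its body holds for such an ?x, so something like

   Forall ?v ( add#(?x,1,?v) -> p(?v) )

is needed instead. The two readings coincide only for total functions, which
is exactly what we cannot assume for builtins.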

I thought a bit about a direct model-theoretic semantics, and there is at
least one solution that does not seem too complicated. Comments are welcome.

1. We need a special constant (not sure if it should be in rif:IRI or
   in its own symbol space) of the form rif:error.

2. The domain of interpretation of any semantic structure will also have a
   special distinguished element called _|_.

3. All semantic structures will be required to interpret rif:error as _|_
   (i.e., I_const(rif:error) = _|_).

4. The builtin functions will be defined so that they return _|_ (i.e., the
   value of rif:error) whenever their arguments are of the wrong type.

5. For predicates, we have two options; both are sketched after this list.
   a. The simplest option is to say that an atomic formula, p(a,b,c,...), is
      false if any of its arguments evaluates to _|_ in the interpretation.

   b. This option introduces a new truth value, called E (error), such that
      ~E = E, E/\F=F, E/\T=E, E\/F=E, E\/T=T. Then we can say that
      p(a,b,c,...) has truth value E if at least one of the arguments is _|_.

  The advantage of option (b) over option (a) is that we have an explicit
  representation for errors. 
  The disadvantage is that it is much more complicated. Many results need
  to be ported to account for this new truth value, and I did not check
  this carefully. It is quite possible that this idea breaks somewhere.

  I think option (b) is too much work for very little gain.
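
To make the two options concrete, here is a small Python sketch of how atoms
could be evaluated under (a) and (b). All names in it (BOTTOM, numeric_add,
eval_atom_a, eval_atom_b) are made up for illustration; it is not meant to
prescribe anything about the actual builtin library.

    # The distinguished domain element _|_ and the fixed value of rif:error
    # (item 3: every semantic structure interprets rif:error as _|_).
    BOTTOM = object()
    I_const = {"rif:error": BOTTOM}

    # A builtin function that returns _|_ on arguments of the wrong type.
    def numeric_add(x, y):
        if isinstance(x, (int, float)) and isinstance(y, (int, float)):
            return x + y
        return BOTTOM

    # Option (a): an atom is plainly false if any argument is _|_;
    # otherwise its truth is whatever the predicate's interpretation says.
    def eval_atom_a(pred, args):
        if any(a is BOTTOM for a in args):
            return False
        return pred(*args)

    # Option (b): truth values T, F, E with ~E=E, E/\F=F, E/\T=E,
    # E\/F=E, E\/T=T.
    T, F, E = "T", "F", "E"

    def neg(v):
        return {T: F, F: T, E: E}[v]

    def conj(v, w):
        if F in (v, w):
            return F
        if E in (v, w):
            return E
        return T

    def disj(v, w):
        if T in (v, w):
            return T
        if E in (v, w):
            return E
        return F

    def eval_atom_b(pred, args):
        if any(a is BOTTOM for a in args):
            return E
        return T if pred(*args) else F

For example, numeric_add(1, "abc") evaluates to _|_, so an atom built from
it is false under option (a) and has truth value E under option (b).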


Comments? (esp. if anyone can see whether option (a) breaks somewhere)


	--michael  
