
Re: AW: Datalog DEXPTIME Complexity + Safe Execution of Recursive Axioms + English

From: Andrei Voronkov <voronkov@cs.man.ac.uk>
Date: Tue, 7 Oct 2003 15:42:27 +0100
Message-ID: <16258.53459.222450.981074@rpc25.cs.man.ac.uk>
To: Adrian Walker <adrianw@snet.net>
Cc: "Enrico Franconi" <franconi@inf.unibz.it>, <www-rdf-logic@w3.org>, <www-rdf-interest@w3.org>, voronkov@cs.man.ac.uk


This is probably a discussion on a mailing list I am not a member of.
What Adrian writes is true. Just a couple of subtle notes.

> It's possible that an important point is overlooked when many of us say 
> things like "datalog can only compute things of polynomial complexity"

It is right to say that Datalog can only EXPRESS things of polynomial
complexity. "Compute" sounds too ambiguous.
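
To make the data complexity point concrete, here is a minimal sketch in
Python of naive bottom-up evaluation for the usual transitive-closure
program (only an illustration, not any particular engine from this
thread). With the program fixed, there are at most n^2 derivable path
facts over n constants, so the fixpoint is reached after polynomially
many iterations in the size of the database:

# Fixed Datalog program:
#   path(X, Y) :- edge(X, Y).
#   path(X, Y) :- edge(X, Z), path(Z, Y).

def transitive_closure(edges):
    """edges: set of (a, b) pairs (the EDB); returns all path facts."""
    path = set(edges)                 # path(X,Y) :- edge(X,Y).
    changed = True
    while changed:                    # iterate to the fixpoint
        changed = False
        for (x, z) in edges:          # path(X,Y) :- edge(X,Z), path(Z,Y).
            for (z2, y) in list(path):
                if z == z2 and (x, y) not in path:
                    path.add((x, y))
                    changed = True
    return path

print(transitive_closure({("a", "b"), ("b", "c"), ("c", "d")}))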

> Now it would seem that, if one is proposing a rule interpreter, then its 
> inputs include variable datalog programs P.  So, the applicable complexity 
> would appear to be "complete in DEXPTIME".

The use of data complexity as the main complexity measure in database
theory is usually based on two assumptions: 

(1) The database may be huge, while queries (in this case datalog
programs) are small.

(2) The collection of queries is fairly stable and does not change
much. This means each query can be optimised as a fixed one (but the
database may change).

This is only my personal opinion, but both assumptions seem
questionable when non-relational data models are considered, so the
combined complexity (data + program, usually equivalent to the program
complexity) may be more appropriate.
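
A back-of-the-envelope illustration of where the gap between the two
measures comes from (a sketch of the upper bound only, not a hardness
argument): naive evaluation derives at most, summed over the
predicates p, |constants|^arity(p) ground facts. For a fixed program
the arities are constants, so this bound is polynomial in the
database; if the program is part of the input, a program of size k can
use a predicate of arity about k, and the bound becomes exponential in
the program, which is where the DEXPTIME combined complexity Adrian
quotes comes from.

def max_ground_facts(num_constants, arities):
    # Upper bound on the number of distinct ground facts that any
    # Datalog evaluation over predicates with these arities can derive.
    return sum(num_constants ** a for a in arities)

print(max_ground_facts(10, [2, 2]))  # fixed program, arity 2: 200 facts
print(max_ground_facts(10, [20]))    # arity grows with the program: 10**20 facts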

Best, AV
