John Goodwin wrote:
I don't imagine that you'd want to do something like

FILTER(withinDistance(?g1, ?g2, 3.2))

with triples. You could write it as

[] a :PairwiseDistance ;
    :point ?g1 ;
    :point ?g2 ;
    :distance ?d .
FILTER(?d < 3.2)

but it seems a bit verbose, and if you try to materialise the graph it
will be massive.

Agreed - I had imagined that distance calculations/queries would be done
using a FILTER rather than as triples. 
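For concreteness, here is a minimal sketch of what evaluating such a filter amounts to, in plain Python over a toy in-memory triple list. Everything here is hypothetical: points are (x, y) tuples, distance is Euclidean, and `within_distance` stands in for what a `withinDistance` FILTER function would compute in a real engine.

```python
from math import dist  # Euclidean distance (Python 3.8+)

# Hypothetical toy KB: (subject, predicate, object) triples, with
# point geometries stored directly as (x, y) tuples.
triples = [
    ("thing1", ":where", (0.0, 0.0)),
    ("thing2", ":where", (1.0, 3.0)),
    ("thing3", ":where", (10.0, 10.0)),
]

def within_distance(g1, g2, limit):
    """What a FILTER(withinDistance(?g1, ?g2, 3.2)) call would compute."""
    return dist(g1, g2) < limit

# Evaluate the pattern { ?t1 :where ?g1 . ?t2 :where ?g2 } and
# apply the filter to each candidate binding.
points = [(s, o) for s, p, o in triples if p == ":where"]
answers = [
    (t1, t2)
    for t1, g1 in points
    for t2, g2 in points
    if t1 < t2 and within_distance(g1, g2, 3.2)
]
print(answers)  # [('thing1', 'thing2')]
```

The point of the sketch is only that the distance test lives in the processor, not in the data: no distance triples exist anywhere in the KB.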
Our system handles queries of this form like this:

?thing1 :where ?g1 .
?thing2 :where ?g2 ;
    :part [
        a :Buffer ;
        :center ?g1 ;
        :radius "3.2"
    ] .
The query then defines a buffer of distance 3.2 surrounding one point and asks whether the other point falls within that buffer.  While this is admittedly four triples versus a single filter function call, it has the previously stated advantages of not extending SPARQL processors and of having an RDF representation for the functions.  That last point has several nice properties:
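A sketch of how an engine might evaluate that buffer pattern follows, again in plain Python with hypothetical names. The query pattern is held as triples (with `_:b` as the blank Buffer node), and buffer membership reduces to a distance test against the `:center` and `:radius` of that node:

```python
from math import dist

# The query pattern above, held as triples; "_:b" is the blank
# node playing the :Buffer role (all names hypothetical).
query_graph = [
    ("?thing1", ":where", "?g1"),
    ("?thing2", ":where", "?g2"),
    ("?thing2", ":part", "_:b"),
    ("_:b", "a", ":Buffer"),
    ("_:b", ":center", "?g1"),
    ("_:b", ":radius", "3.2"),
]

# Toy data: entity name -> point geometry.
data = {"a": (0.0, 0.0), "b": (1.0, 3.0), "c": (10.0, 10.0)}

def in_buffer(point, center, radius):
    # The :Buffer node denotes the set of points within `radius`
    # of `center`; membership is a plain distance test.
    return dist(point, center) <= radius

# Pull the radius out of the pattern, then check, for each candidate
# binding of ?thing1/?thing2, whether ?g2 lies inside the buffer
# centred on ?g1.
radius = float(next(o for s, p, o in query_graph if p == ":radius"))
answers = sorted(
    (t1, t2)
    for t1 in data for t2 in data
    if t1 != t2 and in_buffer(data[t2], data[t1], radius)
)
print(answers)  # [('a', 'b'), ('b', 'a')]
```

The buffer description is ordinary RDF, so the same triples that drive the evaluation can be stored, transferred, or returned alongside the answers.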

- You can transfer the data from system to system. 
- If you represent it this way, a CONSTRUCT query can be posed that mirrors the query graph, so the spatial context of how you arrived at the answer is not stored only in the query.
- You can write out the content of a KB in a standard RDF graph.
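The CONSTRUCT-mirroring point can be illustrated with a short sketch: for each answer binding, materialise the buffer description into a result graph, so the spatial context travels with the answer rather than living only in the query text. The binding values and blank-node names below are hypothetical.

```python
# Hypothetical answer bindings from the buffer query
# (one row per match of ?thing1/?g1/?thing2).
bindings = [{"thing1": ":a", "g1": ":a_geom", "thing2": ":b"}]

# Mirror the query graph: for each match, emit the buffer
# description itself into the result graph, CONSTRUCT-style.
result_graph = []
for i, row in enumerate(bindings):
    b = f"_:buf{i}"  # fresh blank node per match
    result_graph += [
        (row["thing2"], ":part", b),
        (b, "a", ":Buffer"),
        (b, ":center", row["g1"]),
        (b, ":radius", '"3.2"'),
    ]

# Serialise the mirrored graph as rough Turtle-style lines.
for s, p, o in result_graph:
    print(f"{s} {p} {o} .")
```

The output graph is plain RDF, so it can be loaded into any other store, which is exactly what a FILTER-only representation cannot offer.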

Indeed, materializing the graph for a variable radius would be huge, which is why we wouldn't do that.

While I don't disagree with Mike that this is now an engineering problem, I disagree with the assertion that it is *just* a matter of requesting some new datatypes and functions.  There are, at least in my mind, open questions about how to support some of the things that spatial RDBMSs can do today.  Consider k-nearest-neighbor queries as an example.  "Show me the five football stadiums nearest to Washington, DC" is a perfectly valid query.  However, since entities can have multiple types in a SemWeb-based KB, it is more challenging to use the spatial index effectively for this query.
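The kNN difficulty can be made concrete with a sketch (entirely hypothetical data, Euclidean distance standing in for real geodetic distance): because an entity can carry several types, a spatial index can only hand back neighbours in distance order, and the engine must post-filter on type, potentially scanning far more than k candidates before it finds k of the requested type.

```python
from math import dist

# Hypothetical entities: a location plus a *set* of types, since
# SemWeb entities are not restricted to a single type.
entities = {
    "fedex_field":   {"loc": (2.0, 1.0), "types": {":Stadium", ":Landmark"}},
    "mt_vernon":     {"loc": (1.0, 1.0), "types": {":Museum", ":Landmark"}},
    "mandt_stadium": {"loc": (3.0, 4.0), "types": {":Stadium"}},
    "camden_yards":  {"loc": (4.0, 4.0), "types": {":Stadium"}},
}

def knn_of_type(origin, k, rdf_type):
    # A spatial index yields candidates in distance order; the type
    # constraint can only be applied afterwards, so entities of the
    # wrong type (mt_vernon here) are fetched and discarded.
    by_distance = sorted(entities,
                         key=lambda e: dist(entities[e]["loc"], origin))
    hits = [e for e in by_distance if rdf_type in entities[e]["types"]]
    return hits[:k]

print(knn_of_type((0.0, 0.0), 2, ":Stadium"))
```

In this toy run the nearest entity overall is a museum, so the index must yield a third candidate before two stadiums are found; with rarer types the wasted scan grows accordingly, which is the open question about combining spatial and type indexes.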