
Re: Don't you use GeoSPARQL? [ was: Do you use GeoSPARQL?]

From: Frans Knibbe <fjknibbe@gmail.com>
Date: Sat, 1 Aug 2020 22:36:13 +0200
Message-ID: <CADh4F1SWVfefagbK8eSMt-rdwdoQaqvhYVo1TSL9BgsiOs2N2A@mail.gmail.com>
To: SW-forum <semantic-web@w3.org>, Christophe Debruyne <christophe.debruyne@gmail.com>
On Wed, 29 Jul 2020 at 16:51, Christophe Debruyne <christophe.debruyne@gmail.com> wrote:

> I second that the functions provided by GeoSPARQL are limited, so I have
> little to add to that. While computationally expensive, it would be great
> to see a future spec consider support for error margins/tolerances
> (e.g., as provided by
> https://docs.oracle.com/database/121/SPATL/sdo_geom-relate.htm#SPATL1107
> -- "Tolerance reflects the distance that two points can be apart and still
> be considered the same, for example, to accommodate rounding errors"). The
> datasets I've worked with came from authoritative sources, and various
> generalizations contained very, very small rounding errors (because
> maintaining topological consistency across generalizations was too
> expensive). The functions provided by GeoSPARQL do not account for that. To
> me, at least specifying the possibility of providing both
> touches(?wkt1, ?wkt2)
> and
> touches(?wkt1, ?wkt2, 0.5)
> doesn't constitute much of a leap (the default tolerance value could be
> taken to be 0) and is backward compatible.
> Not only would this accommodate rounding errors (at the source, or
> introduced while converting between coordinate systems), we could also
> avail ourselves of the functionality provided by existing tooling and vendors.
> It would furthermore allow one to check whether geometries are "roughly"
> equivalent; for instance, the convex hull of a point cloud vs. a
> geometry.
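A tolerance-aware variant of such a predicate can be sketched in plain Python; the function name, coordinates, and tolerance value below are illustrative assumptions, not part of any GeoSPARQL spec:

```python
import math

def points_equal(p, q, tolerance=0.0):
    """Treat two points as the same if they are at most `tolerance` apart.

    With the default tolerance of 0 this reduces to exact equality,
    which is what makes the extended predicate backward compatible.
    """
    return math.dist(p, q) <= tolerance

# Two polygon corners that should coincide, but differ by a tiny
# rounding error introduced during generalization:
a = (4.123456, 52.078901)
b = (4.123456, 52.078903)

print(points_equal(a, b))        # exact comparison: False
print(points_equal(a, b, 1e-5))  # with tolerance:   True
```

The same pattern extends to relations like touches: the existing exact test stays the default, and an optional third argument relaxes it.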

There is definitely a need to address accuracy or tolerance in spatial data
on the web. A problem is that the issue is broader than just spatial data.
The need to be able to indicate accuracy applies to all measured numerical
values (e.g. time, concentration, temperature). And I do believe that
design solutions should be sought at their proper abstraction level,
otherwise things will get messy. So ideally a solution should exist on that
broader level. Has there ever been an initiative in this direction for the
Semantic Web or Linked Data?

One way would be to somehow ensure that all measured numbers are expressed
using significant digits. That would not involve actually changing the way
measured values are encoded, just the way they are interpreted. But it
would have to be an obligation not only for data publishers, but also for
implementers of numerical functions, which would have to take significant
digits into account in calculations and their output. Such an agreement
could help a lot to combat the proliferation of untruthful data on the web.
A prerequisite for significant digits is the ability to use scientific
notation. Thankfully that is possible using the commonly used XML datatypes.
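Python's decimal module illustrates the point, since it preserves the precision of the lexical form rather than collapsing it to a float; this is a sketch of the interpretation idea, not an RDF mechanism:

```python
from decimal import Decimal

def significant_digits(lexical):
    """Count the significant digits of a number written in scientific
    notation. Decimal keeps the written precision, so "1.2E2" and
    "1.20E2" stay distinguishable even though both denote 120.
    """
    return len(Decimal(lexical).as_tuple().digits)

print(significant_digits("1.2E2"))   # 2
print(significant_digits("1.20E2"))  # 3
```

A consumer that honored this interpretation would know that "1.20E2" promises one more digit of accuracy than "1.2E2", even though the two values compare equal numerically.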

Perhaps an even better way to deal with accuracy in measured numerical
values is to express uncertainty explicitly, like 1.23 ± 0.02. That way we
could, for example, have x,y,z coordinates like (10.4±0.2, 5.5±0.3, 24±1).
Then the numbers can really speak for themselves. But I'm not aware of any
RDF literal type that can do this.
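Absent such a literal type, the idea can be mimicked in application code. The class below is a hypothetical sketch; the name and the quadrature rule for combining independent uncertainties are my assumptions, not an existing RDF datatype:

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Uncertain:
    """A measured value with an absolute uncertainty, e.g. 1.23 ± 0.02."""
    value: float
    error: float

    def __add__(self, other):
        # For independent measurements, uncertainties add in quadrature.
        return Uncertain(self.value + other.value,
                         math.hypot(self.error, other.error))

    def __str__(self):
        return f"{self.value} ± {self.error}"

x = Uncertain(10.4, 0.2)
y = Uncertain(5.5, 0.3)
print(x + y)  # value 15.9 with a combined uncertainty of about 0.36
```

A real solution would presumably define a datatype (with a lexical form like "10.4±0.2") so that stores and SPARQL functions could propagate uncertainty for users, rather than every application reimplementing it.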


> On Tue, Jul 28, 2020 at 10:39 PM Adrian Gschwend <ml-ktk@netlabs.org>
> wrote:
>> On 28.07.20 22:06, Frans Knibbe wrote:
>> Hi Frans,
>> >   * Have you ever used GeoSPARQL? If so, any problems?
>> Yes, love it. But it also sucks due to the bad spec. Off the top of my head:
>> - spec is unreadable in its current PDF form. I tried to make a HTML
>> version myself once but gave up as the PDF is really utterly broken. So
>> please make this a proper HTML spec like SPARQL & co.
>> - I'm not sure if it's due to the bad spec, missing test cases for
>> developers (?), or a lack of sample queries that clearly show how to use
>> it, but it was really hard for me to understand how I was supposed to
>> write a proper query that did what I wanted to do.
>> I'm good in SPARQL but I really did not dig GeoSPARQL for a long time.
>> For a while I also had the impression that every implementation (Fuseki
>> vs Stardog vs Virtuoso) behaved differently or was broken. It got better
>> now but it's not a good sign if those who implement it can't agree on
>> it. Might also be related to:
>> - the fact that the geometry is on its own node does not make it easy. I
>> see the point in some use-cases (hasGeometry vs defaultGeometry for
>> example), but maybe there could be a simpler representation for default
>> use-cases
>> - why there is an asWKT property beats me. I mean, we have datatypes and
>> we use them, so why would one name the property after the datatype
>> used in the literal? IMO there should be a neutral property, and based on
>> the datatype of the literal the store figures out what serialization it
>> is. Also not sure what is hip in the JSON world today, but maybe it's
>> time for some GeoJSON-like support as well. Or whatever is used in the
>> web stack nowadays.
>> >   * Domains like geography, astronomy, biology, computer graphics, web
>> >     graphics, building information modelling (BIM) and computer aided
>> >     design (CAD) all use spatial data. Have you ever tried to somehow
>> >     combine different types of spatial data or spatial knowledge? If so,
>> >     how was that experience?
>> the functions that everyone implements are rather basic, and the rest is
>> often not straightforward to add, so I had limited success with anything
>> besides the default functions.
>> regards
>> Adrian
Received on Saturday, 1 August 2020 20:36:36 UTC
