
Re: Report from EuroSDR meeting 2016-02-11

From: Andrea Perego <andrea.perego@jrc.ec.europa.eu>
Date: Wed, 24 Feb 2016 15:28:26 +0100
To: Frans Knibbe <frans.knibbe@geodan.nl>
Cc: SDW WG Public List <public-sdw-wg@w3.org>, Bénédicte Bucher <Benedicte.Bucher@ign.fr>, Michael Lutz <michael.lutz@jrc.ec.europa.eu>
Message-id: <56CDBE0A.60302@jrc.ec.europa.eu>

Many thanks for sharing your minutes, Frans.

Unfortunately, Michael and I were not able to attend the EuroSDR 
meeting, but this is of course very relevant to us and our colleagues 
working on INSPIRE.

Also, I think your notes, Frans, highlight two important issues 
concerning not only the INSPIRE community but geodata providers in 
general, which can be summarised as follows:

1. What exactly should we do?

2. How can this be done?

The first point boils down to whether the target should be the Semantic 
Web or simply the Web. On this, I totally agree with the step-wise 
approach: first, the Web; then, if need be, the Semantic Web. Of 
course, if you can do both, all the better. Otherwise, the primary 
target should be the Web.

AFAIK, geodata providers, in the majority of cases, are not aware of 
the benefits of targeting the Web. Usually the discussion is about "how 
we can use LD / RDF", and not "how we can make data available to Web 
users & developers".

So, it would indeed be useful to clarify:

(a) what are the advantages (for providers and users) of publishing data 
in a webby way

(b) that, by doing (a), you've already completed a lot of work needed to 
move to the second "phase" - i.e., Linked Data, RDF, SPARQL, etc.

This also relates to the question of "how much work is needed to 
achieve this", which introduces the second point: the 
strategy/approach to be adopted for supporting LD and/or making geodata 
more webby.

The main concern is usually about the impact on existing 
infrastructures and the underlying data publication workflow. This issue 
has always been very clear to us at JRC, since we started investigating 
how to integrate LD into the INSPIRE infrastructure (the initial trigger 
was facilitating cross-sector sharing and re-use of INSPIRE data).

The approach we experimented with aimed at building a layer on top of 
the existing platforms - possibly requiring very limited effort and 
resources on the provider side.

This is actually what was done in the pilot exercise concerning the 
Core Location Vocabulary, for address data 
(http://location.testproject.eu/), and the same approach was adopted in 
GeoDCAT-AP for metadata - i.e., enabling anybody having ISO 19115 
metadata to serve it as GeoDCAT-AP, also via a CSW. The current 
implementations of GeoDCAT-AP are moving in this direction.


And, AFAIK, this is what was done in the addresses and metadata pilots 
of the Geonovum testbed.
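As an illustration of what such a "layer on top" could look like, the 
sketch below maps a minimal ISO 19139 record to a DCAT-style description 
using only standard-library XML tooling. The element names are real ISO 
19139 / DCAT terms, but the mapping itself is a toy stand-in for the 
actual GeoDCAT-AP XSLT, and the sample record is invented:

```python
import xml.etree.ElementTree as ET

ISO = {"gmd": "http://www.isotc211.org/2005/gmd",
       "gco": "http://www.isotc211.org/2005/gco"}

def iso_to_dcat(iso_xml: str) -> str:
    """Map title and abstract from an ISO 19139 record to a minimal
    DCAT Dataset description in RDF/XML (toy mapping, not GeoDCAT-AP)."""
    root = ET.fromstring(iso_xml)
    title = root.findtext(".//gmd:title/gco:CharacterString", namespaces=ISO)
    abstract = root.findtext(".//gmd:abstract/gco:CharacterString", namespaces=ISO)
    rdf = ET.Element("{http://www.w3.org/1999/02/22-rdf-syntax-ns#}RDF")
    ds = ET.SubElement(rdf, "{http://www.w3.org/ns/dcat#}Dataset")
    ET.SubElement(ds, "{http://purl.org/dc/terms/}title").text = title
    ET.SubElement(ds, "{http://purl.org/dc/terms/}description").text = abstract
    return ET.tostring(rdf, encoding="unicode")

# Invented sample record, with the minimum ISO 19139 structure:
record = """<gmd:MD_Metadata xmlns:gmd="http://www.isotc211.org/2005/gmd"
                 xmlns:gco="http://www.isotc211.org/2005/gco">
  <gmd:identificationInfo><gmd:MD_DataIdentification>
    <gmd:citation><gmd:CI_Citation>
      <gmd:title><gco:CharacterString>Addresses of Ispra</gco:CharacterString></gmd:title>
    </gmd:CI_Citation></gmd:citation>
    <gmd:abstract><gco:CharacterString>Sample address dataset.</gco:CharacterString></gmd:abstract>
  </gmd:MD_DataIdentification></gmd:identificationInfo>
</gmd:MD_Metadata>"""

print(iso_to_dcat(record))
```

The point is that the source record stays untouched in the existing 
catalogue; the DCAT view is generated on the fly by a thin conversion 
layer.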

This is to say that geodata providers do not necessarily have to 
"switch" to another platform in order to publish their data differently 
(as in the XML vs JSON case you mentioned, Frans). I think this is 
something that should be clearly explained.

On the other hand, the real implementation issue we saw in the work done 
so far (GeoDCAT-AP included) concerns the inconsistent use of global & 
persistent identifiers (the basis for persistent HTTP URIs), since this 
does have an impact on how data (and metadata) are produced and 
maintained. However, in our understanding, this is not an issue just for 
geo linked data, but something causing problems within the geospatial 
platform itself - which is quite apparent in a distributed 
infrastructure such as INSPIRE.
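A minimal illustration of what such a persistent identifier rule might 
look like (the namespace and identifiers below are invented): the URI 
depends only on a stable base, the feature type, and the provider's 
local identifier, never on the serving technology.

```python
from urllib.parse import quote

# Hypothetical provider namespace; any stable, technology-neutral base works.
BASE = "http://data.example.org/id"

def persistent_uri(feature_type: str, local_id: str) -> str:
    """Mint an HTTP URI that depends only on a stable namespace, the
    feature type, and the provider's local identifier - never on the
    serving technology - so it can outlive platform changes."""
    return f"{BASE}/{quote(feature_type)}/{quote(local_id)}"

# The same inputs always yield the same URI, whatever the backend:
print(persistent_uri("address", "IT-21027-0042"))
```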

So, two other points that may be worth addressing are:

(c) what I can do by exploiting my current (geo) platform, without 
changing it

(d) which features may require changes to my existing infrastructure 
and/or data publication workflow - possibly also explaining whether/how 
they bring benefits to the existing infrastructure, etc.



On 24/02/2016 11:07, Frans Knibbe wrote:
> Hello all,
> Following the Geodata on the Web conference on February 10, which
> followed our face to face meetings in Amersfoort, I attended a meeting
> organized by EuroSDR <http://www.eurosdr.net/> on February 11, also
> about the topic of spatial data on the web. In their own words,
> /"EuroSDR is a not-for-profit organisation linking National Mapping and
> Cadastral Agencies with Research Institutes and Universities in Europe
> for the purpose of applied research in spatial data provision,
> management and delivery./"
> National Mapping and Cadastral Agencies (NMCAs) are important
> providers of official spatial data in Europe. They provide core data
> that are well curated and in many ways can be used or enriched by
> broader communities.
> Attached is a document with the notes from the meeting. A report and
> possibly a EuroSDR position paper on the topic of spatial data on the
> web are forthcoming.
> The meeting showed that there are many different views. There is some
> interest among NMCAs in adopting new paradigms for publishing spatial
> data, but there is also some hesitation. One reason for that is that
> NMCAs have just gone through a rigorous change in data publication
> methods because of INSPIRE, which basically says NMCAs have to use
> OGC-defined services like WMS, WFS and CSW (with an INSPIRE flavour).
> They have invested heavily in this change and the pay-off is not all
> that clear, so it is imaginable that a message saying they should
> switch to HTTP URIs and use JSON instead of XML is not automatically
> met with enthusiasm.
> Personally I heard a confirmation of the idea that we should not get too
> hung up on JSON, but rather focus on recommendations on a more
> fundamental level that improve matters regardless of data format.
> Some other personal notes:
>   * Data publishers wonder if customers want Linked Data;
>   * Existing INSPIRE services are not used intensively;
>   * There is a common need for an overview of software that can help
>     with publishing spatial data on the web. The GeoKnow market and
>     research overview
>     <http://svn.aksw.org/projects/GeoKnow/Public/D2.1.1_Market_and_Research_Overview.pdf>
>     was helpful, but it is almost three years old now and the
>     information is no longer updated. (I promised I would report this
>     to the SDWWG as a requirement.)
>   * Similar to the previous point, people would like to see an
>     overview of implementations.
>   * NMCAs should be one of the audiences for our Best Practices
>     document, as a particular type of data publisher and data curator.
>   * I think it would be good if we can present a step-by-step approach,
>     as advocated by Jeremy in his talk in Amersfoort: show that it is
>     not required to immediately go all the way with installing triple
>     stores and maintaining SPARQL endpoints, but that a few simple
>     steps, like using persistent URIs for things or using web standards
>     for metadata, can already help a lot.
> Regards,
> Frans

Andrea Perego, Ph.D.
Scientific / Technical Project Officer
European Commission DG JRC
Institute for Environment & Sustainability
Unit H06 - Digital Earth & Reference Data
Via E. Fermi, 2749 - TP 262
21027 Ispra VA, Italy

Received on Wednesday, 24 February 2016 14:29:11 UTC
