
[Minutes] 2016-04-06 Coverages Sub Group

From: Phil Archer <phila@w3.org>
Date: Wed, 6 Apr 2016 15:27:46 +0100
To: SDW WG Public List <public-sdw-wg@w3.org>
Message-ID: <57051CE2.9060909@w3.org>
Today's coverages sub group minutes are at 
https://www.w3.org/2016/04/06-sdwcov-minutes and copied below.

        Spatial Data on the Web Coverages Sub Group Teleconference

06 Apr 2016

    See also: [2]IRC log

       [2] http://www.w3.org/2016/04/06-sdwcov-irc


           Attendees: ScottSimmons, Kerry, Maik, sam, billroberts,
           duo, Lewis, phila, eparsons




      * [3]Topics
          1. [4]patent call
          2. [5]Brief recap of previous meeting
          3. [6]terminology for 'subsets' of coverage datasets
             (Action 152)
          4. [7]ANU work on an ontology for earth observation data
          5. [8]Sam Toyer: ANU work on an ontology for representing
             earth observation data as Linked Data (see
             https://github.com/ANU-Linked-Earth-Data/ontology )
          6. [9]criteria for assessing potential solution
      * [10]Summary of Action Items
      * [11]Summary of Resolutions

    <billroberts> ah hi Phil - we're stuck with the webex

    <billroberts> being asked for MIT certificate

    <billroberts> will try, 2 secs

    <billroberts> yes, success - many thanks Phil

    I can do it if you like!

    <scribe> scribe: kerry

    <scribe> scribeNick: kerry

patent call [12]https://www.w3.org/2015/spatial/wiki/Patent_Call

      [12] https://www.w3.org/2015/spatial/wiki/Patent_Call

    propose: approve minutes

      [13] https://www.w3.org/2016/03/23-sdwcov-minutes

    <billroberts> +1


    <dmitrybrizhinev> +1


    RESOLUTION: approve minutes

      [15] https://www.w3.org/2016/03/23-sdwcov-minutes

Brief recap of previous meeting

    bill: reviewed requirements, talked about subsets and
    web-friendly formats, reviewed the Data on the Web view
    ... with some members of that group on the call
    ... large portions of coverage data are gridded (but not all)
    ... for gridded datasets, sections can be defined fairly easily
    ... will work on grid first, and also look into non-gridded for
    important cases


      [16] https://www.w3.org/2015/spatial/track/actions/152

terminology for 'subsets' of coverage datasets (Action 152)

    bill: kerry does not like 'subsetting' but I don't mind either
    ... "extract" or something comes up all the time
    ... minimise misunderstandings

    <billroberts> kerry: there was a raging debate on mailing list.
    Most people don't mind

    <billroberts> suggestions: extract, filter, ...

    <kerry summarises email discussion>

    Maik: notes some reasons to prefer "extract" as the more
    general term, but is not too fussed

    bill: thinks extract may be less confusing

    Proposed: that we use "extract" as the main word in most places
    (and mention subsetting as used for same thing when
    introducing)
    <billroberts> +1


    <Maik> +1

    <dmitrybrizhinev> +1

    <ScottSimmons> +1

    <Duo> 0

    <sam> +1

    RESOLUTION: to encourage the use of "extract" as the main word
    in most places (and mention subsetting as used for same thing
    when introducing)


      [17] https://github.com/ANU-Linked-Earth-Data/ontology

    <sam> More verbose link with examples:


ANU work on an ontology for earth observation data

Sam Toyer: ANU work on an ontology for representing earth observation
data as Linked Data (see
[19]https://github.com/ANU-Linked-Earth-Data/ontology )

      [19] https://github.com/ANU-Linked-Earth-Data/ontology

    <dmitrybrizhinev> no I can't hear

    <sam> sorry, not sure what's going on with phone

    <Duo> should I introduce things while he gets that sorted?

    <sam> yes please

    <dmitrybrizhinev> yes

    Duo: 2 key points: using dggs for data (landsat data)

    <billroberts> DGGS: Discrete Global Grid System

    Duo: stores geospatial data in a standardised format
    ... looking to put it into an RDF Data Cube using the Fuseki
    triple store and the Elda API
    ... Dmitry is developing an ontology inspired by CoverageJSON

    Dmitry: I have been writing the CoverageJSON spec in OWL
    ... see the posted example and you can see it in RDF
    ... lets you define axes and link them to a CRS, and link
    values to some other meaning
    ... this is the way CoverageJSON does it

    <sam> is this working?

    <dmitrybrizhinev> nope, just echoes

    <problems with sound>

    <sam> sure, that works. I only have a little bit to say.

    <sam> my part of the project is to build the API which will be
    used by the client app to access our satellite data

    <sam> I think Duo explained some of that before (Fuseki + Elda)

    maik: interesting to see coveragejson moving this way
    ... what is the main motivation?

    <sam> We've been trying to encode our data as RDF, but expose
    the service as a simple REST-ish API (at Kerry's suggestion)

    dmitrybrizhinev: seemed to be a good way to organise the data
    -- something like RDF Data Cube but more efficient than plain
    RDF

    maik: we come from the netcdf direction and just want a little
    bit of linking..

    <sam> At the moment, I'm mostly interested in the group's
    feedback on (1) the suitability of SPARQL vs. REST-ish API from
    web developers' perspective and (2) best format for delivering
    data (JSON-LD, RDF/JSON, etc.)

    maik: how do you want to use it

    <sam> (/end comments)

    dmitrybrizhinev: exactly how it would be used is not really
    decided yet
    ... assuming that something a bit like the Data Cube would be
    suitable

    billroberts: linked data and RDF in general offer the ability
    to link to anything, because everything gets an identifier
    ... every observation, every datapoint, has a URI, so you can
    say stuff about it
    ... other reason is you can combine data e.g. by SPARQL queries
    over one or several triple stores
    ... one aspect is HTTP, another is standardisation
    ... depends on who wants to use the data and the tools they are
    used to
    ... works very well for metadata and also provenance of data
    ... my first thought on seeing the RDF here is that the numbers
    may need a concise microformat...
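    Bill's point that every observation gets its own URI can be
    sketched as a small URI-minting helper. The base URI and the
    x/y/t axis pattern below are purely illustrative assumptions,
    not anything the group agreed:

```python
from urllib.parse import urlencode

# Hypothetical base URI for a coverage dataset -- illustrative only.
BASE = "http://example.org/dataset/landsat-ndvi"

def datapoint_uri(x: int, y: int, t: int) -> str:
    """Mint a stable URI for one grid cell, so RDF statements
    (provenance, quality flags, links) can be attached to it."""
    return f"{BASE}/point?{urlencode({'x': x, 'y': y, 't': t})}"

print(datapoint_uri(10, 20, 3))
# http://example.org/dataset/landsat-ndvi/point?x=10&y=20&t=3
```

    With identifiers like this, the combining-by-SPARQL benefit
    Bill mentions follows: any store that knows the URI pattern can
    say things about the same cell.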

    dmitrybrizhinev: that is what coveragejson does -- or could it
    even be a binary file

    billroberts: my first reaction is that then there is not a lot
    of point in using RDF; may be the worst of both worlds

    dmitrybrizhinev: do you have a suggestion? this has been
    discussed many times -- it's too much, the space explodes
    ... what if it was an RDF list, which JSON-LD can encode as a
    JSON array
    ... would this be the best of both worlds?

    billroberts: even people that like RDF hate RDF lists...
    ... maybe an approach like this, linking to data in another
    representation, would do...
    ... and link to a separate URL to return JSON in a file or
    ... could be more like CoverageJSON -- could add metadata in
    RDF while using JSON for the numbers
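    Bill's suggested split -- descriptive metadata as linked data,
    with the bulk numbers behind a separate URL as a plain JSON
    array -- might look roughly like this. The property names and
    URIs are made up for illustration:

```python
import json

# Metadata stays as linked data (JSON-LD-style), with an outgoing
# link to the numeric payload rather than embedding it as triples.
metadata = {
    "@id": "http://example.org/coverage/42",                 # hypothetical URI
    "@type": "Coverage",
    "rangeValues": "http://example.org/coverage/42/values",  # link out
}

# The numbers live behind that URL as a compact JSON array.
values = [271.3, 272.1, 270.8, 269.9]

# Round-trip check: the metadata survives JSON serialisation intact.
assert json.loads(json.dumps(metadata))["rangeValues"].endswith("/values")
```

    On Dmitry's RDF-list question: JSON-LD does let a context
    declare `"@container": "@list"` so that a plain JSON array is
    read as an RDF list; whether that avoids the pain Bill
    associates with RDF lists is the open question here.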

    dmitrybrizhinev: yes, cannot see RDF working for terabyte
    datasets

    Duo: MELODIES

    <Maik> important: people don't fetch terabytes anyway, always
    just small parts, it's all about the API

    billroberts: yes, CoverageJSON is a product of the MELODIES
    project

    Duo: looking at <tiles?> and client applications

    <billroberts> kerry: is coverageJSON metadata sufficient to
    describe a specific data point? or is it just metadata for a
    whole dataset or large part of a coverage?

    <dmitrybrizhinev> Yes, this was a suggestion before - that
    there can be a clear distinction between the way the data is
    stored and the way it is represented in response to a query

    <billroberts> kerry: if coveragejson provides a way to uniquely
    identify any element in the data, that should be sufficient

    <billroberts> kerry: could make a URL pattern that allows
    identifying an extract using that

    <billroberts> kerry: do we then have a sufficiently
    fine-grained way of identifying 'chunks'

criteria for assessing potential solution

    Maik: if you want to identify a single datapoint that would be
    a combination of a parameter plus a domain index (e.g. time)
    ... this could be put into a url

    <Maik> #x=1,y=2,t=2

    Maik: index-based subsetting, but sometimes people want
    coordinates instead...
    ... some APIs always use coordinates and not indices
    ... how do we assess what is good and what is not
    ... it should not include any type of query language, so even
    if you change the underlying technology the reference does not
    change
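    Maik's fragment sketch (`#x=1,y=2,t=2`) could be parsed as
    below; the syntax is only a strawman from the discussion, not a
    specified format, and the axis names are assumptions:

```python
def parse_extract_fragment(fragment: str) -> dict:
    """Parse a strawman fragment like '#x=1,y=2,t=2' into
    axis-name -> index pairs for index-based subsetting."""
    pairs = (p.split("=", 1) for p in fragment.lstrip("#").split(","))
    return {axis: int(index) for axis, index in pairs}

print(parse_extract_fragment("#x=1,y=2,t=2"))
# {'x': 1, 'y': 2, 't': 2}
```

    A coordinate-based variant (lat/long values rather than grid
    indices), which Maik notes some APIs prefer, would need a
    different, typed syntax.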

    billroberts: the API or method of identifying an extract should
    be independent of implementation
    ... but this is spatial data on the web, so needs to be HTTP
    and URIs
    ... needs to be "simple", whatever that is -- needs to be easy
    for some community of data users
    ... we are agreeing some kind of exchange language between
    people who have a lot of coverage data and some people on the
    web who need it

    Maik: e.g. like Leaflet, always lat/long, no other projections
    -- so even if a dataset is not stored that way it should be
    usable that way
    ... you should offer an API based on lat/longs, so you don't
    need to know how to do British National Grid

    billroberts: yes, probably WGS84
    ... the data manager should take charge of conversion between
    grid space and the user's CRS
    ... we want something that will persist for a while... needs to
    be not too closely tied to specific things
    ... we want something that is not too verbose because data is
    large and we need to transfer it in a finite amount of time


      [20] http://reading-escience-centre.github.io/covjson-playground/

    billroberts: browsers will run out of resources (time and
    space)

    Maik: having lots of examples and tools available, e.g.
    plugins,

    billroberts: ANU work is looking at data through an API plus
    something that is consuming it
    ... would like a document of what works well and what does not
    ... will develop a straw man set of criteria
    ... would like examples with real data, as well as simple ones
    ... reminder that you are encouraged to edit pages on the
    working group wiki to share information and documents for
    discussion
    ... e.g. strengths and weaknesses

    Duo: yes we can do that in two weeks

    <scribe> ACTION: Duo to write up what has been learned on the
    wiki within 2 weeks [recorded in

      [21] http://www.w3.org/2016/04/06-sdwcov-minutes.html#action01]

    <trackbot> Error finding 'Duo'. You can review and register
    nicknames at <[22]http://www.w3.org/2015/spatial/track/users>.

      [22] http://www.w3.org/2015/spatial/track/users

    <billroberts> trackbot, end meeting

Summary of Action Items

    [NEW] ACTION: Duo to write up what has been learned on the wiki
    within 2 weeks [recorded in

      [23] http://www.w3.org/2016/04/06-sdwcov-minutes.html#action01

Summary of Resolutions

     1. [24]approve minutes
     2. [25]to encourage the use of "extract" as the main word in
        most places (and mention subsetting as used for same thing
        when introducing)

    [End of minutes]
Received on Wednesday, 6 April 2016 14:27:53 UTC
