QB Data Cube Dicing. Was: Coverage subgroup update

Rob, Jon, Simon, Josh, Bill and colleagues,

Apologies for spinning off another thread, but this seems a good time and place. Kick me well into touch if you wish.

I have been interested in sub-setting data cubes as a potentially scalable, sustainable approach to supporting large numbers of users/clients on lightweight devices. Think of a generalisation of map tiles to:

a) Point clouds, vectors, 3D geometries;

b) n-dimensional map tiles, including non-spatial and non-temporal dimensions;

c) Pokemon-Go-Cov;

d) The WindAR proof of concept from me, Mike Reynolds and Christine Perey a couple of years ago;

e) The RDF QB model ‘diced’ as well as ‘sliced’;

f) Etc.

I thought that the QB model would have enough generality, but was disappointed to find slices only (though pleased at its simplicity, rigour and generality). There was a move in W3C to add more granularity, but I understand that it was driven by the statistical-spreadsheet ISO people in the direction of pivot tables and temporal summaries, and quite rightly failed.

I would like to increase the generality in the direction of dicing, as I said. For example, having sliced an n-D cube across a dimension to obtain an (n-1)-D cube, the result could still be too big, so tile/pre-format/dice it once on the server side. Map tile sets are the traditional example.
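To make the dicing idea concrete, here is a minimal, purely hypothetical Python sketch (none of it comes from QB or any existing spec): it generalises 2-D map-tile addressing to n dimensions by fixing a tile size per dimension, so every cell of a (possibly already sliced) cube maps to exactly one pre-computed tile.

```python
# Hypothetical sketch only: generalise 2-D map-tile addressing to n
# dimensions by fixing a tile size per dimension and mapping each n-D
# cell index to the tile that contains it.

def tile_key(cell_index, tile_shape):
    """Map an n-D cell index to the index of its enclosing tile."""
    return tuple(c // t for c, t in zip(cell_index, tile_shape))

def tile_extent(tile_index, tile_shape):
    """Cell-index ranges (start, stop) covered by a given tile."""
    return tuple((i * t, (i + 1) * t) for i, t in zip(tile_index, tile_shape))

# The classic 2-D map tile: 256 x 256 cells per tile.
assert tile_key((300, 515), (256, 256)) == (1, 2)

# A 4-D example (x, y, z, time): dicing a cube that is still too big.
assert tile_key((300, 515, 7, 1000), (256, 256, 8, 24)) == (1, 2, 0, 41)
assert tile_extent((1, 2), (256, 256)) == ((256, 512), (512, 768))
```

The point of fixing the tile shape server-side is that tile keys become stable, cacheable identifiers, which is what makes the map-tile model scale to many lightweight clients.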

I think and hope we should be able to rattle off a reasonably good extension of QB as a general (non-spatial) concept, and then produce some convincing use cases or examples, including spatial and temporal ones, to make it worthwhile.

Roger Brackin and I failed miserably to get much traction with an OGC SWG last year, but I now see many more implementations coercing map tiles, in both 2-D and 3-D, for rasters, point clouds, vectors, geometry and more, to disseminate or give access to big data. Of course, many Met Ocean use cases are for n-D gridded data, where n is 3, 4, 5, 6, …

So what do you think?

Chris

From: Jon Blower [mailto:j.d.blower@reading.ac.uk]
Sent: Wednesday, July 20, 2016 12:50 AM
To: Simon.Cox@csiro.au; bill@swirrl.com; public-sdw-wg@w3.org
Cc: m.riechert@reading.ac.uk
Subject: Re: Coverage subgroup update

Hi Simon,


>  QB provides a data model that allows you to express sub-setting operations in SPARQL. That looks like a win to me. I.e. think of QB as an API, not a payload.

I’m not an expert in QB by any means, but I understand that the subsetting in QB essentially means taking a Slice (in their terminology), which is a rather limited kind of subset. I didn’t see a way of taking arbitrary subsets (e.g. by geographic coordinates) in the way that WCS could. Can you expand on this, perhaps giving some examples of different subset types that can be expressed in SPARQL using QB?
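To make the distinction concrete, here is a small, purely illustrative Python sketch (the dataset, dimension names and values are all invented; this is not QB's actual RDF machinery): a QB-style Slice fixes one dimension to a single value, while the arbitrary subsetting described above, e.g. a WCS-style bounding box, filters by ranges over several dimensions at once.

```python
# Illustrative only: toy "observations" standing in for qb:Observation
# resources; dimension names and values are invented for this example.
observations = [
    {"lat": lat, "lon": lon, "time": t, "value": lat + lon + t}
    for lat in (50.0, 51.0, 52.0)
    for lon in (-1.0, 0.0, 1.0)
    for t in (0, 1)
]

def qb_slice(obs, dimension, value):
    """QB-style slice: fix one dimension to a single value."""
    return [o for o in obs if o[dimension] == value]

def range_subset(obs, **ranges):
    """WCS-style subset: keep observations inside per-dimension ranges."""
    return [o for o in obs
            if all(lo <= o[d] <= hi for d, (lo, hi) in ranges.items())]

# A slice at time=0 fixes one dimension but keeps everything else.
assert len(qb_slice(observations, "time", 0)) == 9

# An arbitrary subset by bounding box plus time window.
box = range_subset(observations, lat=(50.5, 52.0), lon=(-0.5, 1.0), time=(0, 0))
assert len(box) == 4
```

In SPARQL terms, the range version would correspond to FILTER expressions over the dimension properties rather than matching a pre-declared qb:Slice resource, which is why it falls outside what the QB vocabulary itself names.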

Cheers,
Jon

From: "Simon.Cox@csiro.au" <Simon.Cox@csiro.au>
Date: Wednesday, 20 July 2016 00:02
To: "bill@swirrl.com" <bill@swirrl.com>, "public-sdw-wg@w3.org" <public-sdw-wg@w3.org>
Cc: Maik Riechert <m.riechert@reading.ac.uk>, Jon Blower <sgs02jdb@reading.ac.uk>
Subject: RE: Coverage subgroup update


>  The main potential drawback of the RDF Data Cube approach in this context is its verbosity for large coverages.

For sure. You wouldn’t want to deliver large coverages serialized as RDF.

*But* - QB provides a data model that allows you to express sub-setting operations in SPARQL. That looks like a win to me. I.e. think of QB as an API, not a payload.

From: Bill Roberts [mailto:bill@swirrl.com]
Sent: Wednesday, 20 July 2016 6:42 AM
To: public-sdw-wg@w3.org
Cc: Maik Riechert <m.riechert@reading.ac.uk>; Jon Blower <j.d.blower@reading.ac.uk>
Subject: Coverage subgroup update

Hi all

Sorry for being a bit quiet on this over the last month or so - a combination of holiday and other commitments.

However, some work on the topic has been continuing.  Here is an update for discussion in the SDW plenary call tomorrow.

In particular I had a meeting in Reading on 5 July with Jon Blower and fellow-editor Maik Riechert.

During that meeting we came up with a proposed approach that I would like to put to the group.  The essence of this is that we take Maik and Jon's CoverageJSON specification and put it forward as a potential W3C/OGC recommendation.  See https://github.com/covjson/specification/blob/master/spec.md for the current status of the CoverageJSON specification.

That spec is still a work in progress, and we identified a couple of areas where we know we'll want to add to it, notably a URI convention for identifying an extract of a gridded coverage, including the ability to identify a single point within a coverage. (Some initial discussion of this issue is at https://github.com/covjson/specification/issues/66.)
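No such URI convention has been agreed yet (that is the open issue linked above), so the following Python sketch is entirely hypothetical: the "subset" query-parameter style, the axis names and the range syntax are all invented, purely to show the kind of thing such a convention might look like.

```python
# Hypothetical sketch only: the URI convention for coverage extracts is
# still an open issue; the "subset" parameter style here is invented.
from urllib.parse import urlencode

def extract_uri(coverage_uri, **axis_ranges):
    """Build a URI identifying an extract of a gridded coverage.

    axis_ranges maps an axis name to a single value (a point) or a
    (low, high) pair (a range); names like 'x' and 'y' are assumptions.
    """
    parts = []
    for axis, sel in axis_ranges.items():
        if isinstance(sel, tuple):
            parts.append(f"{axis}({sel[0]}:{sel[1]})")
        else:
            parts.append(f"{axis}({sel})")
    return coverage_uri + "?" + urlencode({"subset": ",".join(parts)})

# A box extract, and the single-point case mentioned above.
uri = extract_uri("http://example.org/coverages/sst", x=(-10, 2), y=(50, 60))
single_point = extract_uri("http://example.org/coverages/sst", x=-1.5, y=51.5)
```

Whatever syntax is eventually chosen, the key property is that the extract URI is derivable from the coverage URI plus the subset description, so clients can construct it without a prior round trip.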

Maik and Jon understandably feel that it is for others to judge whether their work is an appropriate solution to the requirements of the SDW group.  My opinion, from our discussions and an initial review of our requirements, is that it is indeed a good solution, and I hope I can be reasonably objective about that.

My intention is to work through the requirements from the UCR again and systematically test and cross-reference them against parts of the CovJSON spec. I've set up a wiki page for that: https://www.w3.org/2015/spatial/wiki/Cross_reference_of_UCR_to_CovJSON_spec  That should give us a focus for identifying and discussing issues around the details of the spec, and provide evidence of the suitability of the approach (or not, as the case may be).

There has also been substantial interest and work within the coverage sub-group on how to apply the RDF Data Cube vocabulary to coverage data, and some experiments on possible adaptations to it.  The main potential drawback of the RDF Data Cube approach in this context is its verbosity for large coverages.  My feeling is that the standard RDF Data Cube approach could be a good option in the subset of applications where the total data volume is not excessive - creating a qb:Observation and associated triples for each data point in a coverage.  I'd like to see us prepare a note of some sort to explain how that would work.  I also think it would be possible and desirable to document a transformation algorithm or process for converting CoverageJSON (with its 'abbreviated' approach to defining the domain of a coverage) to an RDF Data Cube representation.
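A sketch of one step of such a transformation, assuming a CoverageJSON domain axis may be given either as explicit "values" or in the abbreviated regular form with start/stop/num: expand each axis to explicit values, then pair every grid point with its range value as one qb:Observation-like record. The helper names and the flat dict representation are mine, not from either spec.

```python
# Sketch under stated assumptions: expand abbreviated CoverageJSON-style
# axes ({"start": ..., "stop": ..., "num": ...}) to explicit values, then
# emit one observation-like record per grid point. Names are illustrative.
from itertools import product

def expand_axis(axis):
    """Expand an abbreviated regular axis to an explicit list of values."""
    if "values" in axis:
        return axis["values"]
    n = axis["num"]
    if n == 1:
        return [axis["start"]]
    step = (axis["stop"] - axis["start"]) / (n - 1)
    return [axis["start"] + i * step for i in range(n)]

def to_observations(domain_axes, values):
    """Zip the expanded n-D grid with range values, one record per point."""
    names = list(domain_axes)
    grids = [expand_axis(domain_axes[name]) for name in names]
    return [dict(zip(names, coords), value=v)
            for coords, v in zip(product(*grids), values)]

axes = {"x": {"start": 0.0, "stop": 2.0, "num": 3},
        "y": {"values": [50.0, 51.0]}}
obs = to_observations(axes, list(range(6)))
assert len(obs) == 6
assert obs[0] == {"x": 0.0, "y": 50.0, "value": 0}
```

The expansion also makes the verbosity concern quantitative: a modest 1000 x 1000 x 100 grid would yield 10^8 such records, each becoming a qb:Observation with several triples, which is why this route only suits the smaller-volume subset of applications.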

So the proposed outputs of the group would then be:

1) the specification of the CoverageJSON format, to become a W3C Recommendation (and OGC equivalent)
2) a Primer document to help people understand how to get started with it.  (Noting that Maik has already prepared some learning material at https://covjson.gitbooks.io/cookbook/content/)
3) contributions to the SDW BP relating to coverage data, to explain how CovJSON would be applied in relevant applications
4) a note on how RDF Data Cube can be used for coverages and a process for converting CovJSON to RDF Data Cube

Naturally I expect to discuss this proposal in plenary and coverage sub-group calls!

Best regards

Bill

Received on Wednesday, 20 July 2016 17:31:43 UTC