Re: Scoping Question

Reading the various contributions to the scoping discussion, I'm not sure there is much difference in practice between the two 'camps'.

In approach 1, we're talking about encouraging users to publish their CSV files in a particular style, following some set of best practices that the group will recommend, perhaps using a subset of all the CSV approaches seen in the wild, then adding metadata of some form to explain structure, semantics and so on.

In approach 2, we're talking about creating a new CSV-like format which is backward compatible with existing CSV tools, and which might end up looking like a dialect of current CSV plus a way to specify metadata.

In Jeni's initial framing of the question, she seems to associate with 'Approach 1' the objective of handling all kinds of CSV as seen in the wild, which creates some challenges in making the metadata format flexible enough.  None of the responses I've seen suggests that we should support *all* kinds of CSV file.

A solution that uses existing CSV features in a particular way, plus a metadata format of some sort, seems close to a consensus, if I interpret everyone's comments/intentions correctly.
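To make that consensus concrete, here is a minimal sketch of what "existing CSV plus a metadata format" could look like in practice. The shape of the metadata document and the `read_with_metadata` helper are my own illustration, not anything the group has proposed:

```python
import csv
import io
import json

# A plain CSV file published "in a particular style": header row,
# comma separator, UTF-8. Any existing CSV tool can read this as-is.
csv_text = """country,population
GB,67000000
FR,68000000
"""

# A hypothetical sidecar metadata document describing structure and
# semantics. The JSON shape shown here is purely illustrative.
metadata = json.loads("""{
  "columns": [
    {"name": "country", "datatype": "string"},
    {"name": "population", "datatype": "integer"}
  ]
}""")

CASTS = {"string": str, "integer": int}

def read_with_metadata(text, meta):
    """Parse CSV text, casting each column as the metadata directs."""
    reader = csv.DictReader(io.StringIO(text))
    casts = {c["name"]: CASTS[c["datatype"]] for c in meta["columns"]}
    return [{k: casts[k](v) for k, v in row.items()} for row in reader]

rows = read_with_metadata(csv_text, metadata)
```

The point of the sketch is that the CSV itself stays untouched and tool-compatible; all the interpretation ("population is an integer") lives in the separate metadata document.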

Regards

Bill


On 21 Feb 2014, at 17:31, Jeni Tennison <jeni@jenitennison.com> wrote:

> Hi,
> 
> [Only just got net connection to enable me to send this.]
> 
> A scoping question occurred to me during the call on Wednesday.
> 
> There seem to be two approaches that we should explicitly choose between.
> 
> APPROACH 1: Work with what’s there
> 
> We are trying to create a description / metadata format that would enable us to layer processing semantics over the top of all the various forms of tabular data that people publish so that it can be interpreted in a standard way.
> 
> We need to do a survey of what tabular data exists in its various formats so that we know what the description / metadata format needs to describe. When we find data that uses different separators, pads out the actual data using empty rows and columns, incorporates two or more tables inside a single CSV file, or uses Excel spreadsheets or DSPL packages or SDF packages or NetCDF or the various other formats that people have invented, we need to keep note of these so that whatever solution and processors we create will work with these files.
> 
> APPROACH 2: Invent something new
> 
> We are trying to create a new format that would enable publishers to publish tabular data in a more regular way while preserving the same meaning, to make it easier for consumers of that data.
> 
> We need to do a survey of what tabular data exists so that we can see what publishers are trying to say with their data, but the format that they are currently publishing that data in is irrelevant because we are going to invent a new format. When we find data that includes metadata about tables and cells, groups tables or has cross references between them, or has columns whose values are of different types, we need to keep note of these so that we ensure the format we create can capture that meaning.
> 
> We also need to understand existing data so that we have a good backwards compatibility story: it would be useful if the format we invent can be used with existing tools, and if existing data didn’t have to be changed very much to put it into the new format. But there will certainly be files that do have to be changed, and sometimes substantially.
> 
> 
> My focus is definitely on the second approach, as I think taking the first approach is an endless and impossible task. But some recent mails and discussions have made me think that some people are taking the first approach. Any thoughts?
> 
> Cheers,
> 
> Jeni
> --  
> Jeni Tennison
> http://www.jenitennison.com/
> 

Received on Sunday, 23 February 2014 15:14:14 UTC