
Re: Architecture of mapping CSV to other formats

From: Ivan Herman <ivan@w3.org>
Date: Thu, 24 Apr 2014 10:02:02 +0200
Cc: W3C CSV on the Web Working Group <public-csv-wg@w3.org>
Message-Id: <73A4FA13-EEF0-4DCB-959D-62708500AC2C@w3.org>
To: Jeni Tennison <jeni@jenitennison.com>
Hi Jeni,

thanks for starting this. It is good to have this discussion; I am a little bit concerned that we are running ahead with one particular mapping (RDF in this case) without the larger picture.

My inclination is also #4; we should at least try to go there, i.e., define a minimal level of declarative syntax for the common cases, and then let callbacks (or something similar) take care of the rest.
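A minimal sketch in Python of what #4 could look like: declared per-column metadata handles the common conversions, and a user-supplied callback is the escape hatch. All the names here (the metadata keys, the callback hook) are invented for illustration, not a proposed design:

```python
# Hypothetical sketch of option #4: declarative metadata for the common
# cases, plus a callback "escape hatch" for everything else.
import csv
import io
from datetime import datetime

# Declarative part: per-column metadata covering the common cases.
# (Key names 'datatype' and 'date-format' are illustrative only.)
metadata = {
    "GID": {"datatype": "integer"},
    "Inventory Date": {"datatype": "date", "date-format": "%m/%d/%Y"},
}

def default_convert(column, value):
    """Apply the declared datatype; fall through to a plain string."""
    meta = metadata.get(column, {})
    if meta.get("datatype") == "integer":
        return int(value)
    if meta.get("datatype") == "date":
        return datetime.strptime(value, meta["date-format"]).date().isoformat()
    return value

def convert(column, value, callback=None):
    """Escape hatch: a callback may override any conversion it recognises."""
    if callback is not None:
        result = callback(column, value)
        if result is not None:
            return result
    return default_convert(column, value)

data = "GID,Inventory Date\n1,10/18/2010\n"
rows = [
    {col: convert(col, val) for col, val in row.items()}
    for row in csv.DictReader(io.StringIO(data))
]
print(rows)  # [{'GID': 1, 'Inventory Date': '2010-10-18'}]
```

The point of the split is that a processor can implement the declarative part once, while the callback only needs to be written for the uncommon cases.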

I think use cases are the key. Jeremy & Co have done a tremendous job on the use cases so far, and this has helped to define the CSV+ stuff. Similarly, we should try to collect real use cases that employ transformations to the various formats, see which basic mechanisms need to be covered, and stop there. I am tempted to say that no feature should be included without a real-life (and not 'academic') use case out there, although I may be over the top with this...

Ivan



On 23 Apr 2014, at 21:13, Jeni Tennison <jeni@jenitennison.com> wrote:

> Hi,
> 
> On the call today we discussed briefly the general architecture of mapping from CSV to other formats (eg RDF, JSON, XML, SQL), specifically where to draw the lines between what we specify and what is specified elsewhere.
> 
> To make this clear with an XML-based example, suppose that we have a CSV file like:
> 
> GID,On Street,Species,Trim Cycle,Inventory Date
> 1,ADDISON AV,Celtis australis,Large Tree Routine Prune,10/18/2010
> 2,EMERSON ST,Liquidambar styraciflua,Large Tree Routine Prune,6/2/2010
> 3,EMERSON ST,Liquidambar styraciflua,Large Tree Routine Prune,6/2/2010 
> 
> This will have a basic mapping into XML which might look like:
> 
> <data>
>   <row>
>     <GID>1</GID>
>     <On_Street>ADDISON AV</On_Street>
>     <Species>Celtis australis</Species>
>     <Trim_Cycle>Large Tree Routine Prune</Trim_Cycle>
>     <Inventory_Date>10/18/2010</Inventory_Date>
>   </row>
>   ...
> </data>
> 
> But the XML that someone actually wants the CSV to map into might be different:
> 
> <trees>
>   <tree id="1" date="2010-10-18">
>     <street>ADDISON AV</street>
>     <species>Celtis australis</species>
>     <trim>Large Tree Routine Prune</trim>
>   </tree>
>   ...
> </trees>
> 
> There are (at least) four different ways of architecting this:
> 
> 1. We just specify the default mapping; people who want a more complex mapping can plug that into their own toolchains. The disadvantage of this is that it makes it harder for the original publisher to specify canonical mappings from CSV into other formats. It also requires people to know how to use a larger toolchain (but I think they probably have that anyway).
> 
> 2. We enable people to point from the metadata about the CSV file to an ‘executable’ file that defines the mapping (eg to an XSLT stylesheet or a SPARQL CONSTRUCT query or a Turtle template or a JavaScript module) and define how that gets used to perform the mapping. This gives great flexibility but means that everyone needs to hand-craft common patterns of mapping, such as converting numeric or date formats into numbers or dates. It also means that processors have to support whatever executable syntax is defined for the different mappings.
> 
> 3. We provide specific declarative metadata vocabulary fields that enable configuration of the mapping. For example, each column might have an associated ‘xml-name’ and ‘xml-type’ (element or attribute), as well as (more usefully across all mappings) ‘datatype’ and ‘date-format’. This gives a fair amount of control within a single file.
> 
> 4. We have some combination of #2 & #3 whereby some things are configurable declaratively in the metadata file, but there’s an “escape hatch” of referencing out to an executable file that can override the defaults. The question is then where the lines should be drawn: how much should be in the metadata vocabulary (#3) and how much left to specific configuration (#2).
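To make #3 concrete, here is a hypothetical Python sketch in which per-column fields (the ‘xml-name’, ‘xml-type’ and ‘date-format’ names are invented for illustration, not a proposed vocabulary) steer the mapping from the tree CSV into the target XML above:

```python
# Hypothetical sketch of option #3: purely declarative per-column
# metadata driving a CSV-to-XML mapping.
import csv
import io
import xml.etree.ElementTree as ET
from datetime import datetime

# Illustrative field names only -- not a proposed metadata vocabulary.
columns = {
    "GID":            {"xml-name": "id",      "xml-type": "attribute"},
    "On Street":      {"xml-name": "street",  "xml-type": "element"},
    "Species":        {"xml-name": "species", "xml-type": "element"},
    "Trim Cycle":     {"xml-name": "trim",    "xml-type": "element"},
    "Inventory Date": {"xml-name": "date",    "xml-type": "attribute",
                       "date-format": "%m/%d/%Y"},
}

data = ("GID,On Street,Species,Trim Cycle,Inventory Date\n"
        "1,ADDISON AV,Celtis australis,Large Tree Routine Prune,10/18/2010\n")

root = ET.Element("trees")
for row in csv.DictReader(io.StringIO(data)):
    tree = ET.SubElement(root, "tree")
    for col, value in row.items():
        meta = columns[col]
        if "date-format" in meta:  # normalise declared date formats
            value = datetime.strptime(value, meta["date-format"]).date().isoformat()
        if meta["xml-type"] == "attribute":
            tree.set(meta["xml-name"], value)
        else:
            ET.SubElement(tree, meta["xml-name"]).text = value

xml_out = ET.tostring(root, encoding="unicode")
print(xml_out)
```

A handful of declarative fields like these covers renaming, element-vs-attribute placement and date normalisation; anything beyond that is where the #2-style escape hatch would come in.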
> 
> My inclination is to aim for #4. I also think we should try to reuse existing mechanisms for the mapping as much as possible, and try to focus initially on metadata vocabulary fields that are useful across use cases (ie not just mapping to different formats but also in validation and documentation of CSVs).
> 
> What do other people think?
> 
> Jeni
> --  
> Jeni Tennison
> http://www.jenitennison.com/
> 


----
Ivan Herman, W3C 
Digital Publishing Activity Lead
Home: http://www.w3.org/People/Ivan/
mobile: +31-641044153
GPG: 0x343F1A3D
FOAF: http://www.ivan-herman.net/foaf






Received on Thursday, 24 April 2014 08:02:33 UTC

This archive was generated by hypermail 2.3.1 : Tuesday, 6 January 2015 20:21:39 UTC