Architecture of mapping CSV to other formats

Hi,

On the call today we discussed briefly the general architecture of mapping from CSV to other formats (eg RDF, JSON, XML, SQL), specifically where to draw the lines between what we specify and what is specified elsewhere.

To make this clear with an XML-based example, suppose that we have a CSV file like:

GID,On Street,Species,Trim Cycle,Inventory Date
1,ADDISON AV,Celtis australis,Large Tree Routine Prune,10/18/2010
2,EMERSON ST,Liquidambar styraciflua,Large Tree Routine Prune,6/2/2010
3,EMERSON ST,Liquidambar styraciflua,Large Tree Routine Prune,6/2/2010 

This will have a basic mapping into XML which might look like:

<data>
  <row>
    <GID>1</GID>
    <On_Street>ADDISON AV</On_Street>
    <Species>Celtis australis</Species>
    <Trim_Cycle>Large Tree Routine Prune</Trim_Cycle>
    <Inventory_Date>10/18/2010</Inventory_Date>
  </row>
  ...
</data>

But the XML that someone actually wants the CSV to map into might be different:

<trees>
  <tree id="1" date="2010-10-18">
    <street>ADDISON AV</street>
    <species>Celtis australis</species>
    <trim>Large Tree Routine Prune</trim>
  </tree>
  ...
</trees>

There are (at least) four different ways of architecting this:

1. We just specify the default mapping; people who want a more complex mapping can plug that into their own toolchains. The disadvantage of this is that it makes it harder for the original publisher to specify canonical mappings from CSV into other formats. It also requires people to know how to use a larger toolchain (but I think they probably have that anyway).

2. We enable people to point from the metadata about the CSV file to an ‘executable’ file that defines the mapping (eg an XSLT stylesheet, a SPARQL CONSTRUCT query, a Turtle template or a JavaScript module) and define how that gets used to perform the mapping; see the first sketch after this list. This gives great flexibility, but means that everyone needs to hand-craft common patterns of mapping, such as the conversion of numeric or date formats into numbers or dates. It also means that processors have to support whatever executable syntax is defined for the different mappings.

3. We provide specific declarative metadata vocabulary fields that enable configuration of the mapping; see the second sketch after this list. For example, each column might have an associated ‘xml-name’ and ‘xml-type’ (element or attribute), as well as (more usefully across all mappings) a ‘datatype’ and ‘date-format’. This gives a fair amount of control within a single file.

4. We have some combination of #2 & #3 whereby some things are configurable declaratively in the metadata file, but there’s an “escape hatch” of referencing out to an executable file that can override the declarative configuration. The question is then where the lines should be drawn: how much should be in the metadata vocabulary (#3) and how much left to format-specific executable configuration (#2)?
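
To make #2 concrete, the ‘executable’ file being pointed to might be an XSLT 1.0 stylesheet along these lines. The sketch assumes the processor applies the basic mapping first and runs the stylesheet over the result; that assumption, and the string-splitting treatment of the date format, are purely illustrative:

<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <!-- Wrap the rows in the hand-designed root element -->
  <xsl:template match="data">
    <trees>
      <xsl:apply-templates select="row"/>
    </trees>
  </xsl:template>

  <!-- Map each row, pulling GID and the date up into attributes -->
  <xsl:template match="row">
    <!-- Split M/D/YYYY on the slashes, then zero-pad to get YYYY-MM-DD -->
    <xsl:variable name="m" select="substring-before(Inventory_Date, '/')"/>
    <xsl:variable name="d"
      select="substring-before(substring-after(Inventory_Date, '/'), '/')"/>
    <xsl:variable name="y"
      select="substring-after(substring-after(Inventory_Date, '/'), '/')"/>
    <tree id="{GID}"
          date="{$y}-{format-number($m, '00')}-{format-number($d, '00')}">
      <street><xsl:value-of select="On_Street"/></street>
      <species><xsl:value-of select="Species"/></species>
      <trim><xsl:value-of select="Trim_Cycle"/></trim>
    </tree>
  </xsl:template>

</xsl:stylesheet>

Note how even this toy example has to hand-roll the date conversion, which is exactly the kind of common pattern that #3 could capture declaratively.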
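
And to make #3 and #4 concrete: we haven’t settled on a syntax for the metadata file, so the rendering below (as XML, with made-up element and field names) is purely illustrative. The ‘mapping’ element at the end is the #4-style escape hatch; remove it and you have pure #3:

<csv-metadata csv="trees.csv" xml-root="trees" xml-row="tree">
  <column name="GID"            xml-name="id"      xml-type="attribute" datatype="integer"/>
  <column name="On Street"      xml-name="street"  xml-type="element"/>
  <column name="Species"        xml-name="species" xml-type="element"/>
  <column name="Trim Cycle"     xml-name="trim"    xml-type="element"/>
  <column name="Inventory Date" xml-name="date"    xml-type="attribute"
          datatype="date" date-format="M/D/YYYY"/>
  <!-- #4’s escape hatch: an executable mapping that can override the above -->
  <mapping format="xml" href="trees.xsl"/>
</csv-metadata>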

My inclination is to aim for #4. I also think we should try to reuse existing mechanisms for the mappings as much as possible, and try to focus initially on metadata vocabulary fields that are useful across use cases (ie not just for mapping to different formats but also for validation and documentation of CSVs).

What do other people think?

Jeni
--  
Jeni Tennison
http://www.jenitennison.com/
