- From: Luca Matteis <lmatteis@gmail.com>
- Date: Tue, 28 May 2013 10:18:11 +0200
- To: Linked Data community <public-lod@w3.org>
- Message-ID: <CALp38EMvNP=7w8_3=a2uWBtkVT9mQf7peXHP+XSK=+NGt-07+g@mail.gmail.com>
Here's my scenario: I have several different datasets. Most are in MySQL databases, some in PostgreSQL, others in MS Access, and many in CSV. Each of these datasets is maintained by its own group of people. My end goal is to have all these datasets published as 5-star Linked Open Data, but I'm torn between two solutions:

1) Give a generic wrapper tool to each of these groups that would convert their datasets to RDF and let them publish the data as LOD automatically. The tool would allow each group to publish LOD on their own, using their own server. (Does such a generic tool even exist? Can it even be built?)

2) Scrape these datasets myself. They are sometimes published on the Web as paginated HTML tables, or as dumps on their servers, for example a .CSV dump of an entire database. I would then aggregate all these datasets and publish them as Linked Data.

What are the pros and cons of each approach? Any other ideas? Thanks!
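To make option 2 concrete, here is a minimal sketch of turning a CSV dump into RDF (serialized as N-Triples) using only the Python standard library. The base URI, the `id` column, and the vocabulary path are hypothetical placeholders; a real conversion would map columns to established vocabulary terms and type the literals.

```python
import csv
import io

# Hypothetical base URI for minted subject and predicate URIs.
BASE = "http://example.org/dataset/"

def csv_to_ntriples(csv_text, id_column):
    """Convert a CSV dump into N-Triples: one triple per non-empty cell,
    using the id_column value to mint the subject URI for each row."""
    triples = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        subject = f"<{BASE}{row[id_column]}>"
        for column, value in row.items():
            if column == id_column or not value:
                continue
            predicate = f"<{BASE}vocab/{column}>"
            # Escape backslashes and quotes per the N-Triples grammar.
            literal = value.replace("\\", "\\\\").replace('"', '\\"')
            triples.append(f'{subject} {predicate} "{literal}" .')
    return "\n".join(triples)

# Example dump (made up for illustration).
dump = """id,name,city
1,Alice,Rome
2,Bob,Berlin
"""
print(csv_to_ntriples(dump, "id"))
```

A generic wrapper along the lines of option 1 would essentially be this loop parameterized by a column-to-vocabulary mapping, which is what makes building one "generic" tool hard: the mapping is different for every dataset.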
Received on Tuesday, 28 May 2013 08:18:43 UTC