- From: Barry Norton <barrynorton@gmail.com>
- Date: Fri, 6 Sep 2013 20:33:23 +0100
- To: Kuno Woudt <kuno@frob.nl>
- Cc: music-ontology-specification-group@googlegroups.com, Linking Open Data <public-lod@w3.org>
- Message-ID: <CAMSTHC_99pb6uSdHhR4qHpsPvKwrVnNeqy7hBm=oROKTjwYYiQ@mail.gmail.com>
Ah, apologies - I hadn't spotted that there was an update to the VM in August; I was already preparing for the Summer School here.

While I agree that having separate RDF resources might be the best solution, I'm not convinced that it would be so easy to turn the current JSON API into JSON-LD, and the important thing would be to have redirects when content-negotiating from the existing (non-decorated) document URIs - is that (now) possible? If it is, wouldn't it be easier to return the RDF directly in response to an "Accept: application/rdf+xml" request rather than 30x-ing it?

There are also a bunch of manipulations in the R2RML mappings that go beyond adding a context - for example, rewriting Wikipedia URIs into DBpedia ones. That said, the RDF and the API structure are closer now that there's event/geo information in first-class release events.

Is another solution to serve pure RDF resource representations from the R2RML mappings directly over the database, or at least over a replica of it? It would be nice to have that hand-in-hand with RDF dumps.

Barry

On Fri, Sep 6, 2013 at 8:16 PM, Kuno Woudt <kuno@frob.nl> wrote:

> Hello,
>
> On 09/05/2013 02:37 PM, Barry Norton wrote:
>
>> By the way, the easiest way to work with the server is to use the VM, but this hadn't been updated since last year. I have an up-to-date version for the Summer School and I'll bring it into the Museum next week.
>
> The current virtual machine image is from August this year, which is fairly recent. For those interested, start here [1].
>
> [1] http://wiki.musicbrainz.org/Server_Setup
>
> And now that I'm responding anyway...
>
> As someone who has worked on the RDFa in MusicBrainz, I do think that solution is untenable. The current implementation is too brittle, and the website changes so frequently that making sure the RDFa is still correct after each change seems like a considerable effort.
>
> There are also a few things which I found difficult to express in RDFa, but perhaps that is just because I'm not that familiar with it. For example:
>
> On the release page where a tracklist is shown, there is a single table which contains the entire tracklist of the release (even if it spans multiple discs).
>
> At some point the discs were wrapped in a <div typeof="mo:Record" about="(tracklist/medium CURIE)">, which was invalid HTML, so that <div> had to be removed. And we want to keep these rows in a single table and in a single <tbody> so that their columns line up even when we don't specify widths for them, etc.
>
> It is the clash of trying to get the visual layout of the page correct AND the structure of the RDFa data embedded in it which makes working on this hard (at least it was for me :).
>
> So I think a better long-term solution would be to have a separate codebase/project which takes the current MusicBrainz web service output and turns it into RDF. I imagine the easiest way to do that would be to take the existing JSON web service, write a @context for it and some transformations of the data, and use existing RDF libraries to do the JSON-LD to RDF/XML conversion.
>
> We could make that available under URLs like https://musicbrainz.org/artist/45a663b5-b1cb-4a91-bff6-2bef7bbfdd76.rdf.
>
> -- kuno / warp.
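A minimal sketch of the content-negotiation question raised above - answering an "Accept: application/rdf+xml" request directly versus 303-redirecting to a separate .rdf document. It uses a hypothetical Flask front end with made-up render_artist_rdf/render_artist_html helpers; it is not MusicBrainz's actual server code.

```python
# Hypothetical Flask front end sketching the two options discussed above:
# (A) serve RDF/XML from the same URI when it is the best match for the
# Accept header, or (B) 303-redirect to a separate .rdf document URI.
# render_artist_rdf / render_artist_html are made-up placeholder helpers.
from flask import Flask, Response, redirect, request

app = Flask(__name__)


def render_artist_rdf(mbid):
    # Placeholder: a real implementation would build this from the database
    # or from the web-service output.
    return ('<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"'
            ' xmlns:mo="http://purl.org/ontology/mo/">'
            '<mo:MusicArtist rdf:about="https://musicbrainz.org/artist/%s"/>'
            '</rdf:RDF>' % mbid)


def render_artist_html(mbid):
    # Placeholder for the normal HTML page.
    return "<html><body>Artist %s</body></html>" % mbid


@app.route("/artist/<mbid>")
def artist(mbid):
    best = request.accept_mimetypes.best_match(
        ["text/html", "application/rdf+xml"], default="text/html")
    if best == "application/rdf+xml":
        # Option A: return the RDF representation directly.
        return Response(render_artist_rdf(mbid),
                        mimetype="application/rdf+xml")
        # Option B (instead): send the client to a distinct RDF document.
        # return redirect("/artist/" + mbid + ".rdf", code=303)
    return render_artist_html(mbid)
```

The 303 route keeps the usual Linked Data distinction between the resource and its RDF document; answering directly avoids the extra round trip Barry mentions.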
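The Wikipedia-to-DBpedia rewriting mentioned above is, at its core, URI string manipulation. A rough illustration follows (English Wikipedia only, ignoring other language editions, percent-encoding details and redirect handling - not the actual R2RML mapping, which does this at the mapping layer):

```python
# Rough illustration of the kind of rewriting mentioned above: turning an
# English Wikipedia page URI into the corresponding DBpedia resource URI.
# The real R2RML mappings would also have to care about other language
# editions, percent-encoding and Wikipedia redirects.
WIKIPEDIA_PREFIX = "http://en.wikipedia.org/wiki/"
DBPEDIA_PREFIX = "http://dbpedia.org/resource/"


def wikipedia_to_dbpedia(uri):
    if uri.startswith(WIKIPEDIA_PREFIX):
        return DBPEDIA_PREFIX + uri[len(WIKIPEDIA_PREFIX):]
    return uri  # leave anything else untouched


print(wikipedia_to_dbpedia("http://en.wikipedia.org/wiki/The_Beatles"))
# -> http://dbpedia.org/resource/The_Beatles
```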
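And to make the @context suggestion in Kuno's mail concrete, a toy sketch: attach a context to a simplified, made-up fragment of web-service output and let rdflib produce RDF/XML. It assumes JSON-LD parsing is available to rdflib (built in since rdflib 6, via the rdflib-jsonld plugin before that); the field names, the FOAF/Music Ontology term mappings and the artist name are illustrative, not the real web-service output.

```python
# Toy sketch of the suggestion above: add a JSON-LD @context to (simplified,
# made-up) web-service output and let rdflib do the JSON-LD -> RDF/XML
# conversion. Assumes JSON-LD support is available to rdflib (built in from
# rdflib 6; the rdflib-jsonld plugin before that). Field names, term
# mappings and the artist name are illustrative only.
import json
from rdflib import Graph

doc = {
    "@context": {
        "mo": "http://purl.org/ontology/mo/",
        "foaf": "http://xmlns.com/foaf/0.1/",
        "MusicArtist": "mo:MusicArtist",
        "name": "foaf:name",
    },
    "@id": "https://musicbrainz.org/artist/45a663b5-b1cb-4a91-bff6-2bef7bbfdd76",
    "@type": "MusicArtist",
    "name": "Placeholder artist name",
}

g = Graph()
g.parse(data=json.dumps(doc), format="json-ld")
print(g.serialize(format="xml"))
```

The same graph could then be dumped under the .rdf URLs Kuno mentions, or in other serializations (Turtle, N-Triples) via g.serialize(format=...).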
Received on Friday, 6 September 2013 19:33:50 UTC