RE: Linked Media: Extending Linked Data for Updates and arbitrary Media Formats using the REST Principles

Sebastian, Michael, may I also point out Linked Open Services [1] as related work?

This approach also tries to bring the full uniform interface of REST together with the Linked Data principles. In common with other approaches such as Linked Data Services [2], with which it is converging under the name of Linked Services [3], it provides one answer to Michael's question about the role of SPARQL.

I'm particularly excited about Sebastian's media-oriented approach, as I've been working for just over a month now with MusicBrainz on the LinkedBrainz project at Queen Mary's, trying to push these aims. Some people might remember Paul Groth's challenge to the Linked Open Services presentation at the Future Internet Symposium: that MusicBrainz "used to have such an RDF API and deprecated it".

Just one more plug: following the success of the ISWC Linked Open Services tutorial, there will be two Linked Services tutorials at ESWC - one at the conference and one at the Summer School.

Barry

[1] linkedopenservices.org/blog
[2] openlids.org
[3] linkedservices.org



-----Original Message-----
From: public-lod-request@w3.org on behalf of Michael Hausenblas
Sent: Thu 05/05/2011 10:34
To: Sebastian Schaffert
Cc: public-lod
Subject: Re: Linked Media: Extending Linked Data for Updates and arbitrary Media Formats using the REST Principles
 

Sebastian,

Good stuff and timely, indeed. Can you please tell me how this
relates to TimBL's notes [1] [2] (if it does)?

I'm especially interested in the following:


  + How exactly is SPARQL utilised in your proposal? See also [3] and  
[4] for related work.
  + How are authentication and authorisation handled (e.g. via WebID [5]
and WAC [6])?

Cheers,
	Michael
[1] http://www.w3.org/DesignIssues/ReadWriteLinkedData.html
[2] http://www.w3.org/DesignIssues/CloudStorage.html
[3] http://www.w3.org/TR/sparql11-http-rdf-update/
[4] http://portal.acm.org/citation.cfm?id=1645412
[5] http://www.w3.org/2005/Incubator/webid/spec/
[6] http://www.w3.org/wiki/WebAccessControl
--
Dr. Michael Hausenblas, Research Fellow
LiDRC - Linked Data Research Centre
DERI - Digital Enterprise Research Institute
NUIG - National University of Ireland, Galway
Ireland, Europe
Tel. +353 91 495730
http://linkeddata.deri.ie/
http://sw-app.org/about.html

On 5 May 2011, at 09:12, Sebastian Schaffert wrote:

> Dear all,
>
> In the context of our work in Salzburg NewMediaLab, and in the KiWi EU
> project before it, we had an idea on which I would like to get feedback
> from the Linked Data community. We are also writing an article about it
> (probably for ISWC), but I think it makes sense to discuss the idea in
> advance. Maybe there is also some related work that we are not yet
> aware of.
>
> Salzburg NewMediaLab is a close-to-industry research project in the
> media/broadcasting and enterprise knowledge management domains. The
> goal of the current phase is to connect enterprise archives (multimedia
> but also others) with Linked (Open) Data sources to provide added
> value. In this context it is not only relevant to publish and consume
> Linked Data; we also had the requirement to easily update Linked Data
> and to manage content and metadata in a uniform way. We therefore call
> our extension "Linked Media", and I will briefly describe it in a
> rather informal way.
>
> Background
> ----------
>
> The idea is a kind of combination of concepts from Linked Data,  
> Media Management and Enterprise Knowledge Management (from KiWi). Up
> to now, the Linked Data world has been read-only and primarily
> concerned with the structured data associated with a resource
> (regardless of whether this data is represented in RDF or visualised in
> HTML).
> However, in order to build more interactive mashups, it would make  
> sense to also allow updates to the data in Linked Data servers. And  
> in enterprise settings, it makes sense to have a unified means to  
> manage both structured data and human-readable content for a  
> resource. For example, a resource might represent a video on the  
> internet, and depending on how I access the video I want to get  
> either the video itself or the structured metadata about the video  
> (e.g. a list of RDF links to DBPedia for all persons depicted in the  
> video).
>
> Our Linked Media idea tries to address both issues:
> - it extends the Linked Data principles with RESTful principles for  
> addition, modification, and deletion of resources
> - it extends the Linked Data principles with a means to manage content
> and metadata alike, using a MIME type to URL mapping
>
>
> Linked Media Idea
> -----------------
>
> 1. extending the Linked Data principles for updates using REST
>
> Linked Data is currently "read-only": depending on the Accept header
> in the HTTP request, it redirects a request to the appropriate
> representation (RDF or HTML). To support updates in Linked Data, a
> natural extension is to apply REST and otherwise use the same or
> analogous principles. This means that GET is used to retrieve a
> resource, POST is used to create a resource, PUT is used to update a
> resource, and DELETE is used to remove a resource. In the case of GET,
> the Accept header determines what to retrieve and redirects to the
> appropriate URL; in the case of PUT, the Content-Type header determines
> what to update and also redirects to the appropriate URL. This
> extension is therefore fully backwards compatible with Linked Data,
> i.e. each Linked Media server is a Linked Data server.
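>
> To make the GET case concrete, here is a rough sketch of a client
> following such a redirect in Python (the resource URL is illustrative,
> the requests library is used only for brevity, and the redirect is
> followed by hand so that the mapping stays visible):
>
>   import requests
>
>   resource = "http://localhost/resource/1234"
>
>   # Ask for RDF metadata; the server answers with "303 See Other"
>   # and a Location header pointing to the metadata URL.
>   r = requests.get(resource,
>                    headers={"Accept": "application/rdf+xml"},
>                    allow_redirects=False)
>   assert r.status_code == 303
>   meta_url = r.headers["Location"]
>
>   # Requesting the redirect target returns the RDF serialisation.
>   print(requests.get(meta_url).text)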
>
> 2. extending the Linked Data principles for arbitrary content using  
> MIME mapping and "rel" Content Type
>
> Linked Data currently distinguishes between an RDF representation  
> and a human readable representation in the GET request. The GET  
> request then redirects either to the URL of the RDF representation  
> or to the URL of the human readable (HTML) representation. We  
> extended this principle so that it can handle arbitrary formats  
> based on the MIME type in Accept/Content-Type headers and so that it  
> can still distinguish between content and metadata based on the  
> "rel" extension for Accept/Content-Type headers.
>
> The basic idea is to rewrite resource URLs of the form
> http://localhost/resource/1234 depending on the MIME type as follows
> (a small sketch of this mapping follows the list):
> - if the Accept/Content-Type header is of the form
> "Accept: type/subtype; rel=content", then the redirect URL is
> http://localhost/content/type/subtype/1234; when this URL is requested,
> the Linked Media Server will deliver the CONTENT of the resource in the
> content type passed, or it will return "415 Unsupported Media Type" in
> case the content is not available in this MIME type
> - if the Accept/Content-Type header is of the form
> "Accept: type/subtype; rel=meta", then the redirect URL is
> http://localhost/meta/type/subtype/1234; when this URL is requested,
> the Linked Media Server will deliver the METADATA associated with the
> resource and try to serialise it in the content type passed (or again
> return 415); in this way, different RDF serialisations can be
> supported, e.g. RDF/JSON
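>
> To illustrate the rewriting rule, here is a rough, simplified sketch of
> the mapping from such a header value to the redirect URL in Python
> (quality values and multiple media ranges are ignored, the base URL is
> just an example, and the function name is made up for the sketch):
>
>   BASE = "http://localhost"
>
>   def redirect_url(resource_id, header_value):
>       """Map e.g. "text/html; rel=content" for resource 1234 to
>       "http://localhost/content/text/html/1234"."""
>       parts = [p.strip() for p in header_value.split(";")]
>       mime = parts[0]          # e.g. "text/html"
>       rel = "meta"             # default if no "rel" is given
>       for p in parts[1:]:
>           if p.startswith("rel="):
>               rel = p[len("rel="):]
>       return "%s/%s/%s/%s" % (BASE, rel, mime, resource_id)
>
>   # redirect_url("1234", "text/html; rel=content")
>   #   -> "http://localhost/content/text/html/1234"
>   # redirect_url("1234", "application/rdf+xml; rel=meta")
>   #   -> "http://localhost/meta/application/rdf+xml/1234"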
>
> The differentiation between content and metadata using "rel" is  
> necessary for a "kind-of" reification, because we need to be able to  
> distinguish between an RDF/XML document stored in the server (i.e.  
> content) and the metadata associated with it (i.e. metadata). The  
> same holds for human-readable content: "Accept: text/html;  
> rel=content" will return the HTML content, while "Accept: text/html;  
> rel=meta" will return an HTML visualisation of the metadata (e.g. as  
> a table).
>
> This extension is also fully backwards-compatible with Linked Data,
> because the default behaviour (if no "rel" is given) is "rel=meta".
> So if a Linked Data client accesses the Linked Media server, it will
> behave as expected.
>
>
>
>
> Implementation / Principles
> ---------------------------
>
> What we implemented in our Linked Media Server are therefore the
> following extensions (to simplify the presentation, I am always using a
> concrete resource URI, but it can be more or less arbitrary; a short
> end-to-end sketch follows the list):
>
> - GET http://localhost/resource/1234
> in case the resource does not exist, returns a "404 Not Found";
> otherwise returns a "303 See Other" as follows:
> 1. header: "Accept: type/subtype; rel=content" will redirect to
> http://localhost/content/type/subtype/1234; requesting the URL will
> return the content in the format requested by the MIME type, if
> available; the HTTP response will then contain a "Link:" header
> linking to all metadata representations of the URL in all metadata
> serialisation formats supported; in case the resource is not available
> in the format requested, returns "415 Unsupported Media Type"
> 2. header: "Accept: type/subtype; rel=meta" will redirect to
> http://localhost/meta/type/subtype/1234; requesting the URL will return
> the RDF metadata about the resource in the format requested by the MIME
> type; the HTTP response will then contain a "Link:" header linking to
> all content representations of the URL in all content formats
> available; in case the resource metadata is not available in the format
> requested, returns "415 Unsupported Media Type"
>
> - POST http://localhost/resource/1234
> will create the resource with the URI given and return "201  
> Created"; the response will contain a "Location:" header pointing to  
> the URL
>
> - POST http://localhost/resource
> will create a resource with a random URI and return "201 Created";  
> the response will contain a "Location:" header pointing to the  
> generated URL
>
> - PUT http://localhost/resource/1234
> in case the resource does not exist, returns a "404 Not Found";
> otherwise returns a "303 See Other" analogous to GET, but instead of
> the "Accept" header, the "Content-Type" header is used:
> 1. header: "Content-Type: type/subtype; rel=content" will redirect to
> the content location as in GET; a subsequent PUT to the redirected URL
> will upload the content in the given MIME type and store it on the
> server; returns a "200 OK" in case of a successful upload or different
> error codes in case of errors
> 2. header: "Content-Type: type/subtype; rel=meta" will redirect to the
> metadata location as in GET; a subsequent PUT to the redirected URL
> will upload the metadata in the given MIME type, which will then be
> parsed by the server and stored in the triple store; returns a "200
> OK" in case of a successful upload, a "415 Unsupported Media Type" in
> case no parser exists for the format, or other error codes
>
> - DELETE http://localhost/resource/1234
> in case the resource does not exist, returns a "404 Not Found";
> otherwise removes the resource and returns a "200 OK"; removing the
> resource removes all content and all metadata (currently all triples
> where the resource is subject, object or predicate) from the triple
> store
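>
> Putting these pieces together, a rough end-to-end sketch from the
> client side (again Python with the requests library; the URIs, the
> Turtle snippet and the MIME types are only examples, not the only
> formats the server could support):
>
>   import requests
>
>   resource = "http://localhost/resource/1234"
>
>   # 1. Create the resource.
>   r = requests.post(resource)
>   assert r.status_code == 201
>
>   # 2. Update its metadata: PUT on the resource URI answers with
>   #    "303 See Other"; the redirect target then accepts the upload.
>   r = requests.put(resource,
>                    headers={"Content-Type": "text/turtle; rel=meta"},
>                    allow_redirects=False)
>   meta_url = r.headers["Location"]
>   turtle = ('<http://localhost/resource/1234> '
>             '<http://purl.org/dc/terms/title> "Example" .')
>   requests.put(meta_url, data=turtle,
>                headers={"Content-Type": "text/turtle"})
>
>   # 3. Retrieve the metadata again (handling the 303 as before).
>   r = requests.get(resource,
>                    headers={"Accept": "text/turtle; rel=meta"},
>                    allow_redirects=False)
>   print(requests.get(r.headers["Location"]).text)
>
>   # 4. Delete the resource together with its content and metadata.
>   requests.delete(resource)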
>
>
> All of this is implemented in our Linked Media Server (aka KiWi 2.0),
> of which we will have a first pre-release at the end of the week. We
> are also preparing a screencast of a demo of the implementation, which
> in addition to these Linked Media functionalities also offers a
> Semantic Search that combines full-text and SPARQL querying in a
> uniform way.
>
>
> What do you think of this idea? Would this be a reasonable extension  
> to Linked Data? In any case it fulfils the requirements that we have
> in our industry projects, so we will follow it further. As I said, we
> are also planning a more detailed and elaborate submission
> to the ISWC In-Use Track and also to the ISWC Demo Track, so there  
> will hopefully be a chance to also discuss it there ...
>
>
> Greetings,
>
>
>
> Sebastian
> -- 
> | Dr. Sebastian Schaffert          sebastian.schaffert@salzburgresearch.at
> | Salzburg Research Forschungsgesellschaft  http://www.salzburgresearch.at
> | Head of Knowledge and Media Technologies Group          +43 662  
> 2288 423
> | Jakob-Haringer Strasse 5/II
> | A-5020 Salzburg
>
>
