Re: ISSUE-66: LinkedDataT

Hi Markus,

Maybe, for those who don't have time to watch the recording of the APICraft session, you could give a brief summary of the different approaches? It would also be interesting to hear your opinion on the extent to which they re-invent, or are compatible with, the basic RDF data model.

John

On 5 Aug 2014, at 15:39, "Markus Lanthaler" <markus.lanthaler@gmx.net> wrote:

> On 5 Aug 2014 at 14:20, Ruben Verborgh wrote:
>>>> I'd dare to say that the majority of people do assume
>>>> that Linked Data is just done with RDF.
>>> 
>>> That's obviously true for the Semantic Web community. Not so true
>>> for the rest of the world :-)
>> 
>> I thought the only people who cared about Linked Data
>> were those in the Semantic Web community. My bad!
>> 
>> Any examples of non-RDF Linked Data in the wild?
> 
> There's a lot of buzz around Hypermedia APIs at the moment. Some people
> describe these as Linked APIs or even as Linked Data without thinking about
> RDF for a second.
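> 
> To give a rough idea, here's a made-up example (hypothetical URLs and
> field names): a typical hypermedia-style response just embeds hyperlinks
> in plain JSON, "linked" data in that loose sense, but no RDF anywhere:
> 
>     # Hypothetical hypermedia-style API response: plain JSON-ish data
>     # with hyperlinks, but no RDF vocabulary or triples involved.
>     order = {
>         "id": "1234",
>         "status": "shipped",
>         "_links": {
>             "self":     {"href": "http://api.example.com/orders/1234"},
>             "customer": {"href": "http://api.example.com/customers/42"},
>         },
>     }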
> 
> 
>>>> So to what extent is it then necessary to clarify this?
>>> 
>>> I think it is very important as our group is not a homogeneous
>>> group of Semantic Web experts.
>> 
>> Still not fully convinced there are people
>> who don't think of "RDF" when hearing "Linked Data".
>> Could you point me to examples?
> 
> I have been at lots of developer conferences in the last couple of months.
> Very few people there have any Semantic Web background. Nevertheless they
> use terms like "linked data" to talk about data that contains hyperlinks.
> 
> 
>>>> What do you think about the current introduction
>>>> to the triple pattern fragments spec [1]?
>>> 
>>> It's quite nice but I think it could be further improved, especially for
>>> people without a lot of SemWeb background.
>> 
>> Any suggestions?
> 
> I think the proposal further down would be a first step.
> 
> 
>>>>   By publishing Linked Data [LINKED-DATA],
>>>>   we enable automated clients to consume information.
>>> 
>>> Hmm... automated clients such as Google are quite happy consuming plain old
>>> HTML... I know what you are trying to say but people who haven't spent a
>>> whole lot of time on this won't understand it, I think.
>> 
>> Instead of "consume":
>> - "understand" (not the right word)
>> - "interpret" (what does that mean)
>> - . ?
>> 
>> "interpret" might be best!
> 
> If it doesn't add anything, just leave it out.
> 
> 
>>> Maybe it would be more straightforward to explain it the other way round:
>>> - documents are in natural language
>>> - machines are bad at understanding natural language
>>> - machines prefer structured data using unambiguous identifiers
>>> - the Web uses URLs* as identifiers
>>> - RDF allows data to be expressed in a machine-processable way by
>>>   leveraging URLs
>>> (- RDF expresses data in the form of triples) -- could be omitted
>>> - RDF can be serialized in various formats such as JSON-LD, HTML+RDFa, or
>>>   Turtle
>> 
>> I suppose I could rewrite it like that, yes!
> 
> Do others think this clarifies things?
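> 
> For what it's worth, here is a minimal sketch of the last two points
> (triples built from URLs, serialized in several formats). The URLs and
> the triple are made up, and it assumes rdflib 6.0 or later, which ships
> a JSON-LD serializer:
> 
>     from rdflib import Graph, URIRef
> 
>     g = Graph()
>     # One triple: subject, predicate and object are all identified by URLs.
>     g.add((URIRef("http://example.org/book/1"),
>            URIRef("http://purl.org/dc/terms/creator"),
>            URIRef("http://example.org/person/alice")))
> 
>     # The same data, serialized in two of the formats mentioned above.
>     print(g.serialize(format="turtle"))
>     print(g.serialize(format="json-ld"))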
> 
> 
>>> I would also suggest using a different term than "Linked Data document". Is
>>> it actually needed or could we also get rid of this concept?
>> 
>> I used to call them colloquially "subject pages";
>> I think it was Olaf who recommended "Linked Data document" to me.
>> 
>> Any term that's more clear is good for me.
> 
> What about "RDF representations"? Swapping sections 4.1 and 4.2 might make
> this simpler, as you could simply say that an (RDF) "data dump" is the union
> of all RDF "representations" of a dataset/API/whatever.
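> 
> As a rough sketch of what I mean (again with rdflib, and made-up
> resources), the dump would then literally be the merge of the
> per-resource graphs:
> 
>     from rdflib import Graph
> 
>     # Two (made-up) RDF representations, one per resource of the dataset.
>     alice = Graph().parse(data="""
>         <http://example.org/person/alice>
>             <http://xmlns.com/foaf/0.1/name> "Alice" .
>     """, format="turtle")
>     book = Graph().parse(data="""
>         <http://example.org/book/1>
>             <http://purl.org/dc/terms/creator>
>                 <http://example.org/person/alice> .
>     """, format="turtle")
> 
>     # The "data dump" is the union of all the RDF representations.
>     dump = alice + book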
> 
> 
> --
> Markus Lanthaler
> @markuslanthaler

Received on Tuesday, 5 August 2014 14:25:23 UTC