Re: Hydra use case: Linked Data Fragments (ISSUE-30)

>> Downside: in both cases, the input box in the UI
>> displays something different than what you've typed in.
>> But I guess that's a minor one.
> 
> Redirects increase latency without bringing a real advantage in this case.

Caching, maybe. But I guess a Content-Location header would be better.
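
For example (a sketch with made-up URLs, not from any spec), instead of a 301 the server could answer the non-canonical request directly and advertise the canonical form, so the client learns the canonical URL without the extra round trip a redirect would cost:

```http
GET /fragment?s=%3Chttp%3A%2F%2Fexample.org%2Fa%3E HTTP/1.1
Host: example.org

HTTP/1.1 200 OK
Content-Type: text/turtle
Content-Location: /fragment?subject=http%3A%2F%2Fexample.org%2Fa
```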

> You could work around the UI issue by doing an AJAX call instead of
> reloading the whole page. Loading JSON-LD would make it quite trivial to do
> the templating client-side.

Exactly, although Turtle might be faster in this particular case, since the output consists of triples.

>>> I had something similar in mind. I was thinking of something like
>>> "ValueOnly" which would correspond to your "TextualSerialization" (IRI
>>> as-is, only lexical form of literals) and "FullRepresentation" (with
>>> a better name) which would correspond to NodeSerialization.
>> 
>> Good. Should I start building something like this?
> 
> That would be fantastic. Then we'd have something to iterate on.

Okay, will do in the coming days.

>> Concretely, how to represent 'literal with a quote " inside'?
>> Does that become
>>    - "literal with a quote " inside" (i.e., double quotes as markers,
>> no escaping)
>>    - "literal with a quote \" inside" (i.e., double quotes are special
>> chars, escape)
> 
> Hmm.. I don't care that much but the former looks a bit strange to me. We
> also need to consider datatypes and language tags. So we have 4 cases:
> 
>  1) IRI (+ bnode ID)
>  2) literal of type xsd:string
>  3) literal of type rdf:langString
>  4) literal of any other type
> 
> We could of course also combine 2 & 4 by always including the datatype

My current answer is:
1) < … > or _:x
2) "string"
3) "string"@en
4) "string"^^<type>

Gregg remarked that <…> are not necessary.
I'd just use them to differentiate between qnames and full IRIs,
but that might not be needed.
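
To make the four cases concrete, here is a minimal sketch of such a serializer (the function name, the term-tuple shape, and the kind labels are all illustrative, not from any spec; it uses the backslash-escaping option for quotes):

```python
def serialize_term(term):
    """Serialize an RDF term to the textual form sketched above:
    1) IRIs as <...>, blank nodes as _:x
    2) plain strings as "..."
    3) language-tagged strings as "..."@lang
    4) other literals as "..."^^<datatype>
    Double quotes inside literals are backslash-escaped."""
    kind, value, extra = term
    if kind == "iri":
        return "<" + value + ">"
    if kind == "bnode":
        return "_:" + value
    # literal: escape backslashes first, then quotes
    escaped = value.replace("\\", "\\\\").replace('"', '\\"')
    if kind == "string":
        return '"' + escaped + '"'
    if kind == "lang":
        return '"' + escaped + '"@' + extra
    if kind == "typed":
        return '"' + escaped + '"^^<' + extra + '>'
    raise ValueError("unknown term kind: " + kind)

print(serialize_term(("string", 'literal with a quote " inside', None)))
# -> "literal with a quote \" inside"
print(serialize_term(("lang", "string", "en")))
# -> "string"@en
```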

>> Both can coexist;
>> with OWL, we could say that hydra:filter has the restriction
>> that the property paths are always direct mappings.
> 
> I haven't checked that yet. Do you have that declaration at hand?

It depends on how we'd express property paths.

>> But at the moment, this selection happens on the meta-level
>> and not the data-level, i.e., filtering on triple's
>> subject/predicate/object,
>> not on the actual concepts they describe.
>> 
>>> I meant a Hydra ApiDocumentation along with the used
>>> vocabularies basically provides a client an (incomplete) map
>>> of the graph a service is exposing. Could that map be used to
>>> dynamically solve queries?
>> 
>> I see. That could be interesting indeed, but might be a little to deep
>> for Hydra
>> (and the reason there would be an LDF vocabulary).
> 
> I'm not sure that anything new is needed, it's more about exploiting the
> information that's already available.

It would be needed: we'd also need to detail how each part of the graph can be accessed,
i.e., what kind of fragments you are offering.
Without that, there's no query plan; without a query plan, exponential times
(or you'd have to download the whole thing before querying it).

>> The goal is in the future that there are many types of fragments,
>> even some that servers can define themselves.
>> Client could then dynamically decide how to approach a certain query
>> optimally.
> 
> I think my question goes more in the opposite direction. An API exposes
> numerous very small LDFs.. the question is how to *collect* or find the
> relevant LDFs to get all the data to be able to answer complex queries.

That actually seems like the same question to me.
You'd need to know what kind of fragments are offered
to know what fragments you need and how to collect them.
See http://linkeddatafragments.org/publications/ldow2014.pdf#page=5
for a collection algorithm for a specific kind of fragments.
But I guess you'll get to that on the train tomorrow :-)
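
The greedy idea behind that collection algorithm can be sketched roughly as follows (a toy sketch, not the paper's exact algorithm: `fetch_fragment` stands in for a hypothetical LDF client that returns a match count plus the matching triples for one triple pattern; all names are made up):

```python
def is_var(term):
    """Variables are strings starting with '?', e.g. '?x'."""
    return isinstance(term, str) and term.startswith("?")

def apply_bindings(pattern, bindings):
    """Substitute already-bound variables into a triple pattern."""
    return tuple(bindings.get(t, t) for t in pattern)

def unify(pattern, triple, bindings):
    """Extend bindings so that pattern matches triple, or return None."""
    new = dict(bindings)
    for p, v in zip(pattern, triple):
        p = new.get(p, p)          # resolve an already-bound variable
        if is_var(p):
            new[p] = v
        elif p != v:
            return None
    return new

def solve_bgp(patterns, bindings, fetch_fragment):
    """Greedily evaluate a basic graph pattern: always expand the
    triple pattern whose fragment reports the fewest matches."""
    if not patterns:
        yield dict(bindings)
        return
    ordered = sorted(
        patterns,
        key=lambda p: fetch_fragment(apply_bindings(p, bindings))[0])
    best, rest = ordered[0], ordered[1:]
    _, triples = fetch_fragment(apply_bindings(best, bindings))
    for triple in triples:
        new = unify(best, triple, bindings)
        if new is not None:
            yield from solve_bgp(rest, new, fetch_fragment)

# Toy in-memory "server" for illustration:
DATA = [("a", "type", "Person"), ("b", "type", "Person"),
        ("a", "knows", "b"), ("b", "knows", "c")]

def fetch_fragment(pattern):
    matches = [t for t in DATA
               if all(is_var(p) or p == x for p, x in zip(pattern, t))]
    return len(matches), matches

query = [("?x", "knows", "?y"), ("?x", "type", "Person")]
print(list(solve_bgp(query, {}, fetch_fragment)))
# -> [{'?x': 'a', '?y': 'b'}, {'?x': 'b', '?y': 'c'}]
```

The point of selecting the smallest fragment first is exactly the query-plan issue above: the counts in the fragment metadata are what keep the search from blowing up.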

Cheers,

Ruben

Received on Monday, 17 March 2014 21:02:58 UTC