- From: Basil Ell <basil.ell@kit.edu>
- Date: Mon, 8 Aug 2011 17:15:14 +0200
- To: <public-lod@w3.org>
Hi,
I wonder about the limit on the number of triples returned when accessing DBpedia URIs:
$ rapper -c "http://dbpedia.org/resource/Netherlands"
rapper: Parsing URI http://dbpedia.org/resource/Netherlands with
parser rdfxml
rapper: Parsing returned 2001 triples
When I access that URI in a browser I receive the complete data. This
means that machines are underprivileged, even though they, rather than
human users, are the ones capable of processing this amount of data.
Wouldn't it be nice to:
1) whenever such a limit is applied, return a triple stating that a
limit has been applied, so that the machine knows it does not know
everything there is to know, and
2) include triples based on their expected relevance? For example,
rdfs:label is generally of interest.
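To make the two suggestions concrete, here is a minimal Python sketch. Note that the truncation-marker predicate (http://example.org/resultSetTruncated) is purely hypothetical, DBpedia emits no such triple today; the triples are plain (subject, predicate, object) string tuples rather than a real RDF graph.

```python
# Sketch of the two suggestions above. The marker predicate is a
# hypothetical example, not anything DBpedia actually serves.

RDFS_LABEL = "http://www.w3.org/2000/01/rdf-schema#label"
TRUNCATED = "http://example.org/resultSetTruncated"  # hypothetical predicate

def is_truncated(triples, resource):
    """Suggestion (1): detect a marker triple saying a limit was applied,
    so the client knows the result set is incomplete."""
    return (resource, TRUNCATED, "true") in triples

def by_relevance(triples, preferred=(RDFS_LABEL,)):
    """Suggestion (2): order triples so generally interesting predicates
    (e.g. rdfs:label) come first and survive any server-side cut-off."""
    rank = {p: i for i, p in enumerate(preferred)}
    return sorted(triples, key=lambda t: rank.get(t[1], len(preferred)))

nl = "http://dbpedia.org/resource/Netherlands"
triples = [
    (nl, "http://dbpedia.org/ontology/capital",
         "http://dbpedia.org/resource/Amsterdam"),
    (nl, RDFS_LABEL, "Netherlands"),
    (nl, TRUNCATED, "true"),  # server states the result set was cut off
]

print(is_truncated(triples, nl))    # -> True
print(by_relevance(triples)[0][1])  # -> the rdfs:label predicate URI
```

A client receiving such a marker triple could then fall back to the SPARQL endpoint or a data dump to retrieve the remaining triples.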
Best regards,
Basil Ell
Received on Monday, 8 August 2011 17:17:14 UTC