- From: Tom Morris <tfmorris@gmail.com>
- Date: Fri, 30 Mar 2012 12:05:09 -0400
- To: public-lod@w3.org
Sorry to revive an old thread, but I never saw an answer to the first question here. (I'm not interested in DBpedia so much as the general case.)

On Mon, Aug 8, 2011 at 11:15 AM, Basil Ell <basil.ell@kit.edu> wrote:
> I wonder about the limit of triples when accessing DBpedia URIs:
>
> $ rapper -c "http://dbpedia.org/resource/Netherlands"
> rapper: Parsing URI http://dbpedia.org/resource/Netherlands with parser rdfxml
> rapper: Parsing returned 2001 triples
>
> When I access that URI by browser I receive the complete data, this means
> that machines are underprivileged
> whereas they are the ones that are capable of processing the amount of data
> instead of human users.
>
> Wouldn't it be nice to:
> 1) whenever such a limit is applied to return a triple that states that a
> limit has been applied,
> then the machine knows that it does not know everything there is to know

Clearly there are going to be cases where an RDF generator will be unable to return complete results due to resource constraints. What should happen in that case? Should it return an error and no results? Partial results along with some kind of alternate status indicating that they are partial results? Something else?

Which spec should I look in to find the form of the partial-results indicator (assuming there is such a beast)?

Tom
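For what it's worth, the kind of indicator Basil suggests in (1) could be as simple as a couple of triples attached to the resource being described. The Turtle sketch below is purely illustrative: the ex: namespace and both property names are made up for this example, not taken from any existing vocabulary or spec that I know of.

    @prefix ex:  <http://example.org/ns#> .
    @prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

    # Hypothetical triples a server could add when it truncates a response:
    <http://dbpedia.org/resource/Netherlands>
        ex:descriptionIsPartial  true ;
        ex:tripleLimitApplied    "2001"^^xsd:integer .

A client that sees ex:descriptionIsPartial true at least knows that it does not know everything there is to know, which is what Basil was after. Whether anything standard exists along these lines is exactly the question I'm still hoping someone can answer.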
Received on Friday, 30 March 2012 16:05:39 UTC