- From: Steve Harris <steve.harris@garlik.com>
- Date: Sat, 16 Oct 2010 07:32:37 +0100
- To: Olivier Rossel <olivier.rossel@gmail.com>
- Cc: Semantic Web <semantic-web@w3.org>
Only in SPARQL 1.1. But even there it can be tricky. You can use sub-queries with limits in them, but the underpinning logic isn't necessarily what you'd expect - the sub-query is executed logically "before" the outer query - so it's still quite tricky. A query to do what you want will be ludicrously verbose.

In short: it's easier to do it at the application layer. Get all the towns, then do LIMIT 100 queries on each.

- Steve

Sent on the move.

On 15 Oct 2010, at 21:23, Olivier Rossel <olivier.rossel@gmail.com> wrote:

> Well, let me rephrase the subject of my mail a little bit.
> I want to retrieve a set of towns, their various labels and their geolocation.
> The labels can be in English and/or Spanish and/or French and/or German.
>
> My data set contains a LOT of towns.
> So I will limit my query.
> I plan to retrieve town data in batches of 100 towns.
>
> My first idea was to use the SPARQL "LIMIT" and "OFFSET" keywords.
> But I think they work at the result set's row level, and each row of
> {town, label, longitude, latitude} counts as 1.
>
> So 100 rows of the result set do not correspond to data for 100 distinct towns.
>
> Is there a way to constrain my SPARQL queries to return all the data for the
> first 100 towns, then all the data for the second 100 towns, etc.?
>
> Note: oh, by the way, in my app, I use CONSTRUCT and SELECT! I don't know if
> that is an important point. But anyway... :)
>
> Any help is gladly welcome.
>
> Thanks.
>
> --
> Olivier Rossel
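For illustration, a minimal sketch of the sub-query approach Steve describes, assuming SPARQL 1.1, a hypothetical dbo:Town class, and the WGS84 geo vocabulary for coordinates - the actual class and predicates depend on the data set being queried:

    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    PREFIX geo:  <http://www.w3.org/2003/01/geo/wgs84_pos#>
    PREFIX dbo:  <http://dbpedia.org/ontology/>

    SELECT ?town ?label ?long ?lat
    WHERE {
      # Inner query pages over distinct towns; ORDER BY makes OFFSET paging stable.
      {
        SELECT DISTINCT ?town
        WHERE { ?town a dbo:Town }
        ORDER BY ?town
        LIMIT 100
        OFFSET 0     # 100, 200, ... for the following batches
      }
      # Outer patterns attach every label and the coordinates for those 100 towns.
      ?town rdfs:label ?label ;
            geo:long ?long ;
            geo:lat ?lat .
      FILTER ( lang(?label) IN ("en", "es", "fr", "de") )
    }

The inner SELECT is evaluated first and limits the number of distinct towns, so OFFSET steps through batches of towns rather than rows; a LIMIT on the outer query would count result-set rows again, which is exactly the problem described in the original question.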
Received on Saturday, 16 October 2010 06:33:55 UTC