- From: Souza, Renan F. S. <renan123@missouristate.edu>
- Date: Sat, 19 Jul 2014 14:48:00 -0300
- To: Luca Matteis <lmatteis@gmail.com>
- Cc: "semantic-web@w3.org Web" <semantic-web@w3.org>
Received on Saturday, 19 July 2014 17:49:06 UTC
Not sure if triple store implementations allow you to do that directly. One thing you could try is to use the LIMIT and OFFSET modifiers (with ORDER BY) so that each partial result fits in memory, then write that result to a file. Repeat as many times as needed until no results are left. That would work as long as each query using LIMIT, OFFSET, and ORDER BY does not take too long to run. You can use COUNT to check how many iterations you would need. Of course, if the results are really that big, I would write a simple program to do the job.

On Fri, Jul 18, 2014 at 6:57 PM, Luca Matteis <lmatteis@gmail.com> wrote:
> Hello,
>
> I'm executing a SPARQL query against a large endpoint I've set up
> locally. The problem is that the result of this query is too large to
> be held in memory. Are there endpoints that allow me to stream the
> results to disk? For example, if it's a CONSTRUCT query it could
> stream the N-Triples line by line to disk.
>
> Thank you,
> Luca

--
Thank you!
Regards,

Souza, Renan F. S.
Bachelor of Computer Science
Missouri State University, Springfield, MO
Masters in Computer Systems Engineering
Federal University of Rio de Janeiro, Brazil
+55-21-99257-3934
Personal email: renan-francisco@hotmail.com
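[Editor's note: the LIMIT/OFFSET paging loop suggested in the reply above can be sketched roughly as follows. This is a minimal sketch, not a definitive implementation: `run_query` is a hypothetical callable standing in for whatever SPARQL client you use against your endpoint (e.g. it could wrap SPARQLWrapper), and the page size of 10000 is an arbitrary choice you would tune to your memory budget.]

```python
def dump_paged(run_query, out_path, page_size=10000):
    """Fetch the result set one page at a time and append each page
    to a file, so the full result never has to fit in memory.

    `run_query(limit, offset)` is assumed to execute the SPARQL query
    with the given LIMIT/OFFSET (and a fixed ORDER BY, so that paging
    is stable) and return a list of serialized result rows.
    """
    offset = 0
    total = 0
    with open(out_path, "w", encoding="utf-8") as out:
        while True:
            rows = run_query(limit=page_size, offset=offset)
            if not rows:
                break  # endpoint ran dry exactly on a page boundary
            for row in rows:
                out.write(row + "\n")
            total += len(rows)
            if len(rows) < page_size:
                break  # last, partial page
            offset += page_size
    return total
```

The query text itself would carry the modifiers, e.g. `... ORDER BY ?s LIMIT 10000 OFFSET 20000`; without the ORDER BY, most stores give no guarantee that successive pages are consistent with each other.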