
Re: Survey of RDF data on the Web

From: Seth Russell <seth@robustai.net>
Date: Mon, 19 Aug 2002 11:01:21 -0700
Message-ID: <009301c247aa$6d751fc0$657ba8c0@c1457248a.sttls1.wa.home.com>
To: "Andreas Eberhart" <andreas.eberhart@i-u.de>, "Dan Brickley" <danbri@w3.org>
Cc: <www-rdf-interest@w3.org>

From: "Andreas Eberhart" <andreas.eberhart@i-u.de>

> I'm trying to export all the facts as one large RDF file. I used Jena ARP
> and Sergey Melnik's RDF API, but with both I'm running out of main memory
> while filling the model (i.e. before I can serialize it as RDF). Is there a
> possibility where not the entire data has to be held in main memory? Maybe a
> two-pass approach, where the predicate namespaces are collected in the first
> pass and the data is serialized during the second pass.
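The two-pass idea in the question could be sketched roughly as follows. This is a minimal illustration over line-oriented N-Triples input, not Jena or Melnik's API; the function names and the `nsN` prefix scheme are invented for the sketch:

```python
def split_ns(uri):
    # Split a URI into (namespace, local name) at the last '#' or '/'.
    for sep in ('#', '/'):
        i = uri.rfind(sep)
        if i > len('http://'):
            return uri[:i + 1], uri[i + 1:]
    return uri, ''

def two_pass_serialize(read_lines, out):
    # read_lines is a zero-argument callable returning a fresh iterator
    # over N-Triples lines (e.g. lambda: open(path)), so the data can be
    # scanned twice without ever holding all triples in memory.
    # Pass 1: collect predicate namespaces only.
    prefixes = {}
    for line in read_lines():
        parts = line.split(None, 2)
        if len(parts) == 3 and parts[1].startswith('<'):
            ns, local = split_ns(parts[1].strip('<>'))
            if local and ns not in prefixes:
                prefixes[ns] = 'ns%d' % len(prefixes)
    for ns, px in sorted(prefixes.items()):
        out.write('@prefix %s: <%s> .\n' % (px, ns))
    # Pass 2: stream the triples through, abbreviating predicates with
    # the prefixes gathered in pass 1.
    for line in read_lines():
        parts = line.split(None, 2)
        if len(parts) == 3 and parts[1].startswith('<'):
            ns, local = split_ns(parts[1].strip('<>'))
            if local and ns in prefixes:
                parts[1] = '%s:%s' % (prefixes[ns], local)
        out.write(' '.join(parts) + '\n')
```

Only the (small) table of prefixes lives in memory; the triples themselves are streamed through in both passes.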

Processing large RDF files is very problematic not only when writing, but
also when reading.  Perhaps the solution is not to use large files.  Instead,
use a lot of small files in one directory along with an RDF index file.  We
could use a convention that a directory of RDF files is indexed at
http://host/path/index.rdf and then define a simple schema for these index
files.
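As a rough illustration of what such an index might look like, here is a small generator for an index.rdf document. The `rdfindex` vocabulary (the `idx:Directory` class and `idx:part` property) is purely hypothetical; a real convention would need an agreed-upon schema:

```python
def build_index(base_url, filenames, ns='http://example.org/rdfindex#'):
    # Build a minimal index.rdf (RDF/XML) listing the small RDF files
    # in one directory.  The 'rdfindex' namespace and its terms are
    # invented for this sketch, not an existing vocabulary.
    entries = '\n'.join(
        '    <idx:part rdf:resource="%s%s"/>' % (base_url, f)
        for f in sorted(filenames))
    return (
        '<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"\n'
        '         xmlns:idx="%s">\n'
        '  <idx:Directory rdf:about="%sindex.rdf">\n'
        '%s\n'
        '  </idx:Directory>\n'
        '</rdf:RDF>\n' % (ns, base_url, entries))
```

A crawler could then fetch index.rdf first and pull the small member files one at a time, so neither side ever has to parse one huge document.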

Seth Russell
Received on Monday, 19 August 2002 14:01:58 UTC
