Fw: RDF tools as workhorse

I meant to do a reply-all with this email.

P.S. When I looked again at the ISO KB, about 3000 of its roughly
8000 lines are other material; the RDF/OWL part is about 5000 lines.

Dick McCullough
knowledge := man do identify od existent done;
knowledge haspart proposition list;
http://rhm.cdepot.net/
----- Original Message ----- 
From: "Richard H. McCullough" <rhm@volcano.net>
To: "Mailing Lists" <list@thirdstation.com>
Sent: Wednesday, September 14, 2005 6:38 AM
Subject: Re: RDF tools as workhorse


> I use the MKE/MKR system (click on link below my name),
> which is currently integrated with standard GDBM databases.
> My emphasis has been on simplicity and flexibility in the user
> interface.  The interface language, MKR, includes queries,
> n-ary relations, and methods.
> 
> I have not done much work on performance issues, but
> I am confident that they can be solved as well in an 
> MKR-based system as in any other system.
> 
> My biggest projects to date have been a Genealogy KB 
> of about 1000 persons, and an ISO standards KB 
> of about 8000 lines of RDF.
> 
> I also have an MKR interface to the Stanford TAP KB
> and the OpenCyc KB.
> 
> Dick McCullough
> knowledge := man do identify od existent done;
> knowledge haspart proposition list;
> http://rhm.cdepot.net/
> 
> ----- Original Message ----- 
> From: "Mailing Lists" <list@thirdstation.com>
> To: <semantic-web@w3.org>
> Sent: Tuesday, September 13, 2005 1:46 PM
> Subject: RDF tools as workhorse
> 
> 
>> 
>> Hi all,
>> 
>> Does anyone on the list have some real-world stories to share about 
>> using RDF and its tools as a backend technology?  The company I work 
>> for maintains a database of metadata.  I'd like to explore using RDF 
>> instead of our current schemas.
>> 
>> For example:   I have a lot of data about books.  I'd like to translate 
>> the data into RDF/XML and dump it into an RDF database.  Then, taking a 
>> particular book, I'd like to query the database to extract related 
>> information like: other books by the same author, other books with the 
>> same subject code, etc.
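>> 
>> As a rough sketch of what I have in mind (Python with rdflib, purely 
>> for illustration -- the book URIs and the use of Dublin Core 
>> properties are invented, not our actual schema):
>> 
>>     from rdflib import Graph, Namespace, Literal
>> 
>>     DC = Namespace("http://purl.org/dc/elements/1.1/")
>>     EX = Namespace("http://example.org/books/")  # hypothetical
>> 
>>     g = Graph()
>>     # In practice we would load our exported data instead:
>>     # g.parse("books.rdf", format="xml")
>>     g.add((EX["b1"], DC["creator"], Literal("Jane Smith")))
>>     g.add((EX["b1"], DC["subject"], Literal("823")))
>>     g.add((EX["b2"], DC["creator"], Literal("Jane Smith")))
>> 
>>     # Other books by the same author as b1
>>     q = """
>>         SELECT ?other WHERE {
>>             ex:b1  dc:creator ?author .
>>             ?other dc:creator ?author .
>>             FILTER (?other != ex:b1)
>>         }
>>     """
>>     for row in g.query(q, initNs={"dc": DC, "ex": EX}):
>>         print(row[0])
>> 
>> In SQL that "same author" question would mean joining across several 
>> tables; here it is just two triple patterns over one graph.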
>> 
>> My concerns relate to:
>> 1) Performance -- Right now we query the database using SQL.  Sometimes 
>> it is _very_ slow.  That's mainly because the data is distributed 
>> across tables and there are a lot of joins going on.  It seems like 
>> RDF would let us write simpler queries.
>> 
>> 2) Scalability -- Our triplestore would be HUGE.  I'd estimate 10-20 
>> Million triples.  Is that small or large in RDF circles?
>> 
>> 3) Productivity -- It's usually easier for me to envision creating RDF 
>> from our source data than massaging the data to fit into our database 
>> schema.  The same goes for extracting data: it seems like it would be 
>> much easier to express my query as a triple using wildcards for the 
>> data I want (a rough sketch follows below).
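>> 
>> (By "a triple using wildcards" I mean roughly the pattern below -- 
>> continuing the rdflib sketch above, with an invented subject code:)
>> 
>>     # All books carrying a given subject code; None is the wildcard.
>>     for book, _, _ in g.triples((None, DC["subject"], Literal("823"))):
>>         print(book)
>> 
>>     # Everything we know about one particular book.
>>     for _, prop, value in g.triples((EX["b1"], None, None)):
>>         print(prop, value)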
>> 
>> Any information will be helpful.  I'm interested in learning from other 
>> people's experiences.
>> 
>> Thanks,
>> Mark
>> 
>> ..oO  Mark Donoghue
>> ..oO  e: mark@ThirdStation.com
>> ..oO  doi: http://dx.doi.org/10.1570/m.donoghue
>

Received on Thursday, 15 September 2005 11:56:18 UTC