W3C home > Mailing lists > Public > www-rdf-interest@w3.org > October 2000

Re: An RDF modelling engine

From: Alex Muc <alex.muc@utoronto.ca>
Date: Wed, 11 Oct 2000 10:19:04 -0400
Message-ID: <39E476D8.298C6B8F@utoronto.ca>
To: www-rdf-interest@w3.org
Hi,

I apologize for sending this message twice, but wouldn't you know it, my
internet connection went down about 5 minutes after I sent the message.  If
you are interested, you can try the link again; it should now be accessible.

Thanks
Alex.

Alex Muc wrote:

> RDF Community,
>
> I'll cut to the chase first and then develop my thoughts a little
> later on in this message.
>
> I have developed a relatively simple and small web-based RDF triple
> modelling engine.  I have been using it to capture and modify RDF models
> (schemas and instances).  The models in my current system are about web
> pages and various other entities.
> You can take a look at the current state of the system at the following URL:
>
> http://24.112.129.151:8080/metadata/index2.jsp
> I'd appreciate any comments or questions that anyone has about what I've
> done.  If anyone is interested, I'd consider open-sourcing the code as
> well.
>
> Now some background:
> I had the opportunity to work for the World Health Organization this
> summer in Geneva, Switzerland in their information management
> department.  While there I was given the task of evaluating various
> methods for helping them build what they called "a top-level
> navigation layer" for their website.  The navigation layer was supposed
> to be XML-based.  Their current website has developed in a relatively
> ad hoc way over the last 5 or so years, and it is at the point where
> those who know anything about it consider it an unmanageable mess.  You
> can take a look at it if you want: visit www.who.int.  Anyways, the
> project I was working on was building this "top-level navigation layer"
> which could be put in place on the site as an interim measure while they
> considered what to do next to build a better website.  The project never
> made it to completion; it never got past what I would call a
> proof of concept, and that is what the above link will show you.  Sorry
> for all this background info, but I figured a little contextual
> information might help.
>
> The engine itself:
> - It is essentially an RDF triple editor.
> - It uses Sergey Melnik's RDF API as a base set of objects.
> - It is written in Java and JSP and makes some use of Cocoon (a
> web-based XML publishing engine; visit xml.apache.org/cocoon).
> - It uses an SQL back end for persistence.  The data model is the one
> proposed by Brian McBride on Sergey Melnik's "Storing RDF in a
> relational model" page at
> http://WWW-DB.Stanford.EDU/~melnik/rdf/db.html, except it doesn't use
> the SQL "views" feature.
> - It started off as a simple RDF triple editor but as development has
> progressed more and more of that has been pushed into the background.
> - The system has the ability to import arbitrary SQL data sets.  This
> was a requirement for what I was doing because WHO has a lot of
> metadata-like data lying around in various SQL servers which they
> thought should be in a system such as this.  The importer basically
> represents an SQL table as a class and its rows as instances, but it
> also has the ability to handle parent-child (look-up table)
> relationships and things like that.  You can try this out if you want.
> - There is a simple RDF Class browser and editor which is built and
> driven completely dynamically, depending on the resource that you are
> working with.
> - There is a simple example, with relatively random data, of the
> "top-level navigation layer" which the project was ultimately supposed
> to produce.  This is really an example of "acting on" the RDF data of
> the system.  Much of the work I had been doing was to capture RDF
> data; this example was a "use" of that data.
> - By the end of summer the system I had developed contained about 10,000
> triples, 1000 resources and 2000 literals.  It was running very happily
> with a MySQL database backend on a PIII 550.
> - The user interface is very ugly at this point.  I focused my efforts
> on making it do something useful and never got around to making it look
> "pretty".  I'm hoping you'll all be able to look beyond that.
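To make the storage model above concrete, here is a minimal sketch of the triple-table idea in modern Java. The class and method names are my own invention, not the engine's actual code (which sits behind Sergey Melnik's API with an SQL table underneath): every RDF statement is one (subject, predicate, object) row, and a query is just a row filter with optional wildcards, much like an SQL WHERE clause that omits some columns.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a triple store: one "row" per RDF statement.
public class TripleStore {
    // One RDF statement, mirroring a (subject, predicate, object) table row.
    record Triple(String subject, String predicate, String object) {}

    private final List<Triple> rows = new ArrayList<>();

    void add(String subject, String predicate, String object) {
        rows.add(new Triple(subject, predicate, object));
    }

    // A null argument acts as a wildcard for that column.
    List<Triple> find(String subject, String predicate, String object) {
        List<Triple> matches = new ArrayList<>();
        for (Triple t : rows) {
            if ((subject == null || t.subject().equals(subject))
                    && (predicate == null || t.predicate().equals(predicate))
                    && (object == null || t.object().equals(object))) {
                matches.add(t);
            }
        }
        return matches;
    }

    public static void main(String[] args) {
        TripleStore store = new TripleStore();
        store.add("http://www.who.int/", "dc:creator", "Steve");
        store.add("http://www.who.int/", "dc:title", "WHO Home Page");
        // All statements about one subject: filter on the subject column only.
        System.out.println(store.find("http://www.who.int/", null, null).size());
    }
}
```

In the relational version described above, the same shape becomes a single statements table and the wildcard lookups become indexed SELECTs over its columns.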
>
> If you've made it this far you may actually be interested in this
> project.  I am currently back in school finishing my degree and as a
> result I do not have too much time to work on this.  That being said, I
> would happily open-source what I've got so far so that others could work
> on it.  I also have some computing resources which could be thrown at
> the project to actually provide it a home.
>
> Feel free to poke around in the system and add/delete/modify any of the
> data (I have backup copies).  You can try the SQL import facility if
> you want and can figure it out; it's pretty self-explanatory.  You can
> add classes and instances to your heart's content.  You can also add
> properties to classes dynamically and they should appear in the
> instances immediately, although, as I recall, I was having some caching
> problems with this.  If you make changes and certain things don't seem
> to be appearing correctly, visit this page:
> http://24.112.129.151:8080/metadata/dbStatus.jsp and click on the "Clear
> All Local Caches" link.
>
> Some interesting starting points:
> http://24.112.129.151:8080/metadata/resource.jsp?rID=26 - Metadata for
> the WHO home page.
> http://24.112.129.151:8080/metadata/resource.jsp?rID=364 - SQL import
> specification described in RDF.
> http://24.112.129.151:8080/metadata/resource.jsp?rID=25 - Metadata about
> Steve, the creator of the home page.
>
> Long-term stuff:
> As a proof of concept I think this engine is really quite useful.  It
> pulls together a bunch of RDF work done by other people -- the W3C
> recommendations, Sergey Melnik's API, a persistence model implementation
> -- and actually demonstrates something which is borderline-useful.
> Other projects that I know of along these lines are the Protege project
> developed at Stanford and the WRAF project.  In the long term I would
> see the work I developed becoming something like the Protege project,
> although RDF- and web-based from the ground up.  I haven't experimented
> with WRAF so I can't compare it to that.  My impression is that RDF was
> initially developed for webpage metadata, but it is sufficiently general
> to model entire systems (data, objects, interconnects, etc.) and I think
> a tool which could do that would be very useful.  I may be wrong though.
>
> Anyways, I hope some of you have a few minutes to try it out.  Again, any
> questions or comments are welcome.
> Thanks
> Alex Muc.
Received on Wednesday, 11 October 2000 10:20:37 GMT