- From: Jonas Liljegren <jonas@liljegren.org>
- Date: 29 Sep 2000 09:50:10 +0200
- To: Rahul Dave <rahul@reno.cis.upenn.edu>
- Cc: www-rdf-interest@w3.org, rdf@uxn.nu
Rahul Dave <rahul@reno.cis.upenn.edu> writes:

> > Wraf implements an RDF API that hopes to realize the Semantic
> > Web. The framework uses RDF for data, user interface, modules and
> > object methods.
>
> Very interesting.
>
> I find the usage of RDF for object method and property representation
> particularly intriguing. It can lead to the creation of an object model
> with replaceable implementations of web services and property accessors.
>
> I rambled a bit about this
> in http://www.egroups.com/message/decentralization/237.

Yes. That's what we have done here.

> The key point there is that an object model defined in RDF is extensible, and
> also queryable.

The module is called RDF::Service because it's intended to function as a
service daemon for requests.

The key point here is that you can dynamically plug in new interfaces.
Each interface can define new methods for resources of specific types.
Each interface can be specifically designed for a certain information
source and/or provide specialized methods for influencing something.

There will be general and specialized interfaces to other RDF services
and internet content. That means that a method call can be sent from one
service to another.

The interfaces register the things they handle. The service dispatcher
then sends each call to the right destinations. And since the
interfaces, modules and functions are themselves resources, they can be
transparently imported when they are needed.

Let's say that a service tries to use an interface that doesn't exist on
the server. The interface can be found, automatically downloaded and
stored in the local cache, ready for execution. The same goes for new
methods for a specific interface. This will result in completely
transparent software upgrades.

> So newer methods, or services can be added by web sites
> to user's existing objects..and metadata can be combined across applications.

Yes.
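(The actual RDF::Service is Perl; purely as an illustration of the idea
described above -- interfaces registering the resource types they
handle, a dispatcher routing calls, and a lazy loader fetching missing
interfaces into a local cache -- here is a rough Python sketch. All
names in it are invented for the sketch, not taken from Wraf.)

```python
# Hypothetical sketch of type-based dispatch with lazy interface loading.
# An "interface" here is just a dict mapping method names to callables;
# a "resource" is a dict carrying its URI and its RDF types.

class Dispatcher:
    def __init__(self, loader=None):
        self.interfaces = {}   # type URI -> {method name: callable}
        self.loader = loader   # optional hook that fetches a missing interface

    def register(self, rdf_type, methods):
        # An interface announces which methods it handles for a given type.
        self.interfaces.setdefault(rdf_type, {}).update(methods)

    def dispatch(self, resource, method, *args):
        for rdf_type in resource["types"]:
            impl = self.interfaces.get(rdf_type, {}).get(method)
            if impl is None and self.loader is not None:
                # Interface not present locally: fetch, cache, retry.
                fetched = self.loader(rdf_type)
                if fetched:
                    self.register(rdf_type, fetched)
                    impl = fetched.get(method)
            if impl is not None:
                return impl(resource, *args)
        raise LookupError("no interface handles %r" % method)

# Usage: a tiny interface for resources typed as rdfs:Class.
d = Dispatcher()
d.register("rdfs:Class", {"label": lambda r: r.get("label", r["uri"])})
res = {"uri": "ex:Person", "types": ["rdfs:Class"], "label": "Person"}
print(d.dispatch(res, "label"))   # -> Person
```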
And with a little intelligent division of the service work, a complex
system could be distributed across several servers.

> > It uses interfaces to other sources in order to
> > integrate all data in one environment, regardless of storage form.
>
> Caching stored metadata obtained from the multiple sources will be needed
> to have efficiency. Any thoughts on whether a generic RDF db like RDFDB or
> some sort of object database which captures the model at a coarser granularity
> than individual triples is favorable?

The Wraf project tries to gain a little efficiency by optimizing for the
common cases. RDFDB seems to be fast in its simplicity. But we have a
custom DB interface to gain flexibility.

From an earlier message in the Wraf mailing list archive:

What RDFDB gives:
* import/unimport of RDF XML files
* insert/delete of triples
* a basic query language

What RDFDB misses:
* It does not bind statements to models
* It does not make the 'fact' shortcut for reified statements
* No alias mechanism
* No container optimization
* No support for URI prefixes
* No support for blobs
* No URI assignment for statements
* No RDFS methods
* No checking of duplicate statements

The experience with the RDF Schema editor
http://jonas.liljegren.org/perl/proj/rdf/schema_editor/ made us want to
optimize for the most common case. That is: the really heavy use of type
and subClassOf, and the constant use of label in the development
environment.

RDFDB does make it easy to import RDF data and to make queries. But it
will not be a good place to store the bulk of the data from Wraf.

So: I would like to have an interface to RDFDB. It may make some queries
a little easier. But it doesn't cut back on the work we have to do.
RDFDB is too limited to be used as the main storage interface. It should
only be used with simple static data.

-- 
/ Jonas  -  http://jonas.liljegren.org/myself/en/index.html
Received on Friday, 29 September 2000 03:47:06 UTC