
Re: data integration - how to make a portal - SUMMARY

From: Kingsley Idehen <kidehen@openlinksw.com>
Date: Wed, 25 Jul 2007 11:26:56 -0400
Message-ID: <46A76BC0.4060009@openlinksw.com>
CC: W3C SWEO IG <public-sweo-ig@w3.org>

Leo Sauermann wrote:
> Hi SWEOs,
> a small summary of the portal discussion happening on the mailing list 
> and in today's telco:
> Kingsley put it very well in the telco, saying that we should 
> focus on the deliverable we have at the moment: a SPARQL endpoint 
> which gathers data about the Semantic Web (see wiki page [HowtoPublish]).
> Based on this deliverable, there is a second step: people who run 
> portals can take this data and integrate it into their portals (see 
> wiki page [PortalPlans] for our own planned portal).
> But anybody who already runs a portal can integrate the data. And we 
> should continue experimenting with how the data can be visualized, 
> for example by copying it into a Longwell browser.
> Also, a colleague of mine offered to use his e-learning portal 
> software ALOE for SWEO, and Susie found somebody from California (?) 
> who may run such a portal.
> Anyway, a portal will drag away much of our resources if we do it 
> ourselves (see [PortalPlans]): we would need to set up a server, write 
> software, and build a synchroniser or ontology-alignment tooling for 
> the data; write, code, work, etc.
> It also appeared to me that we SWEO members may not make this happen 
> until somebody really says: "I am going to do that." There is a chance 
> we can convince external people to do it (for a list of possible 
> candidates, see [DataSources], section "Information on Communities and 
> Outreach Organizations").
> For example, the Semantic Web School Austria guys are such people we 
> may be able to convince.
> [HowtoPublish] 
> http://esw.w3.org/topic/SweoIG/TaskForces/InfoGathering/HowtoPublish
> [PortalPlans] 
> http://esw.w3.org/topic/SweoIG/TaskForces/InfoGathering/PortalPlans
> [DataSources] 
> http://esw.w3.org/topic/SweoIG/TaskForces/InfoGathering/DataSources
> (Note to Kingsley: you wondered why Longwell can't use the SPARQL 
> endpoint. To my knowledge, conventional Longwell needs a whole dump of 
> the data to work; it was not possible to make Longwell run against a 
> SPARQL endpoint. For the Linked Data project there was a hack by Chris 
> and you guys, but I don't know the details, and probably Danny doesn't 
> know either whether Longwell can work on top of Virtuoso. Longwell 
> keeps its own indices and Lucene machinery; it has always been 
> standalone and not designed to integrate with your server.
> Does Longwell now run directly on Virtuoso? I remember Chris Bizer 
> saying that they wanted to do that and that you guys implemented 
> capable count() methods for Longwell...)
Yes, we've had SPARQL Aggregates [1] [2] implemented for a while now. 
This should, if used, enable Longwell to perform faceted browsing 
against large data sets (even remotely). But remember, Longwell needs 
to talk SPARQL first (local or remote).


1. http://docs.openlinksw.com/virtuoso/rdfsparqlaggregate.html
2. http://www.openlinksw.com/weblog/oerling/?id=1162

> On 22.07.2007 16:09, Kingsley Idehen wrote:
>> Danny Ayers wrote:
>>>> I am not convinced that Longwell will deliver the ease of 
>>>> production you
>>>> espouse re. this portal effort. That said, I am extremely willing 
>>>> to be
>>>> convinced otherwise, by seeing it accelerate the portal effort :-)
>>> Would I be right in thinking there's already a pile of data gathered?
>>> (Sorry, not been following very well). If so, if you can point me to
>>> an RDF/XML dump, I can have a play with Longwell on it, see what it
>>> looks like...
>>> Cheers,
>>> Danny.
>> Danny,
>> See:
>> 1. http://www.w3.org/2001/sw/sweo/public/Info/ (main page)
>> 2. http://esw.w3.org/topic/SweoIG/TaskForces/InfoGathering/WishList
>> Which will reveal:
>> 1. SPARQL Endpoint
>> 2. iSPARQL QBE Endpoint
>> 3. Sample Queries for Listing Graphs (in Dynamic Linked Data Page and 
>> SPARQL Query Definition forms)
>> Note: The server will protect itself from a CONSTRUCT aimed at dumping 
>> the entire data store via SPARQL. The plan was to produce a separate 
>> RDF archive for all the contents of this store (as per "Open Data" 
>> best practices).



Kingsley Idehen	      Weblog: http://www.openlinksw.com/blog/~kidehen
President & CEO 
OpenLink Software     Web: http://www.openlinksw.com
Received on Wednesday, 25 July 2007 15:27:05 UTC

This archive was generated by hypermail 2.3.1 : Tuesday, 6 January 2015 20:28:57 UTC