
Re: [Information Gathering] next steps: syndication, good weblocation

From: Lee Feigenbaum <feigenbl@us.ibm.com>
Date: Sat, 31 Mar 2007 18:17:48 -0400
To: public-sweo-ig@w3.org
Message-ID: <OFC7B41CA1.736915D7-ON852572AF.007995E3-852572AF.007A7A7B@us.ibm.com>

Kingsley Idehen wrote on 03/31/2007 05:27:47 PM:
> Lee Feigenbaum wrote:
> > Kingsley Idehen wrote on 03/31/2007 11:45:01 AM:
> >> Lee Feigenbaum wrote:
> >>> Kingsley Idehen wrote on 03/30/2007 09:51:21 AM:
> >>>> Leo Sauermann wrote:
> >>>> 
> >> Lee,
> >>
> >> I don't think Leo is as far apart from your view as the commentary may 
> >> imply.
> > 
> >> As you know, there are a plethora of routes to building intuitive 
> >> front-ends to RDF once you have SPARQL Endpoints. Personally, I 
> >> would suggest (the position I've always held) that we all build 
> >> front-ends to the SWEO aggregated RDF Data Sources. Ultimately, the 
> >> same should also apply to the actual server collections; we should have 
> >> server mirrors from the likes of IBM and Oracle along the same lines as 
> >> what OpenLink is offering (via the Virtuoso base RDF store). We have to 
> >> practice what we preach at every turn; loose federation of RDF Data is 
> >> an essential part of this bigger picture :-)
> >>
> >> Once we have the SPARQL Endpoint live, please proceed in the manner 
> >> you've suggested re. Exhibit. It would also be nice to see Boca 
> >> host the RDF data as well.
> >>
> >> To conclude, I violently agree!
> >> 
> >
> > Hi Kingsley,
> >
> > While I'm glad that you think you violently agree with me, your above 
> > comments don't reflect my opinions at all... so let me try again.
> >
> > From an _education and outreach_ point of view:
> >
> > I don't care *at* *all* where the data is hosted. I don't care at all who 
> > owns the domains, whose software runs the store, whose SPARQL endpoints 
> > are used, or whether the data is completely decentralized and federated 
> > or whether it's aggregated into a single store. To me, these are details, 
> > the results of which have almost no effect on the success of an education 
> > effort around Semantic Web information resources.
> >
> > I also don't care whether there are 4 or 5 or 10 or 1 user interfaces 
> > that consume the data; instead, I care that there is *one* *good* and 
> > easily accessible way of getting at the data, that doesn't require the 
> > user to know anything at all about the Semantic Web--or to have any 
> > assumed level of technical competence--to benefit from it.
> >
> > What I care about and think is important for our education and outreach 
> > efforts is for us to do the work to identify what the cream of the crop 
> > of SemWeb information resources is, and then organize them based on 
> > which ones are most useful for which types of people. To do this, I 
> > believe we need to augment the existing information resources with:
> >
> > a/ some way to identify the best (this could be digg.com-style voting, 
> > google-style rankings (I don't think we need that level of complexity), 
> > or even just simple "best of breed" flags)
> > 
> Lee,
> What stops any of the above being produced via an (X)HTML page with 
> dynamic binding to the relevant sources? With the relevant flags? What 
> stops the flagging or any other categorization from being part of the 
> data source? We do have Review Ontologies, for instance, with slots that 
> enable us to produce review pages as demonstrated by: http://revyu.com .
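[For illustration only: the flag-in-the-data approach Kingsley describes could look roughly like the following SPARQL query. The `sweo:` vocabulary, its `bestOfBreed` flag, and its `targetAudience` slot are all hypothetical here -- no such terms were agreed by the group.]

```sparql
# Hypothetical query against an aggregated SWEO endpoint:
# pull the "best of breed" resources flagged for newcomers.
# The sweo: terms are invented for this sketch.
PREFIX sweo: <http://example.org/sweo#>
PREFIX dc:   <http://purl.org/dc/elements/1.1/>
PREFIX xsd:  <http://www.w3.org/2001/XMLSchema#>

SELECT ?resource ?title
WHERE {
  ?resource dc:title            ?title ;
            sweo:bestOfBreed    "true"^^xsd:boolean ;
            sweo:targetAudience "newcomer" .
}
```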

OK, I'll try one last time, since apparently I'm completely failing to 
communicate my message: Nothing stops any of the above being produced with 
any of a number of technologies/architectures, including "an (X)HTML page 
with dynamic binding to the relevant sources." 

But, in my opinion, the architecture and technologies used do not matter at 
all (or matter very little) if we are not going to take the further steps 
of:

1/ generating the ranking/rating/flagging data
2/ generating the facet data on audience and domain/industry
3/ producing a user-friendly Web interface to the data
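[A minimal sketch of what steps 1-3 amount to, using made-up sample data rather than anything SWEO actually collected: resources annotated with a rating, a best-of-breed flag, and an audience facet, filtered for one kind of reader.]

```python
# Hypothetical sample data: each resource carries the rating/flag data
# (step 1) and an audience facet (step 2); best_for() is the kind of
# filtering a user-friendly interface (step 3) would do behind the scenes.
resources = [
    {"title": "RDF Primer",        "audience": "newcomer",
     "rating": 9, "best_of_breed": True},
    {"title": "SPARQL Spec",       "audience": "developer",
     "rating": 8, "best_of_breed": True},
    {"title": "Obscure Wiki Page", "audience": "newcomer",
     "rating": 2, "best_of_breed": False},
]

def best_for(audience, resources):
    """Return best-of-breed resources for one audience, highest-rated first."""
    picks = [r for r in resources
             if r["audience"] == audience and r["best_of_breed"]]
    return sorted(picks, key=lambda r: r["rating"], reverse=True)

print([r["title"] for r in best_for("newcomer", resources)])
# prints ['RDF Primer']
```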

> > b/ appropriate predicates and editorial work to associate information 
> > resources with the appropriate audience that each is aimed at (both on a 
> > technical-capability level and on an industry/domain level)
> >
> > 
> If we can associate RDF data with preferred display controls (as 
> demonstrated by Tabulator and our RDF Browser), then why can't we do 
> exactly the same thing as part of the production of a portal, where the 
> filtering is based on the target audience and the data for the filtering 
> is in the source data?

We can't do it if people don't want to do it as part of SWEO's work. 
Kingsley, this isn't a technical argument I've been making (it never has 
been!). It's an argument as to the work SWEO should undertake to 
successfully use the data to accomplish education and outreach goals. Leo 
wrote:

And we can then encourage independent 3rd parties to aggregate the data 
and provide the interface

I asked Susie for her thoughts about this and she proposes exactly this: 
stick to information gathering.

Making a web interface that is user-friendly (especially newbie friendly) 
and is managed by W3C is tricky, because W3C is a technology 
standardization body and not an education body.

...which is what I've been disagreeing about all along. Your messages have 
focused on technology details, which to me seem completely orthogonal to 
the messages I've been sending.

Received on Saturday, 31 March 2007 22:17:53 UTC
