RE: updated InfoGathering, proposing a portal as a solution - do you agree on a portal?

Great ideas, Benjamin, but I'm not sure we (I) want to see more converters
and wrappers. The time it takes to do all of this could equal the time it
takes to create something from scratch.

It will have to be maintained by the W3C, a university, or a commercially
driven organisation that stands to benefit in the long run.

In my opinion, option 1 isn't ideal because the W3C isn't seen as
technology-independent enough in this space - ironic, I know. I'm talking
about perception rather than fact. Option 2 isn't ideal because the process
for change may be too laborious. Option 3 - any takers?

Segala is doing this specifically for Content Labels - that is, a portal for
industry to create new codes of conduct. However, we need a more generic
portal for the Semantic Web. Depending on who collaborates and how much time
they dedicate, Segala will volunteer to build and maintain a portal. I can
dedicate a designer for brand creation and a developer for the CMS and build.

Perhaps it could be funded through sponsorship.

Cheers
Paul



> -----Original Message-----
> From: public-sweo-ig-request@w3.org [mailto:public-sweo-ig-request@w3.org]
> On Behalf Of Benjamin Nowack
> Sent: 19 February 2007 13:20
> To: W3C SWEO IG
> Cc: Leo Sauermann
> Subject: Re: updated InfoGathering, proposing a portal as a solution - do
> you agree on a portal?
> 
> 
> 
> Hmm, this looks a lot like the thing I'm planning to do with rdfer.com,
> and although I'm not sure how much fun it would be to compete with
> myself, I could contribute some hours to the layout/design,
> front-end scripting, or things like that.
> 
> For some (under)estimates: It took me a year (of evening hours) to
> build the 2005 platform for semanticweb.org. AFAIK, DERI is now going
> to rebuild the portal with tweaked standard tools (drupal, mediawiki,
> etc), but this doesn't seem to be much faster to do either. Setting
> up rdfer.com was easier as I'm using a SPARQL platform now, but
> coding the user-facing tools still takes time. Likewise with
> vocab mix decisions: Although the base vocabs are mostly there,
> it can take ages to agree on a use-case-specific subset (see e.g.
> the w3photo project, or calendaring).
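> 
> To make "vocab mix" concrete, here is a minimal sketch of one possible
> subset (DOAP + FOAF + Dublin Core) for describing a tool -- the tool,
> maintainer and URL are made up, and this is just one of several
> defensible mixes, which is exactly the problem (Python with a current
> rdflib, as an illustration only):
> 
>   from rdflib import Graph
> 
>   # hypothetical description of a tool, mixing DOAP, FOAF and DC
>   data = """
>   @prefix doap: <http://usefulinc.com/ns/doap#> .
>   @prefix foaf: <http://xmlns.com/foaf/0.1/> .
>   @prefix dc:   <http://purl.org/dc/elements/1.1/> .
> 
>   <http://example.org/tools/mytool> a doap:Project ;
>       doap:name "MyTool" ;
>       dc:description "A fictional RDF browser." ;
>       doap:maintainer [ a foaf:Person ; foaf:name "Jane Doe" ] .
>   """
> 
>   g = Graph()
>   g.parse(data=data, format="turtle")
>   print(len(g), "triples")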
> 
> Given that there are obviously a lot of groups working on portals
> already, maybe we should first focus on providing an aggregated data
> stream, which would still be enough work (deciding on a recommended
> vocab mix, writing wrappers/converters, a service to add
> descriptions/sources, etc), and see if the market will build the
> UIs itself? This could perhaps lead to a solution with near-zero
> maintenance costs and could even encourage inter-service
> collaborations (e.g. "piping" the mkbergman list through a DOAP XSLT
> (Danny may have made one already), then adding ratings via revyu.com,
> aggregating those in a semantic bank, and making the result
> explorable via a SPARQL-based faceted browser, etc.).
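> 
> A rough sketch of what one segment of such a pipe could look like
> (Python with rdflib; the source URLs, and the assumption that each
> source is already exposed as RDF, are placeholders for whatever gets
> registered):
> 
>   from rdflib import Graph
> 
>   # hypothetical registered sources, each already available as RDF
>   SOURCES = [
>       "http://example.org/tools.rdf",    # e.g. mkbergman list via a DOAP XSLT
>       "http://example.org/reviews.rdf",  # e.g. ratings from revyu.com
>   ]
> 
>   g = Graph()
>   for url in SOURCES:
>       g.parse(url)  # aggregate everything into one graph
> 
>   # an agreed vocab mix is what makes queries like this possible:
>   q = """
>   PREFIX doap: <http://usefulinc.com/ns/doap#>
>   SELECT ?name ?homepage WHERE {
>     ?p a doap:Project ; doap:name ?name ; doap:homepage ?homepage .
>   }
>   """
>   for name, homepage in g.query(q):
>       print(name, homepage)
> 
> A faceted browser or a feed generator would then just be one more
> consumer of the same aggregated graph.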
> 
> However (and putting conflicting interests aside), I think a
> portal-like info service would be great.
> 
> Benjamin
> 
> 
> 
> 
> On 19.02.2007 12:01:40, Leo Sauermann wrote:
> >Hi Sweo,
> >
> >Before answering inline, the most important answer first: the manpower,
> >measured in FTE (full-time equivalents), that is needed to make this
> >happen.
> >
> >I agree with Ivan that this is the crucial point. I would guess that the
> >work to set up the page would take roughly 3 person-months (including
> >first content, programming the RDF importer, web design, writing
> >advertisement newsletters, bug fixes, and first user support).
> >So for a SWEO member (working one half-day per week, which is roughly
> >1/10 of an FTE; 3 person-months / 0.1 = 30 months), this would take
> >about two and a half years.
> >
> >After the website is up and running, it would take considerably less
> >effort to maintain.
> >Compare schemaweb.info: the person running it spends about 0.1 FTE
> >on it.
> >
> >Looking at this, it may be good to have at least three or four people
> >responsible and willing to invest some work over the coming months.
> >
> >Also, external contributors can be asked. I did a similar web project
> >collecting RDF tools with the Semantic Web School Austria (Susie is also
> >in contact with them regarding something completely different), and
> >Chris Bizer and Richard Cyganiak (and people around them) are planning
> >a related website (a collection of how-tos), so it may be possible for
> >us to get additional support from outsiders - given that we can attract
> >them with a "win-win" situation (business speak, argh). That would mean
> >some kind of attribution, such as "portal made in cooperation by W3C
> >and BLAH.com".
> >
> >
> >
> >And it came to pass that Ivan Herman, at the appointed time of
> >17.02.2007 11:05, wrote the following:
> >>>> "Which brings Swoogle [1] to mind.  One test of SWEO's thinking,
Ivan,
> >>>> is to ask them what Swoogle is not doing well enough or should be
doing
> >>>> better -- i.e. how this new proposal-under-development would improve
on
> >>>> Swoogle." [1] http://swoogle.umbc.edu/ (thinking of the possibility
of
> >>>> people just 'publishing' their data in some vocabularies, and let
> >>>> existing crawlers pick up that data)
> >>>>
> >>>>
> >>>>
> >>> Swoogle provides only one thing:
> >>> search.
> >>>
> >>> the portal will provide the things you would typically find on
> >>> portals such as this one:
> >>> http://www.xmlhack.com/
> >>>
> >>>
> >>
> >> I think the reaction came from your use of the term 'crawler' on the
> >> page. Actually, it may be an idea to use the crawler of Swoogle...
> >>
> >If we go for the model that we only crawl URLs that were manually added
> >to the system, and we have to do a conversion from RDF to the content
> >management system (for example, from RSS to drupal), then Swoogle may
> >just add a possible point of failure.
> >
> >Reading a list of URLs and parsing the RDF should be manageable; the
> >tricky part is change detection. But still, I think Swoogle is not
> >needed (if we only import data from URLs registered to the portal).
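> >
> >To illustrate, a rough sketch of such an import loop (the URL and the
> >CMS step are placeholders; hashing the fetched bytes is the simplest
> >change check, honouring HTTP ETag/Last-Modified headers would be the
> >politer one):
> >
> >  import hashlib
> >  import urllib.request
> >
> >  registered_urls = ["http://example.org/feed.rdf"]  # added manually
> >  seen = {}  # url -> hash of the last imported version
> >
> >  def import_into_cms(url, data):
> >      # placeholder for the actual RSS/RDF -> CMS conversion step
> >      print("importing", url, len(data), "bytes")
> >
> >  def poll():
> >      for url in registered_urls:
> >          data = urllib.request.urlopen(url).read()
> >          digest = hashlib.sha1(data).hexdigest()
> >          if seen.get(url) == digest:
> >              continue  # unchanged since the last poll, skip
> >          seen[url] = digest
> >          import_into_cms(url, data)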
> >
> >>
> >>> or here, a mockup of what I have in mind
> >>> http://www.dfki.uni-kl.de/~sauermann/2007/02/sweomockup/
> >>>
> >>>
> >>>> In general, once we have a somewhat clearer idea of the architecture,
> >>>> and regardless of the details of the tools, we should certainly try
> >>>> to make a long-term assessment of what this beast would lead to:
> >>>>
> >>>> - what type and size of traffic we would expect on such portal
> >>>>
> >>>>
> >>>>
> >>> ~5 people editing per day
> >>> ~100 visitors per day; if it explodes, up to 10,000 visits per day
> >>> (depends on whether the Semantic Web is a success).
> >>>
> >>> About 5,000 people subscribing to the RSS feeds (everyone from our
> >>> community)! This could be heavy, but it is easy to cache.
> >>>
> >>>
> >>
> >> These are important figures. We should keep them in mind before we try
> >> to convince any physical location (W3C or otherwise) to host the
> >> service.
> >>
> >ok, so my figures were just made up.
> >If anybody can guess better, please update the wiki...
> >
> >>
> >>
> >>>> - what type of update frequency is needed
> >>>>
> >>>>
> >>>>
> >>> From us, not much. We should attract people to edit the data
> >>> themselves. One update per week may be enough.
> >>>
> >>>
> >>>> - what is the necessary manpower requirement to keep it up-to-date
> >>>>
> >>>>
> >>>>
> >>> Probably one person; hopefully part-time is enough.
> >>>
> >>>
> >>
> >> So, here comes the dirty question: who will be that person once SWEO is
> >> over? Do we expect the W3C staff to keep that running?
> >>
> >> [Leo, sorry if I sound pushy with these remarks, but I think we have to
> >> be *very* clear with all these details before we move on. *Nothing*
> >> personal or against the project!]
> >>
> >answer above!
> >
> >>
> >>>> - what type of extra facility is necessary (eg: you say we would have
> >>>> some sort of a login facility for people making comments and rating;
> >>>> what type of extra facility would we need for that? OpenID, etc?)
> >>>>
> >>>>
> >>>>
> >>> Normal signup: give me your e-mail address, enter a username/password,
> >>> and you are in.
> >>> Like any Web 2.0 app. Nobody uses OpenID there.
> >>>
> >>>
> >>
> >> Let us put OpenID aside for the time being (I would love to see OpenID
> >> more widely used, but that is another matter). That means that the
> >> infrastructure you will have for the site will need to provide these
> >> features as well. Just pointing that out.
> >>
> >Yes, that's why I also want to reuse existing code. I had thought of
> >reusing just the user management part of a CMS, but Danny's idea to
> >reuse a complete CMS is better.
> >
> >best
> >Leo
> >
> >
> >--
> >____________________________________________________
> >- DFKI bravely goes where no man has gone before -
> >We will move to our new building by end of February 2007.
> >
> >The new address will be as follows:
> >    Trippstadter Straße 122
> >    D-67663 Kaiserslautern
> >
> >My phone/fax numbers will also change:
> >Phone:    +49 (0)631 20575 - 116
> >Secr.:    +49 (0)631 20575 - 101
> >Fax:      +49 (0)631 20575 - 102
> >Email remains the same
> >____________________________________________________
> >DI Leo Sauermann       http://www.dfki.de/~sauermann
> >Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH
> >Trippstadter Strasse 122
> >P.O. Box 2080          Fon:   +49 631 205-3503
> >D-67663 Kaiserslautern Fax:   +49 631 205-3472
> >Germany                Mail:  leo.sauermann@dfki.de
> >____________________________________________________
> >Geschaeftsfuehrung:
> >Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
> >Dr. Walter Olthoff
> >
> >Vorsitzender des Aufsichtsrats:
> >Prof. Dr. h.c. Hans A. Aukes
> >
> >Amtsgericht Kaiserslautern, HRB 2313
> >____________________________________________________
> >
> >
> 

Received on Monday, 19 February 2007 13:49:27 UTC