Re: sws matchmaker contest

Hi all,

We, the organizers of the SWS Challenge, recognize the importance of
the discussion in this thread and would like to take the opportunity
to explain the nature and goals of the SWS Challenge, which are
substantially different from other challenges that are more like
contests.

First, we would like to invite all of you to have a look at our
current scenario for discovery [1] (as already pointed to by Emanuelle)
and the current CFP ([3], attached).

The current SWS Challenge is:

* A set of problems defined in terms of current technologies
  (WSDL descriptions plus documentation: language, diagrams, etc.)
* Every problem set is backed by a running testbed (i.e. Web Services)
* Every problem set comes with a built-in evaluation: whether the
  correct messages were exchanged and whether the problem was
  functionally solved correctly. For mediation, this means that the
  messages contained the correct data; for discovery, that the
  correct services were successfully invoked.
* Our workshop also includes a peer-review methodology to determine
  what changes are required to move from one problem to another. This
  methodology includes examination of paper claims against the
  actual code at the workshop.
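To picture the built-in evaluation, here is a minimal Python sketch of
the kind of check a testbed can perform on a solution's message log
(operation names and payloads here are hypothetical, not taken from the
actual SWS Challenge infrastructure):

```python
# Hypothetical sketch: the testbed records which service operations a
# solution invoked and with what data, then compares that log against
# the expected exchange for the problem.
EXPECTED_EXCHANGE = [
    ("searchCustomer", {"name": "Blue Inc"}),
    ("createOrder", {"customerId": "C042", "item": "widgets"}),
]

def evaluate(exchanged):
    """Return True iff the logged exchange matches the expected one:
    same operations, in order, each carrying the correct data."""
    if len(exchanged) != len(EXPECTED_EXCHANGE):
        return False
    for (op, payload), (exp_op, exp_payload) in zip(exchanged, EXPECTED_EXCHANGE):
        if op != exp_op or payload != exp_payload:
            return False
    return True

# A correct mediator produces exactly the expected exchange;
# a wrong payload or a missing call would make evaluate() return False.
log = [
    ("searchCustomer", {"name": "Blue Inc"}),
    ("createOrder", {"customerId": "C042", "item": "widgets"}),
]
print(evaluate(log))  # → True
```

The point of such a functional check is that it is agnostic to how the
solution was built: semantic or plain-code approaches are judged by the
same observable message behavior.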

The result is that each participant is certified to have reached a
particular level of problem solving, and assigned a success criterion
indicating the flexibility of the approach.

We view discovery/matchmaking as a complex problem that requires much
research into fundamental aspects of representation, which of course
has a bearing on performance but is more involved (as pointed out by
Tommasso, Terry, and Abraham). We are presenting a set of challenge
problems representing reasonable functionality, some of which is
difficult to achieve at all.

Moreover, we believe the challenge for the semantic research community
is to show that they can achieve useful functionality with less
programming effort than is currently required [2]. Since C
programmers can do anything we (semantic researchers) can do, with
the best performance, our goal is to show that we can increase the
*performance of the programmers* in meeting complex and changing
software requirements.

Research is required into how to actually make good representations: a
test set given in some formal semantic specification language would be
biased toward a specific approach (and would already partly provide a
solution to the overall challenge). So our specifications are more
general, and we leave it to the challenge participants to provide
formal representations. We encourage them to use the "best of
breed".

Indeed our challenge is open to all. In fact, C and Java programmers
with non-semantic approaches are welcome as well.

Because of these differences in intent, we did not arrange a concrete
collaboration with Mathias' matchmaking workshop. However, we believe
the community should not split; there should certainly be synergy
between these efforts. We offer the use of our open infrastructure
for the matchmaking contest, in any way that is useful.

regards
  SWS Challenge PC
  Charles, Michal, Holger


[1] http://sws-challenge.org/wiki/index.php/Scenario:_Shipment_Discovery
[2] "It's the Programming, Stupid", IEEE Internet Computing, "Peer to
Peer", May/June 2006.
 http://www-cdr.stanford.edu/%7Epetrie/online/peer2peer/w306.pdf
[3] http://sws-challenge.org/wiki/index.php/Workshop_Athens

-- 
Holger Lausen

Digital Enterprise Research Institute (DERI)
http://www.deri.org/

Tel:   +43 512 5076464
Mail: holger.lausen@deri.org
WWW: http://holgerlausen.net
-----------------------------------------------------------------
			Call for Participation
		 Semantic Web Services Challenge 2006
		    http://sws-challenge.org/
		  Third Phase - 10-11 November, 2006
			 Athens, Georgia, USA

The goal of the SWS Challenge is to develop a common understanding of
various technologies intended to facilitate the automation of
mediation, choreography and discovery for Web Services using semantic
annotations. 

This Challenge workshop seeks participation from industry and academic
researchers developing software components and/or intelligent agents
that have the ability to automate mediation, choreography and
discovery processes between Web Services.

Our approach is to test technologies on a set of common problems and
certify their functionality by a peer-review process. This process was
developed at the first phase of the workshop at Stanford University in
March and refined at the second phase in Budva, Montenegro in June.
The results of the first certifications are at
http://sws-challenge.org/wiki/index.php/Workshop_Budva

 The SWS Challenge is not a performance contest but rather a
functional certification process. You are invited to test your
technologies on some or all of these problems and be evaluated at the
workshop. Afterwards, you have permission to use our SWS Challenge
logo on your site and point to your evaluation results.
 
 To participate, sign up at the wiki, http://sws-challenge.org/,
and start working on the problems. You must demonstrate completion of
at least one problem in order to qualify for the workshop. Then, please
send a two-page description of your technology to Michal Zaremba
<michal.zaremba@deri.org> by 15 September. At the workshop, you will
present a full paper with your claims. Your paper and your code will
be evaluated by a group of workshop participants.

Received on Wednesday, 30 August 2006 08:14:36 UTC