Re: R: sws matchmaker contest

The overall goal of SWS is to enable automatic and dynamic service discovery, matchmaking, composition, and invocation.

My criterion for evaluating whether an approach can help realize the overall goal of SWS is whether it enables (or is at least relevant to) the dynamic invocation of Web services. If this final goal of SWS cannot be realized, I have to say there must be something inappropriate about the approach.

Dynamic invocation has already been characterized as follows: "without any reprogramming, a software system could have the flexibility to use various services that do the same kind of job but have different APIs" (Burstein 2004).

Tommaso simply assumed that "Once the ranked list of WSs is returned, a system has a criterion to automatically choose different WSs to be invoked in case of failure or unavailability of a WS". My concern is this: once his system automatically chooses a different WS that does the same job, how can it automatically invoke this "similar" WS without ANY reprogramming?
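For concreteness, here is a minimal sketch of what description-driven invocation could look like. None of this is from Tommaso's system; the description format, endpoints, and services are invented for illustration. The point is that the client builds each call at run time from a service description, so switching to a "similar" WS means supplying a different description rather than writing new glue code.

# Minimal sketch of description-driven ("dynamic") invocation.
# The service descriptions below are hypothetical; a real system would
# derive them from the matchmaker's result list (e.g. from OWL-S groundings).
import urllib.parse
import urllib.request

def invoke(description, **inputs):
    """Call a service using only its description: endpoint, operation name,
    and a mapping from abstract input names to the service's own parameter
    names. No service-specific code is compiled into the client."""
    params = {description["param_map"][name]: value
              for name, value in inputs.items()}
    data = urllib.parse.urlencode(params).encode()
    url = description["endpoint"] + "/" + description["operation"]
    with urllib.request.urlopen(urllib.request.Request(url, data=data)) as resp:
        return resp.read().decode()

# Two services that "do the same job" but expose different APIs (hypothetical).
weather_a = {"endpoint": "http://example.org/weatherA",
             "operation": "getForecast",
             "param_map": {"city": "cityName", "date": "day"}}
weather_b = {"endpoint": "http://example.org/weatherB",
             "operation": "forecast",
             "param_map": {"city": "location", "date": "date"}}

# If weather_a fails or is unavailable, fall back to weather_b without any
# reprogramming: only the description changes, not the client code.
for service in (weather_a, weather_b):
    try:
        print(invoke(service, city="Zurich", date="2006-08-25"))
        break
    except OSError:
        continue

Whether such a description can really cover arbitrary APIs (complex types, protocols, faults) without any reprogramming is exactly the question I am raising.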

Regards,

Xuan


>>> Abraham Bernstein <bernstein@ifi.unizh.ch> 08/25/06 10:00 AM >>>

Dear all

Tomaso's and Terry's comments have prompted me to write down some of my 
own thoughts.

I believe there is a lot of benefit in looking at matchmaking in its broader context. As Terry points out, there are many research questions 
buried in how best to support users in (i) query formulation, (ii) 
result set understanding or re-ordering, and (iii) query refinement. 
This is an area that is dear to my heart, and I have spent quite a bit of 
my research time on it over the past two years.
The same is true of the second field that Terry points out, 
where no human is in the loop.

Nonetheless, I strongly believe that the development of a benchmark 
should include tasks that disentangle the overall loop (with the user or 
client program): in other words, "pure" retrieval/matchmaking tasks. The 
reason is that I believe there are many different research tasks 
that should be evaluated separately AS WELL AS in an integrated way.
A quick brainstorm of tasks includes at least (I am probably forgetting 
quite a few here):

1) Given a set of queries and a collection of services,
       a) how can I find/rank the best-matching ones?
       b) how fast can I find a plausible one?
       c) how do these matching procedures scale?
       d) what is a suitable query language?
       e) what is the semantics of a partial match?

2) Given a user and her/his need,
      a) how can I help her/him put together a suitable query?
      b) how can I help her/him understand the returned answer set (and 
possibly its ranking)
          and the trade-offs between the elements?
      c) how can I help her/him improve/refine the query?
      d) what is a suitable query language/formalism/...?
      e) ...

3) <the same thing for agents/programs>....

4) <tasks containing combinations of problems in 1, 2, and 3...>

Each of these tasks is worthwhile and should eventually have some type of 
test collection. But the question is where to start. I 
would start with the first task, since it is the one we all have the 
best understanding of (at least I believe so).
We should then proceed with 2 and see what the multi-agent community has 
done about 3...
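To make the first task concrete, here is a minimal sketch of how a matchmaker's ranked output might be scored against such a test collection. The queries, service IDs, and relevance judgments are invented for illustration; only the precision/recall computation itself is standard IR practice.

# Sketch of evaluating a matchmaker's ranked output against a test
# collection with (subjective) relevance judgments, TREC-style.
# All queries, service IDs, and judgments below are invented examples.

relevance = {                       # query -> set of relevant services
    "book flight": {"S1", "S4"},
    "convert currency": {"S2"},
}

def precision_recall_at_k(ranked, relevant, k):
    """Precision and recall of the top-k ranked services."""
    top_k = ranked[:k]
    hits = sum(1 for s in top_k if s in relevant)
    precision = hits / len(top_k) if top_k else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical ranked lists returned by some matchmaker under test.
matchmaker_output = {
    "book flight": ["S4", "S7", "S1", "S3"],
    "convert currency": ["S5", "S2", "S9"],
}

for query, ranked in matchmaker_output.items():
    p, r = precision_recall_at_k(ranked, relevance[query], k=3)
    print(f"{query}: P@3={p:.2f}, R@3={r:.2f}")

Once a shared collection with agreed relevance sets exists, the same simple harness lets us compare different matchmakers on equal terms, before we add the user or agent back into the loop.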

So I am in great agreement with Terry and Tomaso, but just wanted to 
point out that it may help to start with little steps while keeping the 
overall goal in mind :-)

Cheers

Avi


Matthias Klusch wrote:
>
>
> dear all,
>
> thanks for the very useful feedback and hints to
> ongoing matchmaker development work so far!!
>
> one particular consequence of some of terry's notes,
> with which i agree, in essence, would be to build up
> a large sws retrieval test collection including domain(-independent)
> sws and user queries with subjectively defined relevance sets.
> "Subjectively" implies to predominantly involve the potential
> business domain service users in the iterative development process.
> the influences the development of
> "pragmatically usable" sws matchmakers with "reasonably" good 
> recall/precision performance on such a collection.
>
> i admit that this process does not, however, automatically
> lead to any solution of the problems related to "How can the user
> understand *why* the matchmaker returns the services it does?",
> as Terry also noted in his last email.
>
> but it might be worth starting with the "(user) requirement analysis
> phase for sws matchmakers, brokers, search engines". this is what I
> would expect to be a joint complementary action at the matchmaker
> contest meeting, basically triggering (or preparing to trigger) the same
> kind of iterative, user-research-feedback-driven development process as
> happened with TREC a few decades ago.
>
> cordial regards, matthias
>
> __________________________________________________
> Dr. Matthias Klusch
> German Research Center for Artificial Intelligence
> Stuhlsatzenhausweg 3
> 66123 Saarbruecken, Germany
> Phone: +49-681-302-5297, Fax: +49-681-302-2235
> http://www.dfki.de/~klusch/, klusch@dfki.de
> __________________________________________________
>

-- 
-----------------------------------------------------------------
|  Professor Abraham Bernstein, PhD
|  University of Zürich, Department of Informatics
|  phone: +41 1 635 4579 
|  eMail: bernstein@ifi.unizh.ch 
|  web: www.ifi.unizh.ch/~bernstein 
|  mail: Binzmühlestrasse 14, CH-8050 Zürich, Switzerland 
