
RE: Asynchronous Web Services

From: Christopher B Ferris <chrisfer@us.ibm.com>
Date: Sun, 21 Jul 2002 23:27:30 -0400
To: www-ws-arch@w3.org
Message-ID: <OF9E709677.06EB58BC-ON85256BFD.006DA97F@rchland.ibm.com>


Great discussion, some comments below.


Christopher Ferris
Architect, Emerging e-business Industry Architecture
email: chrisfer@us.ibm.com
phone: +1 508 234 3624

From: "Newcomer, Eric" <Eric.Newcomer@iona.com>
Sent: 07/21/2002 03:43
To: "Paul Prescod" <paulp@ActiveState.com>, <www-ws-arch@w3.org>
Subject: RE: Asynchronous Web Services

Ok, I think we are sort of getting somewhere here.

-- The Expedia example is not a Web service since it still involves human
interaction.  What we want is Expedia to automatically book flights based
on calendar choices in a PDA.  Expedia needs to become a "service" to which
one can send messages across the Web, not a site that simply serves pages.
Would you argue that the Web services APIs that Google and Amazon.com have
recently developed and published are not useful since they expose a
different paradigm than interacting with those sites via URI based method
names and parameters?

I don't think it is as black and white as you describe here. Consider that
there may be value in providing a service that can support both a human
(via a user agent) and a software agent. I'm thinking about the poor
schmuck who has to write the code, and even more of the beleaguered IT
manager/CIO who wants to reduce complexity, cost, and redundancy in the
systems she manages.

Sure, there will likely be some intermediary process between the user agent
and the
service to aggregate, transform, etc. so as to make for a more
suitable/usable/accessible experience
for the end user, but the software components that actually perform the
functionality shouldn't have to be different.

I believe that the same will apply w/r/t support for a peer software agent
running on some behemoth server, as contrasted with some (highly)
constrained (user) agent running on a PDA, cellphone, wearable, or embedded
in a toaster or a lightbulb.

-- Joining applications by joining their web server mappings is inefficient
compared to joining them directly, although I agree the O(N) problem is the
same one we are trying to solve with SOAP and WSDL.  Also there is no way
to represent middleware semantics.

Of course the additional layer of mapping/abstraction is less efficient,
but efficiency is a relative term. It is more efficient to suffer the
mapping/abstraction electronically than it is to introduce physical
representation and human interaction as the equivalent of that mapping
between domain/enterprise boundaries (e.g. sending a fax and having it
transcribed into one's system by a clerk). This is what many businesses
have to deal with today, and they would like to gain an increase in
efficiency, as that has a direct impact on their bottom line.

And then again, this debate rages every time we introduce a new language:
Assembler v C, C v C++, C++ v Java, etc. Eventually, the compilers/tools
improve, the hardware improves, and the debate fades into the sunset
because it becomes less and less relevant. The new languages are often more
efficient in other, often less tangible, ways that also affect the bottom
line, which is really what matters at the end of the day in many if not
most cases (programmer efficiency, effectiveness, and accessibility).

-- Middleware systems share fundamental information requirements that allow
an abstract messaging system to be defined that spans them -- the service
name, data (whether in arguments or text blobs), security context,
transaction context, and session/user context. We at IONA have successfully
bridged COM and CORBA, J2EE and CORBA, and CORBA with CICS and IMS.  We
have implemented an abstract runtime kernel that is capable of plugging
multiple transports.  This problem is solvable.



-- One fundamental area of disagreement here seems to be whether or not
it's appropriate to encapsulate the method name within the data, or within
the message (and perhaps that also brings up another fundamental area of
disagreement, whether we are dealing with messages or documents).  I am not
sure it makes sense to require everyone to expose every method they want to
integrate using a URI.

Hmmm... I think that there's room for both. One is merely an abstraction of
the other.
Depending on usage, one may be preferable over the other.

Consider the following example. A business wants to offer a service such
that its customers can retrieve its product catalog. Is there really any
benefit to exposing the full signature of the service as methods and
arguments? I'd rather think not, as I'd rather not expose a bunch of
complexity to my customer. My aim is to make it as easy as possible to do
business with me. In this particular case, it is merely a retrieval of a
representation of the product catalog resource.

Additionally, why send the whole catalog (all the details), if only a small
subset is
ever queried for the details of particular items? Or, it may be that the
query is
satisfying a user-interaction through a web interface, in which case there
is a certainty
that all of the details would be unnecessary. Maybe the customer is only
concerned with
a small subset of the whole catalog, such as a product family.

The point here is that there are two ways to think about this. One is as a
well defined (and possibly complicated) interface to a query, or queries,
that return subsets of the product catalog based on the arguments passed to
such an interface. Of course, that means that there needs to be some domain
knowledge as to what the ranges of valid values might be for the arguments
to the query, etc. For the end-user behind a user agent, much of this would
be hidden behind a Web interface comprised of links and HTML Forms, etc.

The other approach would abstract these queries by assigning URI values to
the equivalent query+args as a resource and baking these into the data as
links that can be traversed. This would allow the same exposed "interface"
to be used by the end-user as by the software agent, and neither needs to
understand the arcane idiosyncrasies of the API offered by the service.
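To make the link-baking idea concrete, here's a small sketch of what a
catalog representation and a client might look like. All URIs, element
names, and attribute names are invented for illustration; only the use of
XLink-style href attributes follows the approach described above:

```python
import xml.etree.ElementTree as ET

# A hypothetical catalog representation: instead of exposing a query API
# (getProducts(family, page, ...)), the service bakes the equivalent
# queries into the data as links that any client -- browser or software
# agent -- can simply follow.
CATALOG_XML = """\
<catalog xmlns:xlink="http://www.w3.org/1999/xlink">
  <family name="widgets"
          xlink:href="http://example.com/catalog/widgets"/>
  <family name="gadgets"
          xlink:href="http://example.com/catalog/gadgets"/>
</catalog>
"""

XLINK_HREF = "{http://www.w3.org/1999/xlink}href"

def family_links(xml_text):
    """Return a mapping of family name -> URI for each linked subset."""
    root = ET.fromstring(xml_text)
    return {f.get("name"): f.get(XLINK_HREF) for f in root.findall("family")}

links = family_links(CATALOG_XML)
print(links["widgets"])  # the client would GET this URI next
```

The client never needs to know the signature of the query that produced
each subset; it only needs to understand links.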

Of course, with something such as the product catalog service we describe
here, it would be impractical to actually hit the database each time some
customer requested it. It would likely be a cached result that was
returned, so as to minimize the stress on the system that actually
maintains the catalog. Just as a database caches results of a query for
performance optimization, a Web service will (likely!) need to do the same.
Seems to me that because of the performance overhead, you don't want to
reserialize the results as XML each time the data is requested, so caching
at the Web level seems like a reasonable thing to do.
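A minimal sketch of that caching idea: serialize the representation once,
serve the cached result until a time-to-live expires, and only then hit the
back end and reserialize. The class, names, and TTL here are all
illustrative, not taken from any particular product:

```python
import time

class CachedRepresentation:
    """Serve a cached serialized representation until its TTL expires."""

    def __init__(self, render, ttl_seconds=300):
        self._render = render      # expensive: query the DB + serialize XML
        self._ttl = ttl_seconds
        self._cached = None
        self._expires = 0.0

    def get(self):
        now = time.monotonic()
        if self._cached is None or now >= self._expires:
            self._cached = self._render()
            self._expires = now + self._ttl
        return self._cached

calls = []
def render_catalog():
    calls.append(1)                # count how often we really reserialize
    return "<catalog><item>widget</item></catalog>"

rep = CachedRepresentation(render_catalog, ttl_seconds=300)
rep.get()
rep.get()                          # served from cache; render ran only once
print(len(calls))  # 1
```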

As CS&N would say: "and I feeeeel, like I been, here before... and it makes
me wonder..."

Of course behind all this is still the fact that the term "Web service"
still means different things to different people, and that the term can be
very, very broadly used.

Indeed! Now that I'm free to express an opinion, it may seem as if I'm
arguing the
REST case, and in a sense, I am. However, I will add that I don't believe
that it is
THE answer to all things. There are some things that it can do very well,
and others that
it is not well suited for as is clearly stated in Roy's thesis. There's
also the practical
versus the theoretical to be addressed and there's always the stuff that
came before with
which to contend... it simply will not go away and we cannot disregard it
no matter how
much we'd like to.

More below.


-----Original Message-----
From: Paul Prescod [mailto:paulp@ActiveState.com]
Sent: Sunday, July 21, 2002 2:15 PM
To: Newcomer, Eric; www-ws-arch@w3.org
Subject: Re: Asynchronous Web Services

"Newcomer, Eric" wrote:
> Paul,
> I'm interested in clarifying some things in this message.  Are you
> suggesting that a database be exposed to the Web as a resource?  Or
> a CICS transaction?

Generally you would not want to expose this "primitive" of a resource
externally (for all of the usual reasons of encapsulation).

** But this seems to be exactly what is suggested by using a URI for each
method exposed as a Web service...

> In other words, is the suggestion that a URI point to these type
> of legacy systems directly?  Or is it assumed that some sort of
> indirection or "mapping" occurs between the Web and these systems?


** Then what is wrong with the mapping phase including the method name?
Other than violation of REST, that is?  I have heard the argument that it
doesn't scale, and I understand that it doesn't, but I am also unsure it
needs to, since Web services are not the same thing as Web page
interactions (and perhaps you are saying that they are, and this is also
part of the disagreement).

Yes, but method name implies RPC and I think that it is clear that
not all Web services are RPC-based. Document-centric Web services
are also an important aspect and these will not necessarily have
method names associated with them *in* the message, although there
may be some manner of mapping to a "method" performed upon receipt.

My point is that document-centric Web services might benefit from
the style of Web page interactions, especially w/r/t the notion
of linking.

> You mention how Web sites work today by mapping into legacy
> systems.  There are well established mechanisms for calling
> from Web severs to back end systems via application servers,
> CGI scripts, etc.  But interacting with these systems
> requires human interaction with an HTML form of some kind.
> I'm really not clear on how you propose to accomplish this
> automatically, machine to machine, without a browser, if
> there isn't some definition of the mapping between the
> legacy applications and the Web.

I don't follow you. The HTML form does not do the mapping. The CGI
script or servlet or cold fusion page or... does the mapping. This could
continue for web services. You would of course replace the HTML with
XML. Let me give a very concrete example. Microsoft could convert
Expedia to return XML instead of HTML to XML-accepting clients. HTML
links become XLinks. "Clients" step from page to page following XLinks.
Microsoft documents the structure of the pages as XML Schemas. They
could use a language like WRDL to strongly type declare the operations.
(or not...sometimes a prose description is enough)

** Rather than place an order with Expedia via the Web page, I want to
send a message to Expedia from my calendar program when I decide the dates
and times I want to plan a trip.  I also want the calendar program to
reserve a car, reserve a restaurant table, reserve a hotel, etc.

In what sense would this not be a "web service"? Although it is not
representative of all web services, I think it is representative of a
large class of them.

** It still assumes human interaction.

Not sure that I agree that there *must* be a human involved.
The software agent could be performing a heuristics match on the
information returned from selected (iterated?) links, or on metadata
associated with the links. It stops when it finds something that matches.
Following the links is, in my mind, not much different than deciding to
invoke yet another method with some arguments derived from the data at
hand; it may just be simpler to retrieve a representation via a URI than
to populate a method/procedure call.

Not really much different than lazy instantiation on an iterated
collection.
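A toy sketch of such a link-following agent: instead of populating a
method call, it retrieves representations and follows links until a simple
heuristic matches. The "web" here is an in-memory dict standing in for
HTTP GETs, and all page names and fields are invented:

```python
# Fake resource space: each URI maps to a representation with links.
PAGES = {
    "/catalog": {"links": ["/catalog/widgets", "/catalog/gadgets"]},
    "/catalog/widgets": {"links": [], "text": "industrial widgets"},
    "/catalog/gadgets": {"links": [], "text": "consumer gadgets"},
}

def retrieve(uri):
    """Stand-in for an HTTP GET of the resource's representation."""
    return PAGES[uri]

def find(start, matches):
    """Breadth-first traversal: stop at the first representation that matches."""
    frontier, seen = [start], set()
    while frontier:
        uri = frontier.pop(0)
        if uri in seen:
            continue
        seen.add(uri)
        page = retrieve(uri)
        if matches(page):
            return uri
        frontier.extend(page["links"])
    return None

print(find("/catalog", lambda p: "gadgets" in p.get("text", "")))
# -> /catalog/gadgets
```

The agent needs no knowledge of the service's query signatures; the links
in each representation are its entire "API".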

> A very important question lies within this area of debate.  Is it
> the responsibility of the "web server" to map to any and all
> middleware systems, database systems, and packaged applications?
> That's pretty much the case today.  Or can the responsibility
> be moved to the middleware systems, database systems, and packaged
> applications to do the mapping?

If these applications accept connections and those connections are made
using standard Web protocols (HTTP, SOAP over HTTP) then *they are web
servers*. The choice of whether to build on Apache or from scratch is an
engineering and topology decision.

** Ok, what I meant was Web servers as they exist today.  The implications
of whether or not Web services specifications require a re-write of Apache
are almost as significant as whether or not they require a re-write of
HTTP.  We are not proposing the one; let's be sure we don't propose (even
implicitly) the other.

>  ... That's the idea of SOAP and WSDL.

I think that the point of SOAP and WSDL is to enable organizations to
integrate their information systems within the organization and across
organizational boundaries. Preserving existing investments is important
but secondary. And I am much more interested in helping people preserve
their investment in domain-specific software (packaged applications,
custom applications and database schema) than in generic middleware and
database software.

** Preserving existing investments cannot be secondary or the technology
will not succeed.  The Corba Object Transaction Service could not re-invent
Oracle or SQL Server; neither can Web services re-invent existing
middleware.

Clearly, it is the case that investment in "legacy" systems must be
preserved, and in fact extended, by Web services. That is why I maintain
that there is room for both styles: those that effectively extend the reach
of existing applications, and new applications that can leverage the
technology to a greater extent.

> I don't think anyone is suggesting that we propose
> re-implementing existing database management systems,
> transaction processing monitors, application servers,
> object request brokers, messaging oriented middleware
> systems, etc. to adapt to REST, or am I wrong about that?

It depends on the extent to which those things want to be considered
"Web tools" or implementation infrastructure that is gatewayed to the
Web. I don't think a SQL database needs to be considered a "Web tool"
though Web supporting interfaces are sometimes convenient. Application
servers already support REST! The app server software category rose to
prominence as the tool you use to do the mapping between legacy systems
and "the Web architecture".

** Yes, but application servers are fundamentally 3-tier structured
architectures designed for fanning in or multiplexing a large number of
clients sharing a single resource such as a database.  They are not well
suited architecturally to loosely-coupled message oriented interactions
(despite the support in J2EE 1.3 for Mbeans, JMS, and the like), and this
gets us back to the original point of the thread.  Web services, like the

Not sure I buy that at all. It may be my bias towards message-oriented
middleware, but I never really thought that an appserver was much use until
support for messaging was an integral feature :)

itself, are better suited to asynchronous message oriented interactions
than to RPCs, and the asynchronous messaging paradigm grew to prominence
precisely because of its benefits to integration -- the level of
abstraction is better suited to bridging multiple technology domains than
RPCs are.


> If we are not suggesting that, we have to be defining an
> abstraction that allows messages to be exchanged across these
> types of systems.

Let me provide an example. CTO recognizes that his sales automation
systems cannot speak to his accounting systems so he doesn't know when
sales calls translate into purchases. The sales automation system is
built around a CORBA ORB. The accounting system is built around MOM.

The CTO goes to a generic integrator and asks for a solution. The
integrator might say: "Sure, I can write some glue code to integrate
those systems. I'll insert another middleware system and a bunch of
adapters."
** Sorry but I have to correct this statement ;-).  The integrator would
just use CORBA for the integration, as 65% of our customers already do.  No
need for another middleware system. The motivation and fundamental use case
for CORBA, as for Web services, is application integration.

The CTO goes to a REST advocate and asks for a solution. The REST
advocate says: 'Buy two Web app servers and write code that maps each
into the Web data model. Make every logical object "sales call",
"customer", "purchase" into a resource. This might be more expensive
than the solution above but when you want to integrate a third and
fourth and fifth system, you have an O(N) integration problem, not
O(N^2)." The REST solution will not standardize everything, but it will
standardize primitive message exchange patterns, addressing model and
allowed operations. What it does not standardize is information
representation. Purchase orders will still use a different vocabulary
from sales call logs and when new systems are brought in, their
representations must also be integrated.

** Again, CORBA already solves this problem using IDL.  There is no O(N^2)
problem in this scenario.  You may not like the solution, and that's your
prerogative, but we can show you literally thousands of successful
implementations of this solution.

There's no O(N^2) problem for the standardized interfaces, but there is one
when bridging applications with similar, if not identical, purpose but
different interfaces (the myriad homegrown systems, the myriad versions of
COTS software that can't even talk to their predecessors, much less their
competition, etc.).
** Aside from that, however, this scenario is remarkably similar to what
we're trying to achieve with Web services, which to me provides a more
broadly adopted, more abstract, and more powerful integration solution.
Because of this I think that the two camps are not far apart.  I agree that
we need code that maps each side into the Web data model, but I don't agree
you need app servers to do it.  What we are fundamentally discussing is the
"resource" issue.  Currently the way this works is that both sides develop
a WSDL file that they can agree on (yes, I know this requires previous
knowledge, but perhaps that is consistent with the Web services use case
and motivation) and exchange SOAP messages conforming to the WSDL.  Each
side is responsible for mapping the message into and out of the respective
software system "domain" whether middleware or packaged application.

** I am really not sure what the compelling benefit is of defining Web
service operations as resources instead of defining them within WSDL files.
I actually think if we can solve this one we can resolve almost the entire
debate.
It isn't an either-or situation. There's no reason why WSDL can't play a
role for both styles. What is missing from WSDL is the ability to describe
and reason about
what you get when you dereference a URI that is returned (or sent!) as data
in a
SOAP message. Something that the WSDWG will have to consider eventually.

The CTO goes to his ORB and MOM vendors and asks for a solution. They
will say: "The next version of our products will support SOAP and WSDL.
So you just need to upgrade and you'll get interoperability." But what
does this mean? Both can support SOAP, but that does not mean that two
services will have a common addressing model. It does not mean that they
will have common operations. It does not mean that they will have a
standardized information representation. What concretely does the client
gain from the fact that the ORB and MOM vendors have both wrapped their
proprietary information model in XML angle brackets and SOAP envelopes?
(actually, SOAP does not even require angle brackets)

** It is significant if those vendors have tested and certified
interoperability with each other, as SOAP Builders have done, and WS-I is
doing more broadly.

Or let's talk about WSDL. So the client now has a WSDL definition for
both services. They have a very concrete representation of the distance
between the two. They *still* have to write glue code to integrate the
addressing models, MEPs and operations.

** No, they have to write glue code to map the WSDL representation of the
message into and out of the middleware systems, which has actually already
been done, and works.

> If Web services are not useful for interoperability across existing
> software systems, I do not think they have another compelling
> reason for existing.

That's a surprising statement! Doesn't interoperability of new systems
count for anything? I thought that Web Services would allow new kinds of
applications to be built!

** Sorry, I meant "non Web services" applications.  But it is more
significant to bridge existing than new applications.  Web services will
not succeed if they only work with new applications.

Agreed, but regardless, existing applications have to adapt to their
counterparts. Whether you call this a new application, or an adaptation
is irrelevant. SAP doesn't speak native Oracle ERP. It is the layer of
abstraction between the two that will enable the two to work together
and *that* is new.

Received on Sunday, 21 July 2002 23:29:02 UTC
