RE: Proposed text on reliability in the web services architecture

From: Assaf Arkin <arkin@intalio.com>
Date: Fri, 10 Jan 2003 12:13:06 -0800
To: "Jean-Jacques Dubray" <jjd@eigner.com>, "'bhaugen'" <linkage@interaccess.com>, <www-ws-arch@w3.org>

> Let me think a split second...ah yes, I use a process engine which
> supports BPEL4WS. This execution language provides all the facilities to
> support the scenario that you are talking about (We actually wrote the
> XML definition of this exact scenario to see if BPEL was a good fit for
> us and it works quite well). I am glad that you kept the "transform" out
> because BPEL does not support transformations. So if you need one, you
> are out of luck. We also had to create a special container for "process
> variables" to manage the loop but that's not so bad.

I didn't keep the "transform" out. I actually find the transform very
useful. I also find reduction of state variables useful. But the model
should be independent of the language. In other words:

- The transform may be as simple as m->m, in which case BPEL4WS would fully
conform to the model.

- The transform may be m->m', requiring, say, XSLT, in which case BPEL4WS
still conforms to the model, but the manner in which you associate a
transform with an action is written in neither XSLT nor BPEL4WS, leading to
a proprietary, non-portable solution to the problem. So BPEL4WS still
observes the model, since it allows XSLT to be used; it just doesn't do it
as well as it could. (At least two proposed implementations of BPEL4WS I
know of do that, that is, use XSLT but make the association between the XSLT
and the action a proprietary, non-portable configuration.)

- Another language may give you a way to encapsulate the XSLT in the
process definition, reducing the dependency on a proprietary association
between the two, and may also allow you to use pre-defined state variables
to manage the iteration, eliminating the redundant container you describe.
BPML is an example of such a language.
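To make the three cases above concrete, here is a minimal sketch (plain
Python standing in for a process language; all names, including Action and
bom_to_items, are hypothetical, and an ordinary function stands in for an
XSLT stylesheet):

```python
def identity(message):
    """The m -> m case: no transformation is needed."""
    return message

def bom_to_items(message):
    """An m -> m' case: in practice this might be an XSLT stylesheet;
    here a plain function stands in for it."""
    return {"items": [part
                      for assembly in message["bom"]
                      for part in assembly["parts"]]}

class Action:
    """Associates a transform with a process action. The point of the
    third bullet above is where this association lives: outside the
    process definition (proprietary, non-portable) or inside it."""
    def __init__(self, name, transform=identity):
        self.name = name
        self.transform = transform

    def invoke(self, message):
        return self.transform(message)

po_action = Action("sendPO", transform=bom_to_items)
result = po_action.invoke({"bom": [{"parts": ["bolt", "nut"]},
                                   {"parts": ["washer"]}]})
```

With `transform=identity` the action conforms trivially (the m->m case);
swapping in `bom_to_items` shows the m->m' case without changing the action
itself.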

> Note that in the past, our PLM system would handle the logic of this
> scenario. However, there is real value in extracting this kind of logic
> from the enterprise system itself, in the case where a company has
> multiple PLM and ERP systems, for instance. This is not uncommon. As far
> as I can remember, BOEING had 84 procurement systems at one point. I am
> pretty sure they might have more than one PLM system.


> The fact of the matter is that heterogeneity in IT is a fact of life.

And not a bad thing, otherwise it would never happen. One example we
commonly see is mergers and acquisitions, where company A, using ERP vendor
X, buys company B, which either uses ERP vendor Y or uses ERP vendor X but
with a different configuration that does not allow consolidation.

So what the business is saying is: I can make more money by buying another
company, since the business benefit is considerably bigger than the cost of
integration. And: I can make even more money if I can reduce the cost of
integration. Which is where you and I and people like us come into the
picture ;-)

> To me and to my company the real question is really, how can I
> use/influence these new technologies like web services, BPM, .NET, J2EE
> such that I can offer a better architecture, develop features more
> easily, enable a swift customization while preserving upgradability of
> the system (this is really the bulk of the revenue, remember).
> Well, one thing that is obvious today is that integration of an
> enterprise system with its environment increases the value of the whole
> system. Web services, in my opinion, add a new integration paradigm. For
> instance, as a user, the web has transformed my ability to reach
> valuable information across the globe (weather, travel, news, laws, jobs
> ...) as well as act on the world at large (well, limited to shopping or
> voting for the moment). Information that was almost impossible for me to
> reach before. Similarly, web services would enable my enterprise
> system(s) to reach information from, or provide information to, a very
> large number of services, and hence enable the users of these enterprise
> systems to be more informed when they use the system itself, or reach
> out quickly to other enterprise systems across the world if needed. To
> me this is really where loose coupling makes sense, not so much in the
> BOM-PO scenario, which can still deal with fairly tight coupling.
> That's another real paradigm shift and a blow to all the vendors who
> tout that web services are the new EAI, only better and cheaper, and
> yes, because it does EAI it can also do B2B, because B2B is just like
> EAI, only bigger...

I'm not sure it's the new EAI as much as it's the new EAI ;-) What I mean
is, it solves the problems of the EAI of yore at a lower cost, but that's
not a reason to call it new EAI. On the other hand, because it reduces the
cost it also allows you to do new things you would not have considered
before, which means you would have more challenges to solve. So it's a new
EAI that focuses less on getting A to talk to B and more on creating new
value propositions. We like to think of it in terms of an opportunity for
new value-added services.
> The evolution of the architecture of these systems will also go towards
> a new MVC implementation where the M-V-C tiers are completely separate
> from each other and where the process-oriented business logic is
> completely separate from the model-oriented business logic. This
> architecture will readily enable data and process federation. To me it
> is clear that XML and web services can lead to that evolution, provided
> that there is a bit of consciousness and responsibility from the
> "standard" members. Otherwise, the first one that can articulate such
> a comprehensive application model will win the prize !!
> For these kinds of reasons, it is clear to me that the vast majority of
> enterprise software will undergo massive transformation if not rewrite.
> Sorry for those who think they can live with their 10-year-old
> client/server architecture.

I guess we differ on what we conceive as a "massive rewrite". I can see how
a vendor could do WS by simply adding a WS gateway/glue to their existing
software. Of course, over time they would consider reducing cost by making
WS more ingrained in the system. So you would have a massive rewrite. But
would that rewrite result in a radically different implementation, or
merely a different one?

In other words, if you have a banking application that uses an ISAM
database and you then move it to an SQL database (often a massive rewrite),
do you end up with a radically different application? Or do you just end up
with pretty much the same application, with considerable improvements here
and there?

I would speculate that what we would see is a mix. On the one hand you
would have applications that are refactored to integrate better with WS,
with slight changes in behavior as a result of new opportunities. But most
vendors would consider whether they should rewrite everything from the
bottom up, or use Web services to build new value propositions for things
that did not exist before. So an ERP vendor could say "let's rewrite the
ERP from the ground up", or they could say "let's fix what is broken and
make it better, and let's make integration with the CRM part of the
product".


> Jean-Jacques Dubray____________________
> Chief Architect
> Eigner  Precision Lifecycle Management
> 200 Fifth Avenue
> Waltham, MA 02451
> 781-472-6317
> jjd@eigner.com
> www.eigner.com
> >>-----Original Message-----
> >>From: Assaf Arkin [mailto:arkin@intalio.com]
> >>Sent: Thursday, January 09, 2003 4:58 PM
> >>To: Jean-Jacques Dubray; 'bhaugen'; www-ws-arch@w3.org
> >>Subject: RE: Proposed text on reliability in the web services
> >>architecture
> >>
> >>> I am less optimistic than you are about the ERP systems. I think
> >>> that the constraints of XML, web services, and process engines will
> >>> force a massive rewrite because of customer requirements such as
> >>> "data federation" or "process federation" that are more and more
> >>> critical: when you have 30 SAP systems like some company I know,
> >>> you really face these issues every day and they are completely in
> >>> the way of your business (not to mention when other systems need to
> >>> get at the SAP data).
> >>
> >>That raises an interesting issue.
> >>
> >>If you have two different systems, say SAP and Siebel, with their own
> >>messaging styles and data structures and we-do-express-things-that-way,
> >>then it's obvious why you have a disparity, and that this disparity
> >>can only be addressed if you have some common way of exchanging data.
> >>
> >>SOAP is part of the equation in giving you a uniform encoding, but
> >>you do need to use a common schema, or lacking a common schema have
> >>two schemas with identical semantics so you can transform one into
> >>the other.
> >>
> >>But as you pointed out, the reality is that in many cases the
> >>disparity has nothing to do with semantics. All too often you find
> >>two systems that have the same messaging style and use the exact same
> >>schema with identical semantics, so exchanging data by itself is not
> >>a problem. The problem is disparity in what they can do with the data.
> >>
> >>A PO is a PO and you can have a uniform way to represent it, but
> >>system A may only understand itemized products while system B may
> >>understand bill-of-material. In order for system B to talk to system
> >>A it needs to break the BOM into itemized products, and vice versa.
> >>
> >>So the problem now becomes: how does system B, which has to fulfill
> >>a PO order, break that into multiple POs that system A can
> >>understand, consolidate some of its POs into single POs for more
> >>effective fulfillment by system A, and then do the reverse in order
> >>to fulfill its part of the equation?
> >>
> >>The problem here is, even if you used the exact same schema for both
> >>systems, so there's no mismatch in terms of how data is represented
> >>and no need to do semantic mapping, you would still need something
> >>else to make it work. Which explains why a lot of companies have a
> >>different level of expectation from their integration processes:
> >>beyond data mapping, which is no longer "the problem", and into
> >>actual processes, which are becoming "the problem".
> >>
> >>arkin
> >>
> >>>
> >>> JJ-
> >>>
> >>>
> >>>
> >>> >>-----Original Message-----
> >>> >>From: www-ws-arch-request@w3.org
> >>> >>[mailto:www-ws-arch-request@w3.org] On Behalf Of bhaugen
> >>> >>Behalf Of bhaugen
> >>> >>Sent: Thursday, January 09, 2003 12:14 PM
> >>> >>To: www-ws-arch@w3.org
> >>> >>Subject: RE: Proposed text on reliability in the web services
> >>> architecture
> >>> >>
> >>> >>
> >>> >>JJ Dubray wrote:
> >>> >>> As you move the context of the discussion from an action
> >>> >>> request to interactions with a (distributed) object, you are
> >>> >>> introducing a whole new class of problems that people have been
> >>> >>> wrestling with for years.
> >>> >>
> >>> >>The problems are there anyway.  They are not removed by
> >>> >>putting dispatchers and a Web service access point in front
> >>> >>of the distributed objects.
> >>> >>
> >>> >>If you get rid of the dispatchers and just interact directly with
> >>> >>Web resources which deal in representations of externally-
> >>> >>facing business objects, you just removed one or more
> >>> >>layers of complexity, but you still need a mediation layer
> >>> >>between the internal object and the external resource.
> >>> >>
> >>> >>As Peter Furniss says now and then, there is a fixed
> >>> >>amount of complexity involved in this problem, and
> >>> >>you can move the factors around and add unnecessary
> >>> >>factors, but you can't remove the essential ones.
> >>> >>(Peter says it better, but I can't remember his exact words...)
> >>> >>
> >>> >>(But not all factorings are equal...)
> >>> >>
> >>>
> >>>
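To add one concrete illustration of the BOM-to-itemized-PO mediation
discussed in the quoted text above: the sketch below is hypothetical (the
BOM data, function names, and use of plain dicts are all assumptions, not
any product's API), but it shows why the mediation is process logic rather
than data mapping, since both sides could share the same schema.

```python
from collections import Counter

# Hypothetical BOM data: one bicycle = 2 wheels + 1 frame.
BOM_DEFINITIONS = {"bicycle": {"wheel": 2, "frame": 1}}

def explode_bom(bom):
    """Break a bill-of-material into the itemized parts that the
    items-only system understands."""
    items = Counter()
    for assembly, qty in bom.items():
        for part, n in BOM_DEFINITIONS[assembly].items():
            items[part] += n * qty
    return items

def consolidate(pos):
    """Merge several itemized POs into a single PO for more effective
    fulfillment by the receiving system."""
    total = Counter()
    for po in pos:
        total.update(po)
    return total

po1 = explode_bom({"bicycle": 1})
po2 = explode_bom({"bicycle": 2})
combined = consolidate([po1, po2])
```

Both the exploded POs and the consolidated one use the same representation,
so the remaining work is entirely in the explode/consolidate steps, which
is the "actual processes" problem described above.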
Received on Friday, 10 January 2003 15:14:35 UTC
