RE: Roy's ApacheCon presentation

>> I looked at presentation slides 4 and 5 in particular: "EAI - the hard way" and "EAI - the Web way".
>> The first picture looks completely misleading to me. No EAI product approaches the problem that way (one-to-one interactions), and in fact the acronym EAI has become synonymous with the idea that you do not do integration that way.

> Sure, OK, you're O(N log N) as a best case, and O(N^2) as a worst case
> (thanks Miles). Still, would you agree that that's significantly worse
> than O(N)?
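
(To make the arithmetic concrete: naive point-to-point integration of N
systems needs N(N-1)/2 pairwise adapters, i.e. O(N^2), while a shared
interface needs only one adapter per system. A quick illustrative sketch,
in Python:)

    def point_to_point_links(n):
        # every pair of systems gets its own adapter: n*(n-1)/2, O(n^2)
        return n * (n - 1) // 2

    def shared_interface_links(n):
        # one adapter per system, all talking to a common interface: O(n)
        return n

    for n in (5, 10, 50):
        print(n, point_to_point_links(n), shared_interface_links(n))
    # 5: 10 vs 5;  10: 45 vs 10;  50: 1225 vs 50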

I think that interface simplification/unification only gives you a false sense of complexity reduction. The complexity (both syntactic and semantic) that you remove from the interface reappears in a different form elsewhere: in the document contents transmitted through the REST unified interface, or in the choreography computations, which depend both on the data itself and on other external factors. So the total complexity of non-trivial, real-world system interconnections is always far from O(N).
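
(A minimal sketch of what I mean, with hypothetical partners and document
types: even behind a single uniform entry point, one branch of semantics
per partner vocabulary comes back, and it grows with N:)

    def process_purchase_order(doc):
        return {"ack": doc["po_number"]}      # partner A's fields

    def process_order_request(doc):
        return {"ack": doc["request_id"]}     # partner B's fields

    HANDLERS = {
        "PurchaseOrder": process_purchase_order,
        "OrderRequest": process_order_request,
        # ... one entry per partner vocabulary
    }

    def handle(document):
        # the single "unified" interface
        return HANDLERS[document["type"]](document)

    print(handle({"type": "PurchaseOrder", "po_number": "123"}))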

I am also very skeptical that syntactic solutions (be they REST or Web services) can, by themselves, reduce system complexity. Standardized syntax allows one node to yell at another with the assurance that the second node will hear the noise. Making sense of that noise is a completely different matter.
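
(Again a small illustrative sketch: both messages below are well-formed
XML, so any node can "hear" them, but parsing says nothing about whether
they mean the same thing:)

    import xml.dom.minidom as minidom

    msg_a = "<price currency='USD'>100</price>"
    msg_b = "<price>100</price>"   # 100 what? dollars? euros? cents?

    for msg in (msg_a, msg_b):
        doc = minidom.parseString(msg)   # the syntax check succeeds for both
        print(doc.documentElement.getAttribute("currency") or "unspecified")
    # Agreeing on what "100" means happens outside the syntax entirely.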

As Mike mentioned before, the S-word (semantics, as opposed to the s-word, syntax) tends to rear its ugly head and to be the primary factor in determining complexity. In more than 25 years of dealing with computers, I have seen only two effective ways of reducing semantic complexity. One is to inject humans into the loop (sometimes in subtle ways, so that at first it seems like machines are doing all the work, while in reality they are not). The other is to reduce the scope of the problem domain to the point that you can "hard-wire" machines to deal with that domain without facing many "surprises" along the way (this is what happened with early AI, when machines seemed so intelligent because they were dealing with well-delimited domains, and then failed miserably when those domains were expanded). I am still hopeful that machines will get smarter, BTW, and I always watch with great interest things like the Semantic Web and ontologies, but I would not bet big on them yet.
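
(For what it's worth, here is a rough sketch of those two strategies
combined, with hypothetical document types: a machine hard-wired to a
narrow, well-delimited vocabulary, escalating every "surprise" to a human
queue:)

    KNOWN_DOC_TYPES = {"Invoice", "PurchaseOrder"}   # the narrow domain
    human_review_queue = []

    def process(document):
        if document.get("type") in KNOWN_DOC_TYPES:
            return "auto-processed " + document["type"]
        # outside the hard-wired domain: inject a human into the loop
        human_review_queue.append(document)
        return "queued for human review"

    print(process({"type": "Invoice"}))      # auto-processed Invoice
    print(process({"type": "ClaimForm"}))    # queued for human review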

If we pursue the second route, reducing semantic complexity by narrowing the scope of the problem domain, then efforts to standardize and promote reusable interfaces within vertical industry domains, as Anne mentioned before, are very relevant to this discussion.

Ugo
