- From: Champion, Mike <Mike.Champion@SoftwareAG-USA.com>
- Date: Mon, 22 Jul 2002 16:49:39 -0600
- To: www-ws-arch@w3.org
> -----Original Message-----
> From: Newcomer, Eric [mailto:Eric.Newcomer@iona.com]
> Sent: Monday, July 22, 2002 6:25 PM
> To: Mark Baker
> Cc: www-ws-arch@w3.org
> Subject: RE: Generic/specific connectors
>
> Well, in that case you may improve interoperability, but you
> push more of the burden for processing onto the application
> level. Meaning whatever is behind the generic interface now
> has much more work to do to figure out what type of message
> it's receiving, where it should go, etc.

This does seem to be the crux of the debate. The REST/Web camp advocates
applications taking responsibility for many of the details of state
management, notification, reliability, security, etc. The other side
(OMA / GXA / whatever) believes that the infrastructure should handle as
many of these details as possible. Would you agree, Mark?

> If I have a program that I want to connect to another
> program, I'm going to want to know something about the other
> program, how it fits into the application I'm building, what
> purpose the other program serves, what data I want to send it
> and what I expect to receive in return. I can't build an
> application using generic interfaces and know what I'm going
> to get out of it.

I've been wondering about this too. In the HUMAN-CENTRIC hypertext Web,
i.e. the Web we know and love, this is not much of an issue, because a
human can look in a directory or search engine, follow "interesting"
links, look at the text, and either use it or ignore it. Do we need a
lot more "semantic" infrastructure before this works reasonably well for
program-to-program communication?
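To make the tradeoff concrete, here is a minimal, purely illustrative Python
sketch (all names -- PurchaseOrder, OrderService, GenericEndpoint -- are
hypothetical, not from any spec). With a specific interface the message type
is part of the contract; behind a generic interface the application itself
must inspect each message to figure out what type it is and where it should
go, which is the extra burden Eric describes:

```python
# Hypothetical sketch: specific vs. generic connector styles.
from dataclasses import dataclass


# --- Specific interface: the contract lives in the typed operation. ---
@dataclass
class PurchaseOrder:
    item: str
    quantity: int


class OrderService:
    def submit_order(self, order: PurchaseOrder) -> str:
        """Caller and service agree in advance on the message shape."""
        return f"accepted {order.quantity} x {order.item}"


# --- Generic interface: one uniform operation for every message. ---
class GenericEndpoint:
    def receive(self, message: dict) -> str:
        """The application must examine each message to learn its type
        and decide where it should go -- work the infrastructure would
        do for it under a specific interface."""
        kind = message.get("type")
        if kind == "purchase-order":
            order = PurchaseOrder(message["item"], message["quantity"])
            return OrderService().submit_order(order)
        elif kind == "status-query":
            return "status: pending"
        else:
            return "unrecognized message type"


endpoint = GenericEndpoint()
print(endpoint.receive(
    {"type": "purchase-order", "item": "widget", "quantity": 3}))
print(endpoint.receive({"type": "status-query"}))
```

The generic endpoint interoperates with anything that can send it a dict,
but every bit of dispatching and validation moves into application code --
which is one way of reading the REST-camp position.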
Received on Monday, 22 July 2002 18:50:56 UTC