- From: Mark Baker <distobj@acm.org>
- Date: Tue, 17 Dec 2002 16:19:35 -0500
- To: www-ws-arch@w3.org
In trying to solve the reliability problem with the approach currently being discussed (in contrast to the reliable coordination approach I've suggested), I wonder what we're accomplishing.

Let's say that the Internet currently successfully transports 99% of messages. It appears to me that what we're discussing is a solution that will just up that number to, say, 99.5%, at the cost of increased latency due to message retransmission (a reasonable trade-off in many cases).

If that's correct, is it enough to actually make a qualitative difference to an application developer? Or are they still going to have to deal with lost messages? I believe it's the latter, which is why I suggest that our time would be best spent focusing on how to help application developers deal with reliable coordination.

Thanks.

MB
--
Mark Baker.   Ottawa, Ontario, CANADA.        http://www.markbaker.ca
Web architecture consulting, technical reports, evaluation & analysis
Received on Tuesday, 17 December 2002 16:14:28 UTC