W3C home > Mailing lists > Public > www-ws-arch@w3.org > December 2002

RE: Does RM make a qualitative difference?

From: Assaf Arkin <arkin@intalio.com>
Date: Tue, 17 Dec 2002 14:01:15 -0800
To: "Mark Baker" <distobj@acm.org>, <www-ws-arch@w3.org>


I would agree with you that probably 99% of messages are transported
successfully, so we're not going to solve the reliability problem of the
Internet at large; we won't even make a measurable dent.

But for an individual service, that statistic makes no difference.

Message loss is a generic term that describes not just transport failure but
also the failure of a message to arrive within a specified time frame without
being modified along the way.

If a service experiences a technical difficulty that causes a significant
portion of its messages to be lost (e.g. delayed in transit or modified by
an attacker), then for that service it's a real problem, even if that
service falls into the unfortunate 1% category. In general, a service does
not want to be in the 1% category and will do anything it can to avoid it.

From a totally different perspective, if a service does have such a
mechanism, it may opt to use less reliable protocols with lower latency,
switching to more reliable protocols in the unexpected case that a message
is lost.

For example, the service may decide to use UDP (or even IP multicast)
instead of SMTP. Or it may send the original message using SMTP, but since
SMTP retry intervals can stretch over time to something like 10 hours, it
could speed up the resend process by switching to HTTP on the second
attempt.

So while it's not going to make much of a difference for the reliability of
the Internet as it stands, it will make a significant difference for each
service that is able to utilize this capability.


> -----Original Message-----
> From: www-ws-arch-request@w3.org [mailto:www-ws-arch-request@w3.org]On
> Behalf Of Mark Baker
> Sent: Tuesday, December 17, 2002 1:20 PM
> To: www-ws-arch@w3.org
> Subject: Does RM make a qualitative difference?
>
> In trying to solve the reliability problem with the approach currently
> being discussed (in contrast to the reliable coordination approach
> I've suggested), I wonder what we're accomplishing.
>
> Let's say that the Internet currently successfully transports 99% of
> messages.  It appears to me that what we're discussing is a solution
> that will just up that number to, say, 99.5%, at the cost of increased
> latency due to message retransmission (a reasonable trade-off in many
> cases).
>
> If that's correct, is it enough to actually make a qualitative
> difference to an application developer?  Or are they still going to have
> to deal with lost messages?  I believe it's the latter, which is why I
> suggest that our time would be best spent focusing on how to help
> application developers deal with reliable coordination.
>
> Thanks.
>
> MB
> --
> Mark Baker.   Ottawa, Ontario, CANADA.        http://www.markbaker.ca
> Web architecture consulting, technical reports, evaluation & analysis
Received on Tuesday, 17 December 2002 17:01:48 UTC

This archive was generated by hypermail 2.3.1 : Tuesday, 6 January 2015 21:41:01 UTC