- From: Walden Mathews <waldenm@optonline.net>
- Date: Fri, 16 May 2003 13:18:54 -0400
- To: Ugo Corda <UCorda@SeeBeyond.com>, www-ws-arch@w3.org
> Well, reliability goes beyond acknowledgement.

In the final analysis, it doesn't. It (RM) tries a little harder, but it
relies on the same simple technique. And in the end, an audit (end-to-end
state comparison) is the only truth.

> In the HTTP case, I can tell if my request/response failed if I don't
> receive a response within the HTTP timeout time. But what if the HTTP
> receiver got the message but simply was not able to respond? If the
> request is not idempotent (e.g. increase a bank account by $1000), I
> cannot just resend the original request without worrying about "once
> and only once" semantics.

That's true, but you could pretend you were the auditor, and cut to the
chase. It could be argued that a good distributed object design would
follow that principle. It could even be an architectural principle, not
just a good practice.

> So HTTP itself needs something more for reliability, and that's why in
> the past things like HTTP-R were defined.

How widely is HTTP-R deployed? If it's successful, why haven't I heard
about it? If it's not successful, why would the next generation of the
same saw be? (What would be different?)

Walden
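[The "be your own auditor" idea above can be sketched in code. This is a
hypothetical illustration, not anything from the thread: the client tags each
non-idempotent request with a unique id and, after a timeout, issues an
idempotent state query to learn whether the request actually took effect
before deciding to resend. The names (FlakyBank, credit, was_applied,
credit_once) are invented for the sketch.]

```python
import uuid

class FlakyBank:
    """Toy server: applies a credit but may 'lose' the response."""
    def __init__(self):
        self.balance = 0
        self.applied_ids = set()
        self.drop_next_response = False

    def credit(self, amount, request_id):
        if request_id in self.applied_ids:       # duplicate: ignore
            return self.balance
        self.balance += amount
        self.applied_ids.add(request_id)
        if self.drop_next_response:
            self.drop_next_response = False
            raise TimeoutError("response lost")  # state changed anyway!
        return self.balance

    def was_applied(self, request_id):
        """Idempotent audit query: did this request take effect?"""
        return request_id in self.applied_ids

def credit_once(bank, amount):
    """Once-and-only-once by auditing state before any resend."""
    request_id = str(uuid.uuid4())
    try:
        return bank.credit(amount, request_id)
    except TimeoutError:
        if bank.was_applied(request_id):         # audit, don't guess
            return bank.balance
        return bank.credit(amount, request_id)   # safe to retry

bank = FlakyBank()
bank.drop_next_response = True
credit_once(bank, 1000)
print(bank.balance)  # credited exactly once: 1000
```

[The end-to-end state comparison (was_applied) does the work a
reliable-messaging layer would otherwise attempt with acknowledgements.]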
Received on Friday, 16 May 2003 13:14:36 UTC