Re: The deep difference between request/response and fire-and-forget

From: <noah_mendelsohn@us.ibm.com>
Date: Thu, 12 Jan 2006 18:18:27 -0500
To: Mark Baker <distobj@acm.org>
Cc: xml-dist-app@w3.org
Message-ID: <OFF2AA89DB.16C08593-ON852570F4.007FB8DE-852570F4.008036EB@lotus.com>

Mark Baker writes:

> In the case where the protocol supports optional responses, 
> that won't be the case.

I think you're missing my point.  If you're assuming that both the client 
and the "server" know in advance that the response is optional, then I 
agree.  The situation I was modeling was one in which the decision to 
respond was made by the server, which I think is what we've been 
discussing.  In that case, the client needs to know what to do.  I agree 
that there are pathological cases in which the client doesn't care about 
any possible response, but I'm modeling the case where the client wants to 
get the response if there is one. 

With those assumptions, the client must indeed do something resembling a 
wait to hang around for the response.  If a response is not coming, then 
the protocol must provide some means of letting the client know that.  The 
signal might be an explicit one, such as connection close packets on the 
wire, or an implicit one, such as a failure to send keep-alives (i.e., 
triggering a timeout).  Again, I'm not speaking specifically of TCP, but 
generically.  I don't think streaming or asynchrony changes any of this. 
If the client wants to receive responses, then it needs a signal of some 
sort that no response is coming.  If the server is making the decision, then 
the server must do something to kick the protocol in a way that will 
eventually cause the client to correctly conclude that waiting for a 
response is futile.
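[A minimal sketch of the pattern described above, not from the original post: a client awaits an *optional* response, treating an explicit connection close or an implicit timeout as the "no response is coming" signal. The socket setup and function name are illustrative assumptions, using TCP-style sockets only as a stand-in for a generic transport.]

```python
import socket

def await_optional_response(sock, timeout=2.0):
    """Wait for a response the server may or may not choose to send."""
    sock.settimeout(timeout)
    try:
        data = sock.recv(4096)
    except socket.timeout:
        return None          # implicit signal: nothing arrived before the timeout
    if not data:
        return None          # explicit signal: the peer closed the connection
    return data              # the server decided to respond

# Demonstration with a local socket pair standing in for client and server.
client, server = socket.socketpair()
server.sendall(b"response")              # case 1: server chooses to respond
assert await_optional_response(client) == b"response"

server.close()                           # case 2: explicit close, no response
assert await_optional_response(client) is None
```

Either way, the server (or the transport acting on its behalf) must "kick the protocol" so the client's wait terminates rather than blocking forever.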

Noah Mendelsohn 
IBM Corporation
One Rogers Street
Cambridge, MA 02142
Received on Thursday, 12 January 2006 23:18:42 UTC