RE: The deep difference between request/response and fire-and-forget

Not sure who you are ranting to?  The community at large looks at MEPs
as abstractions over all bindings, and assumes that SOAP magically
provides the silver bullet of transport independence.  Or at least,
that's what the ad campaign says.  We are all violently agreeing that
looking at SOAP as simply a perfect abstraction of all underlying
protocols is just wrong.  You correctly say that the silver-bullet
abstraction is being described incorrectly, and I totally agree.

 

Cheers,

Dave

 

________________________________

From: David Hull [mailto:dmh@tibco.com] 
Sent: Wednesday, January 25, 2006 7:59 AM
To: noah_mendelsohn@us.ibm.com
Cc: David Orchard; Rich Salz; xml-dist-app@w3.org
Subject: Re: The deep difference between request/response and
fire-and-forget

 





	To which I conclude this is yet another leaky abstraction. 
	    

 
Sure.  The point is not that abstractions shouldn't leak;  they 
necessarily leak to some degree, as Spolsky said when he set down the 
"law"[1].  

<rant>
For those not on the WSA list, here's my take on leaky abstractions:

*	Follow the instructions on the label.  If the abstraction you're
using is "a reliable connection with notification of failure," don't
pretend that the abstraction is "a connection which will never fail."
I've seen this example in the context of TCP.  If you use TCP and your
code breaks because you don't handle failures, TCP isn't leaking.  Your
code is broken.  (There's a small sketch of this just after the list.)
*	You can't effectively implement everything on top of everything.
C++ templates turn out to be Turing-complete, but if you try to
implement 32-bit addition by passing it off to a C++ compiler using some
unary-based template hack, you can expect poor performance (at least).
That's not because 32-bit addition is a leaky abstraction.  It works
fine, in constant time, on several different kinds of processors.  It's
because you tried to put an abstraction on top of the wrong
implementation.  (A rough analogue of this is sketched below as well.)
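
To make the first bullet concrete, here is a minimal sketch (Python,
with a placeholder host and port; nothing SOAP-specific) of treating TCP
as what it actually promises, a reliable stream with notification of
failure, rather than a connection that can never fail:

    import socket

    def send_record(host: str, port: int, payload: bytes) -> bool:
        # Use TCP as "a reliable connection with notification of failure":
        # the caller is told about failures instead of the code pretending
        # the connection can never fail.
        try:
            with socket.create_connection((host, port), timeout=5) as sock:
                sock.sendall(payload)
            return True
        except OSError as err:
            # Refused, reset, timed out, unreachable, ... -- TCP isn't
            # "leaking" here; the notification is part of what was promised.
            print(f"send failed: {err}")
            return False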

Both of these are simply mismatches between an abstraction and the
adjoining layer.
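
The C++ template version of the second bullet is best left to the
compiler, but here is a rough analogue (Python, with arbitrary operand
sizes chosen only for illustration) of what goes wrong when constant-time
addition is layered on top of a unary representation; the abstraction is
fine, the implementation underneath it is the wrong one:

    import timeit

    def to_unary(n: int) -> list:
        # The number n represented as n "marks" -- a unary encoding.
        return [1] * n

    def unary_add(a: list, b: list) -> list:
        # "Addition" is concatenation: work proportional to both operands.
        return a + b

    # Native addition runs in constant time; the unary version does
    # work linear in the size of its operands.
    print(timeit.timeit(lambda: 100_000 + 200_000, number=100))
    print(timeit.timeit(lambda: unary_add(to_unary(100_000),
                                          to_unary(200_000)), number=100))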

For completeness, I'll note that Bad Things can always happen.  Your RAM
chips could get zapped by a cosmic ray and produce a transient parity
error, the network could die, North Dakota State could beat Wisconsin
(OK, maybe not such a bad thing, but one would have thought it
unlikely), or whatever.  In that sense, the second bullet point could be
"You can't implement anything perfectly on top of anything," and thence
"all abstractions leak."

Fair enough, but the point is that by using abstractions appropriately,
you can limit the effects of Bad Things to the point where, if something
bad does happen, you've got bigger fish to fry.  If you see breakage in
anything less than a disaster, that's not because all abstractions leak;
it's because someone's misusing an abstraction somewhere.

Executive summary: Don't throw up your hands and say "all abstractions
leak, oh well."  Find the mismatches and fix them.
</rant>

I believe Noah makes much the same point below, albeit much more
civilly.

As I've said before [1], much of the present problem comes from trying
to overload a single abstraction (request-response or
request-optional-response, as the case may be) to cover everything.
Rather than trying to do that, let's define small, crisp abstractions
that capture the properties of the protocols we're using and build on
top of those.

[1] http://lists.w3.org/Archives/Public/xml-dist-app/2006Jan/0135.html



The point is that if your high level abstractions use your low level
services in the intended manner, it's less likely that the abstractions
will leak in a damaging way.  Patrick is pointing out that the low level
packet flows that underlie TCP and HTTP are optimized for the case where
HTTP is used in the intended manner, i.e., request/response.  By
properly separating Req/Resp from FAF, and using the layers in the
intended manner, we greatly reduce the likelihood of "leakage" from
low-level TCP packet flows, proxies, etc.
 
  

	I'm strongly against standardizing any MEP that can't be deployed
	on HTTP.  That would be very, very strange to standardize an MEP
	and not standardize any bindings for that MEP.  It doesn't pass
	the giggle test at all.

 
I think you're mixing two things:
 
1) Should all MEPs be intended for use with HTTP?
 
Absolutely not.  In fact, the whole reason for MEPs is that SOAP is to
be usable over a broad range of "transports", and not all of them will
comfortably support all MEPs.  However, if we can agree that two or more
transports support a one-way FAF, for example, then the chances are
pretty good that the same apps will run on those transports.  So, the
whole purpose of MEPs is to have different MEPs supported on different
bindings, and there's no reason at all from that perspective that HTTP
should support one-way.  Of course, if you have business reasons for
wanting to support one-way on HTTP, that's different.  The discussion in
this thread suggests you can do it, but only insofar as you are willing
to have the far end reply with a no-content 202 or 204 message, and have
the client spin off a thread or use some other means of properly
receiving it, so that low-level error traffic doesn't confuse proxies,
etc.
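
A minimal client-side sketch of that pattern (Python standard library
only; the content type and the caller's host and path are placeholders)
might look like this; the response, even an empty 202/204, still gets
read, just not on the caller's thread:

    import http.client
    import threading

    def fire_and_forget(host: str, path: str, body: bytes) -> None:
        # Send the one-way message as an ordinary POST.
        conn = http.client.HTTPConnection(host)
        conn.request("POST", path, body,
                     {"Content-Type": "application/soap+xml"})

        def drain() -> None:
            # Expect an empty 202 Accepted or 204 No Content; read it so
            # the HTTP exchange completes cleanly instead of being torn
            # down in a way that confuses proxies.
            resp = conn.getresponse()
            resp.read()
            conn.close()

        # The caller doesn't wait; a background thread finishes the exchange.
        threading.Thread(target=drain, daemon=True).start()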
 
2) Should we define an MEP before there's at least one binding spec'd
to use it?
 
Perhaps not.  I think that's why we didn't do one-way in the first
version of SOAP 1.2.  David Hull and perhaps others are making the case
that it will so obviously be useful to the community that we should put
the MEP spec out there.  Either way is fine with me.  I think it's clear
that in the particular case of one-way FAF we know the desired MEP
semantics well enough to risk spec'ing it without doing a binding,
should we wish to.
 
  

	Another interesting related question: If it's illegal to close
	without reading the return HTTP response, does that mean that an
	HTTP intermediary MUST wait for the next node's response to
	faithfully pass back?

 
I might need to think more about it, but my initial reaction is: yes,
HTTP is request/response.
 
  

	Imagine the intermediary closes with 202, but the next node
	responds with 200 and a body.  If it was legal to close without
	reading, then an intermediary could interpret the close as
	signaling that it could also close after sending.

 
I could be wrong, but my intuition is that when an HTTP proxy responds
on behalf of a server, it typically does not also send the request on
down the second hop.  So, again, it would be a misuse of the HTTP model
to even pass the message along to the "next node", I would think.
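
For contrast, a one-way relay that did want to pass the message along
would look less like an HTTP proxy and more like the following sketch
(Python standard library; the next-hop host and the listening port are
made up).  The first hop is finished with an empty 202 before the second
hop, a completely separate request/response exchange, is even started:

    import http.client
    import threading
    from http.server import BaseHTTPRequestHandler, HTTPServer

    NEXT_HOP = "next-node.example"      # placeholder for the next SOAP node

    def relay(body: bytes) -> None:
        # Second hop: a new, independent HTTP request/response exchange.
        conn = http.client.HTTPConnection(NEXT_HOP)
        conn.request("POST", "/", body,
                     {"Content-Type": "application/soap+xml"})
        conn.getresponse().read()       # the next node's reply stops here
        conn.close()

    class OneWayRelay(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            body = self.rfile.read(length)
            # First hop ends here: acknowledge with an empty 202.
            self.send_response(202)
            self.send_header("Content-Length", "0")
            self.end_headers()
            threading.Thread(target=relay, args=(body,), daemon=True).start()

    if __name__ == "__main__":
        HTTPServer(("", 8080), OneWayRelay).serve_forever()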
 
All these are exposing reasons why req/resp is different than one-way,
and why I think they are best kept separate.
 
Noah
 
[1] http://lists.w3.org/Archives/Public/xml-dist-app/2006Jan/0139.html
 
--------------------------------------
Noah Mendelsohn 
IBM Corporation
One Rogers Street
Cambridge, MA 02142
1-617-693-4036
--------------------------------------
Received on Thursday, 26 January 2006 23:27:34 UTC