Re: HTTP vs. SOAP intermediaries

>
>  So it can know what the client was trying to do, the
>metadata about the payload, and whether or not what was attempted
>succeeded or not.

Sure, but it's a strictly limited definition of what succeeded.
Take an example: an application submits a SOAP request to purchase
something.

* The server responds with a fault saying it doesn't sell that something.
   HTTP 5xx; the HTTP intermediary concludes the purchase was not approved.

* The server responds with a fault saying the something was out of stock.
   HTTP 5xx; the HTTP intermediary concludes the purchase was not approved.

* The server responds with a return value of false, with a message
   explaining why ("the item was out of stock"), *in the SOAP body*.
   HTTP 200; the HTTP intermediary concludes the purchase was approved.
   (See the sketch just below.)
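
To make the gap concrete, here's a rough sketch (Python, with made-up
envelope contents and element names such as purchaseResponse) of what
the intermediary sees in each case versus what the application meant:

# The status line is all the HTTP intermediary looks at; the real
# outcome of the third case is buried inside the SOAP body.

FAULT_NO_SUCH_ITEM = (500, """<soap:Envelope><soap:Body><soap:Fault>
  <faultstring>we do not sell that item</faultstring>
</soap:Fault></soap:Body></soap:Envelope>""")

FAULT_OUT_OF_STOCK = (500, """<soap:Envelope><soap:Body><soap:Fault>
  <faultstring>the item was out of stock</faultstring>
</soap:Fault></soap:Body></soap:Envelope>""")

RESULT_FALSE = (200, """<soap:Envelope><soap:Body>
  <purchaseResponse>
    <approved>false</approved>
    <reason>the item was out of stock</reason>
  </purchaseResponse>
</soap:Body></soap:Envelope>""")

def intermediary_view(status, body):
    # The intermediary never parses the body; any 2xx means "it worked".
    return "purchase approved" if 200 <= status < 300 else "purchase not approved"

for status, body in (FAULT_NO_SUCH_ITEM, FAULT_OUT_OF_STOCK, RESULT_FALSE):
    print(status, "->", intermediary_view(status, body))

# The last line prints "200 -> purchase approved", even though the
# application-level answer inside the body is approved = false.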

>An HTTP intermediary doesn't have a total *lack* of knowledge, it can
>have a near total *understanding* of what is going on.  See Roy's blurb
>above.  I couldn't explain it any better than that.

In my case above, it should be clear that while the HTTP
intermediary might have some pretense of knowing what is going on,
that pretense will be false unless the metadata about the process
model is also made clear to the HTTP intermediary, and it is able
to read the packets.

Of course, one way to create that metadata is to mandate in the
SOAP specification that the server should not respond with a
200 OK unless the purchase was approved. That'd be a fun task!
And if we are going to make that ruling, then we should make it
for the web too: a server cannot return a 200 OK unless it is
actually accepting the intent of the content, not just the content
itself.
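
For what it's worth, that hypothetical mandate would amount to something
like the sketch below (Python; the 409 for a declined purchase is my own
arbitrary pick for illustration, not anything any spec says):

# Derive the transport status code from the application outcome,
# instead of returning a blanket 200.

def status_for_outcome(approved, processing_failed=False):
    if processing_failed:
        return 500   # the server could not process the request at all
    if approved:
        return 200   # the intent of the request was accepted
    return 409       # processed cleanly, but the purchase was declined

assert status_for_outcome(approved=True) == 200
assert status_for_outcome(approved=False) == 409
assert status_for_outcome(approved=False, processing_failed=True) == 500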

I think this is a relevant issue. In classic HTML+HTTP, the server
responds with a 500 if some delivery issue prevented the script
(or other content handler) from accepting the content request. But
if the script runs, then the script is most likely to return a 200
OK whatever the outcome, unless the script wishes the user-agent to
behave in a particular fashion. To me, this sounds a lot like the
tunneling model for SOAP.
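
The pattern is easy to reproduce: once the web server manages to run
the script at all, the script itself decides the status, and nearly
every script just says 200. A toy CGI-style sketch (Python, with a
hypothetical stock check):

#!/usr/bin/env python
# Toy CGI-style handler: if this script runs at all, the user-agent
# gets a 200 OK, whether or not the purchase it reports on succeeded.

def handle_purchase(item):
    in_stock = False   # hypothetical stock check
    if in_stock:
        return "<p>Thanks, your purchase was approved.</p>"
    return "<p>Sorry, %s is out of stock.</p>" % item

print("Status: 200 OK")
print("Content-Type: text/html")
print("")
print(handle_purchase("that item"))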

As a web developer, I *hate* caches, because they act as a wild card
between the user-agent and my process. There are lots of reasons for
this, but one of the drivers is that the intermediaries don't
understand the metamodel of the exchange running across them (or,
alternatively, that the user-agent and the cache have different
metamodels driven by different agendas). (Caches are a subset of
intermediaries.)
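
Part of the problem is that a cache's whole decision procedure runs
off a handful of HTTP freshness headers; nothing about my application's
process model can reach it. Roughly (a much simplified sketch, nowhere
near the full RFC 2616 rules):

import time

# Simplified sketch of a cache's freshness check: driven entirely by
# headers, with no view of the application state behind the URL.

def is_fresh(cached_at, headers):
    cc = headers.get("Cache-Control", "")
    if "no-store" in cc or "no-cache" in cc:
        return False
    for directive in cc.split(","):
        directive = directive.strip()
        if directive.startswith("max-age="):
            max_age = int(directive.split("=", 1)[1])
            return (time.time() - cached_at) < max_age
    return False   # no freshness information: revalidate with the origin

# The cache will happily replay a stored "purchase approved" page for
# another 30 seconds; it has no idea whether the order still stands.
print(is_fresh(time.time() - 30, {"Cache-Control": "max-age=60"}))   # True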

Can you nominate a different case, where the HTTP intermediary
can accurately infer what is going on without inspecting the
SOAP packet itself, and understanding the interaction model
that is driving the exchange?

Grahame
