Re: FW: LC Comments: Web Method Feature

Sorry for the delay. I'm not sure this is the best forum to discuss all
of this but people tend not to want to move to rest-discuss so I guess
I'll continue here. If you have an alternate forum you prefer, I'll be
glad to move there.

Amelia A Lewis wrote:
> > You do not need separate wire protocols to get synchronous versus
> > asynchronous behaviours or push versus pull. If you did, the *web could
> > not exist* because people use it every day in synchronous and
> > asynchronous ways, push ways and pull ways.
> I think we use different definitions of these terms.  If HTTP can be
> regarded as an asynchronous protocol, then my definition of asynchronous
> is quite faulty.

I didn't say that HTTP is an asynchronous protocol. I said that HTTP can
be used to get asynchronous behaviour. But the same is just as true for
SMTP. After all, my MUA makes a TCP connection to an SMTP server
and they actually chat back and forth quite a bit (probably more than
they would in a REST model). So I claim that neither HTTP nor SMTP is
truly an asynchronous protocol. Both are synchronous protocols that can
be used in store-and-forward systems (which is often what people mean by
"asynchronous" anyway).

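The point above can be sketched in a few lines. This is a hypothetical
illustration (the `Mailbox`, `post`, and `poll` names are invented, not
any real API): each call is synchronous request/response, yet the system
as a whole behaves asynchronously because sender and receiver never talk
to each other directly -- only to the store-and-forward drop box.

```python
class Mailbox:
    """A store-and-forward drop box, like an SMTP spool or an HTTP inbox."""
    def __init__(self):
        self._messages = []

    def post(self, message):
        # Synchronous call: returns as soon as the message is stored,
        # not when it is read -- the sender never waits on the receiver.
        self._messages.append(message)
        return "202 Accepted"

    def poll(self):
        # Synchronous call: the receiver pulls at its own convenience.
        return self._messages.pop(0) if self._messages else None


box = Mailbox()
box.post("hello")    # sender finishes immediately
# ... arbitrary time passes; the receiver need not even be running ...
print(box.poll())    # -> "hello"
print(box.poll())    # -> None (nothing waiting)
```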
> > First you're the server. Then I am. Similarly, SMTP->Web gateways
> There is no such thing as an SMTP->Web gateway, within my knowledge.

I didn't mean for this one phrase to launch a sub-discussion, but if I
send mail to a server which then mails me back a Web document, isn't
that an SMTP->Web gateway? I understand that these are used to get
around firewalls and censorship. I've had full TCP connectivity since
the invention of the Web so I've never had to use one myself.

> Calling this web->email might be acceptable, but calling it web->SMTP
> elides important information needed in the design of the protocols.  If
> you try to use HTTP, in a good RESTish fashion, for delivery of mail,
> you're a bit more likely to put a message on a server on which you have
> write permission, control access to it, and send the ACL and URL
> (somehow ... no more mail to send it through, mind) to the person it is
> intended for.  Likewise, instead of having a mailbox, you end up with a
> collection of URLs to various other folks servers.

Sure, you could do it in this way. Or you could POST directly to someone
else's mailbox to be more SMTP-like. I prefer the former because it
allows the receiver to decide whether to download (e.g. large documents,
or recognizable spam). I wouldn't say that one solution is more REST-y
than the other. They do have different tradeoffs.

> The client end of email can be reasonably simulated by software
> applications accessible via HTTP.  This does not replace the servers.

Of course not. But we're talking about the protocols, not the software.
Sendmail would still be Sendmail if it spoke HTTP, except that it would
probably be an Apache extension rather than a standalone app.

How does this all relate to xml-dist-app? Well, there are basically
three different models being discussed either implicitly or explicitly:

 * the standard Internet model is that there are only a few widely
deployed "resource manipulation protocols" (I'll avoid the word
application protocols). Resource manipulation protocols move
representations of identifiable information resources from place to
place around the network. SMTP, FTP, NNTP and HTTP count; TCP, UDP and telnet
do not. These protocols are relatively application-specific but not
data-type specific.

 * the SOAP/WSDL model is that there will be many such protocols, and
they will usually be data-type specific (getStockQuote,
getPurchaseOrder, getGoogleSearchResult) therefore we need a "framework"
to make the creation of them easier.

 * the REST model is that a single protocol (or a small suite of
protocols) can be data-type agnostic and can support multiple
message-exchange patterns merely through conventions of use. This is
demonstrated by the fact that millions of people already use HTTP
(sometimes tunnelling, but not always) for asynchronous message
sending, peer-to-peer chatting, travel reservation and almost everything
else imaginable.
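The contrast between the second and third models can be made concrete.
This sketch is purely illustrative (the URLs, data, and function bodies
are invented; `getStockQuote` and `getPurchaseOrder` echo the method
names above): the RPC style mints a new, data-type-specific method for
every kind of resource, while the REST style needs only one generic
method, because the URL carries the rest of the information.

```python
resources = {
    "http://example.org/quote/ACME": {"symbol": "ACME", "price": 42.0},
    "http://example.org/po/1234":    {"po": 1234, "total": 99.95},
}

# SOAP/WSDL style: one data-type-specific method per resource kind.
def getStockQuote(symbol):
    return resources[f"http://example.org/quote/{symbol}"]

def getPurchaseOrder(number):
    return resources[f"http://example.org/po/{number}"]

# REST style: a single data-type-agnostic method; addressing does the work.
def GET(url):
    return resources[url]

# Both styles reach the same data, but only GET generalizes to the
# next data type without a new method being defined and deployed.
assert getStockQuote("ACME") == GET("http://example.org/quote/ACME")
assert getPurchaseOrder(1234) == GET("http://example.org/po/1234")
```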

Nobody is claiming that HTTP is the last protocol. I'm just claiming
that new protocols should do at least as much as HTTP does (in terms of
standardizing address spaces and method semantics etc.). It isn't
helpful to go back to the pre-HTTP days where every application had its
own unique addressing model and message exchange semantics. The world
where SMTP/FTP/IM/Web were artificially separated was bad enough, but
the gateways between them were manageable (if less fully functional than
I'd like). But now we are faced with a world where every industry
creates its own mutually-incompatible protocols with their own
addressing schemes and methods. The gateway approach will not scale well
to this level of incompatibility.

> > OSI terminology is pervasive in the networking industry. It is still the
> > default reference stack.
> Umm, I don't think I understand what you mean.  It has never been the
> default reference stack; it has long been the default description of
> layering, partly because it was well-presented, partly because no one
> has tried to explain why TCP/IP's simpler, more pragmatic semantics are
> a better fit to reality.

Fine, OSI is still the default *descriptive* stack for networking
reference texts and discussions. In other words, people do not say
"Layer 7" because they think that the Internet has seven layers. They
say "Layer 7" because that's the way to get networking people to
understand that you are talking about the application layer.

Come discuss XML and REST web services at:
  Open Source Conference: July 22-26, 2002,
  Extreme Markup: Aug 4-9, 2002,

Received on Monday, 15 July 2002 18:08:13 UTC