
Re: FW: LC Comments: Web Method Feature

From: Amelia A Lewis <alewis@tibco.com>
Date: 09 Jul 2002 13:33:22 -0400
To: Paul Prescod <paul@prescod.net>
Cc: "'xml-dist-app@w3.org'" <xml-dist-app@w3.org>, gtn@rbii.com
Message-Id: <1026236002.24372.52.camel@xerom>

On Tue, 2002-07-09 at 10:30, Paul Prescod wrote:
> Amelia A Lewis wrote:
> > 
> >...
> > > The only reason there even existed a concept of an FTP to SMTP gateway
> > > fifteen years ago was because FTP and SMTP used different addressing
> > > models and different method names/semantics. If those were unified into
> > Horsefeathers.
> > 
> > FTP and SMTP have dramatically different semantics: synchronous versus
> > asynchronous, "push" versus "pull" (much as I dislike that distinction).
> You do not need separate wire protocols to get synchronous versus
> asynchronous behaviours or push versus pull. If you did, the *web could
> not exist* because people use it every day in synchronous and
> asynchronous ways, push ways and pull ways. 

I think we use different definitions of these terms.  If HTTP can be
regarded as an asynchronous protocol, then my definition of asynchronous
is quite faulty.

> It seems to me indisputable that if FTP and SMTP were the same protocol,

Surreal thought.

> there would be no need for a gateway. It is also *demonstrable* that
> (today) they could be the same protocol with no loss of expressiveness
> (quite the opposite, the unification would enrich both).


> > Adding URIs would have done zero good, at that point.  If you can't use
> > TCP/IP, or don't have an eight-bit clean connection, a library's worth
> > of URIs don't solve any problems.
> Okay, fair enough, there were other technical impediments at the time.
> That is not true today.

I think that, perhaps, you're overstating the ubiquity of communication.

> > Incidentally, URIs also typically suggest a strongly client-server
> > model, a pull model, and synchronous interactions.  All of those may be
> > good reasons to speculate on how to extend URIs, or what good addressing
> > semantics are for asynchronous, or push, or strongly peer-to-peer
> > models.
> First, URIs are as useful for push as for pull. Second, client-server is
> increasingly the description of a transaction, not a topology. Every day
> people use HTTP in widely-deployed, popular, peer-to-peer programs.
> First you're the server. Then I am. Similarly, SMTP->Web gateways

There is no such thing as an SMTP->Web gateway, to my knowledge.

There are a great many web->SMTP gateways, for sending mail.  There are
also a great many IMAP->web, POP->web, mbox->web, and even Maildir->web
solutions for retrieval of mail.

Calling this web->email might be acceptable, but calling it web->SMTP
elides important information needed in the design of the protocols.  If
you try to use HTTP, in a good RESTish fashion, for delivery of mail,
you're a bit more likely to put a message on a server on which you have
write permission, control access to it, and send the ACL and URL
(somehow ... no more mail to send it through, mind) to the person it is
intended for.  Likewise, instead of having a mailbox, you end up with a
collection of URLs to various other folks' servers.

This is the difference between leaving letters in your mailbox, for the
mailman to take and deliver (passing through all sorts of places on the
way) to some other mailbox, and leaving a letter on a bench in the park
and telling the friend it's intended for where to find it.
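The "letter on a bench" model sketches easily in code. The following is an illustrative toy, not a proposal: the sender PUTs the message to a server it has write access to, hands the URL to the recipient out of band, and the recipient GETs it when convenient. The server, paths, and message are all hypothetical.

```python
# Sketch of the "letter on a park bench" delivery model: the sender
# writes a message to a server it controls, then the recipient, told
# the URL out of band, pulls it later. Purely illustrative.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

messages = {}  # the sender's store: URL path -> message body

class BenchHandler(BaseHTTPRequestHandler):
    def do_PUT(self):
        length = int(self.headers["Content-Length"])
        messages[self.path] = self.rfile.read(length)
        self.send_response(201)  # Created: the letter is on the bench
        self.end_headers()

    def do_GET(self):
        body = messages.get(self.path)
        if body is None:
            self.send_response(404)
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *_):  # keep the sketch quiet
        pass

server = HTTPServer(("127.0.0.1", 0), BenchHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Sender: put the letter on the bench (a server it can write to).
put = http.client.HTTPConnection("127.0.0.1", port)
put.request("PUT", "/outbox/msg1", body=b"Meet me at noon.")
status = put.getresponse().status
put.close()

# Recipient: pulls the message whenever it likes.
get = http.client.HTTPConnection("127.0.0.1", port)
get.request("GET", "/outbox/msg1")
text = get.getresponse().read().decode()
get.close()
server.shutdown()
print(status, text)
```

Note that nothing here pushes anything to the recipient: both halves are pulls (or a pull plus a write to one's own server), which is exactly why this simulates the client end of email without making HTTP a store-and-forward protocol.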

The client end of email can be reasonably simulated by software
applications accessible via HTTP.  This does not replace the servers. 
It does not make HTTP asynchronous, in any sense.

> demonstrate that there is nothing intrinsically synchronous about URIs.
> So I don't follow what extensions URIs would need, or what you mean by
> "good addressing semantics." People even use HTTP ascynchronously every
> day 

I'll simply have to reiterate my initial comment, above.  I don't think
we have the same definition of asynchronous.  Mine doesn't include
request-response to a server as a requirement.
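The distinction being argued over can be sketched in a few lines. This is an illustrative toy, not a model of SMTP or HTTP themselves: in the store-and-forward (asynchronous) model the sender completes without the recipient being reachable at all, while in the request-response (synchronous) model the caller blocks until the other party answers.

```python
# Illustrative sketch of the sync/async distinction (not SMTP or HTTP).
# Asynchronous: the sender hands the message to a relay and is done;
# the recipient need not be listening yet. Synchronous: the caller
# blocks until the callee produces an answer.
import queue

relay = queue.Queue()  # stands in for a store-and-forward relay

def send_async(msg):
    relay.put(msg)       # sender returns immediately; no recipient needed
    return "accepted"    # queued for delivery, not yet delivered

def fetch_later():
    return relay.get_nowait()  # recipient drains its mailbox at leisure

def call_sync(handler, request):
    return handler(request)    # request-response: caller waits for the answer

# Asynchronous: submission and retrieval are decoupled in time.
ack = send_async("hello, whenever you get this")
later = fetch_later()

# Synchronous: the "server" must be there, and the caller blocks on it.
answer = call_sync(lambda req: req.upper(), "hello, right now")
print(ack, later, answer)
```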

> OSI terminology is pervasive in the networking industry. It is still the
> default reference stack. 

Umm, I don't think I understand what you mean.  It has never been the
default reference stack; it has long been the default description of
layering, partly because it was well-presented, partly because no one
has tried to explain why TCP/IP's simpler, more pragmatic semantics are
a better fit to reality.

Please note that TCP/IP layering corresponds strongly with both hardware
(at the bottom of the stack especially) and with network semantics.

Routers look at layer one and layer two, in order to throw away the old
layer one envelope and create a new one for the next hop, determined by
examining the layer two envelope (in OSI, look at layers one, two, and
three; possibly create new envelopes for layer two as well as layer
one).  Gateways, firewalls, proxies, caches, NATs and the like typically
examine layer three (TCP or UDP) to determine what to do (and may
examine some parts of layer two, and in some cases may look at bits of
layer four, but never look down to layer one, for instance).  A
corresponding OSI stack application would be examining primarily layers
three and four, with looks upward into five and seven, and downward to
two, but in the OSI model could reasonably look at absolutely anything,
and has to know much too much.

TCP and UDP have direct connections into the application layer: the
port number.  Applications may have access to the TCP/UDP layer, but
probably don't reach down to the IP layer.  Note that the only
widely-deployed (and widely-deplored) TCP/IP protocol that can be said
to live at OSI layer five is Sun RPC, which didn't originally use TCP,
instead reinventing reliable delivery semantics over UDP, so that the
concept of a session could be included.  I don't know of any protocol
that can reasonably be assigned to OSI layer six (although SSL/TLS fits
the description--encryption, encoding, compression--it is certainly
*not* that high in the stack), only the NVT abstraction (which is
arguably in OSI layer seven/internet protocol layer four).

Protocols in the TCP/IP stack fall into layer two (IP, ARP, ICMP),
layer three (TCP and UDP, also T/TCP and the more recent SCTP), and
layer four (applications), and the stack is designed with that in
mind--there isn't support for session ids in TCP/UDP, there isn't an
expectation that the port will indicate an intermediary presentation or
session protocol that itself indicates the application protocol; one
cannot *tie* session semantics to connection semantics because TCP
doesn't support it.
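The "direct connection into the application layer" point is visible in ordinary socket code. The sketch below (addresses and payload are illustrative) shows that the transport layer needs only a port number to hand data straight to the application that owns it, with no session or presentation protocol in between:

```python
# Sketch: in TCP/IP, the port number is the entire "upward"
# demultiplexing key from the transport layer to the application.
# A trivial echo application claims a port; a client reaches it
# with nothing more than (host, port).
import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))  # port 0: let the kernel pick a free port
server.listen(1)
port = server.getsockname()[1]  # the port *is* the application's address

def echo_once():
    conn, _ = server.accept()
    conn.sendall(conn.recv(1024))  # echo the bytes back unchanged
    conn.close()

threading.Thread(target=echo_once, daemon=True).start()

# Client: TCP delivers directly to whatever application owns the port;
# no intermediate layer-five or layer-six protocol is consulted.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"ping")
reply = client.recv(1024)
client.close()
server.close()
print(reply)
```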

The TCP/IP stack is designed for stupid networks with smart endpoints;
the OSI model was heavily influenced by telcos who imagined smart
networks with dumb terminals.  One of the unexpected consequences of a
network designed to be stupid is that it's *faster* and makes better use
of bandwidth, and the routers and such are cheaper (less complexity
means fewer widgets).

Sorry, proselytizing again, and I suspect most folks already know the
arguments anyway.

> I do not claim to know all of the history of
> the Internet. I do know that the assumptions of thirty years ago should
> not go unchallenged into the next millennium.

Then I'd suggest that the theoretical-only OSI stack be one of the
things to go first.  It isn't practically useful for protocol design,
and is misleading for those who don't take the time to examine *why*
there are only four layers in TCP/IP.

Amelia A. Lewis
Architect, TIBCO/Extensibility, Inc.
Received on Tuesday, 9 July 2002 13:33:47 GMT
