Re: section 1, intro, for review

This is one of the posts on this topic that has got me thinking (often a challenge).
I'd love to be able to address anything via a URI - we have just gone through the hassle of defining portal security GUIDs to link to Active Directory and then HTTP URIs for web resources when, for me, the GUID is really just another URI identifying a resource. That is one example!
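
To make that concrete, here is the sort of mapping I have in mind - a rough sketch in Python, where the GUID value and the portal host name are both invented:

    import uuid

    # A hypothetical portal security GUID (the value here is invented).
    guid = uuid.UUID("6fa459ea-ee8a-3ca4-894e-db77e160355e")

    # The same thing named two ways; in both cases it is just a URI.
    as_urn = guid.urn   # "urn:uuid:6fa459ea-ee8a-3ca4-894e-db77e160355e"
    as_http = "http://portal.example.com/users/%s" % guid   # invented HTTP form

    print(as_urn)
    print(as_http)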

The only part that worries me is how a "URI" would be interpreted. It's easy when it's mailto or http, as the consuming clients know how to process those, but what about uri:someguid? Sure, in the DCOM/CORBA world we have IDL, but in the URI world we don't (unless RDF were somehow brought in at this point, which it probably should be, but then we move into the world of WSDL). I then start to blur the world of web services with what a URI is for... it is in this area that I could see the shared-memory idea applying.
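
What I mean by "the consuming clients know how to process those" is really just dispatch on the scheme. A minimal sketch of my own (the handlers are invented):

    from urllib.parse import urlsplit

    # A client can only act on schemes it already understands.
    handlers = {
        "http": lambda uri: "dereference with an HTTP GET: " + uri,
        "mailto": lambda uri: "open a mail composer for: " + uri,
    }

    def consume(uri):
        scheme = urlsplit(uri).scheme
        if scheme in handlers:
            return handlers[scheme](uri)
        # The URI still identifies something, but there is no agreed way
        # for the client to interact with it.
        return "identified, but no interaction defined for: " + uri

    print(consume("http://deltabis.com/steven"))
    print(consume("mailto:steven@example.com"))
    print(consume("uri:someguid"))   # the problem case from above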

That of course assumes that a URI provides something more than semantic information; in most of my cases a URI identifies an actual resource.

cheers,
Steven
http://deltabis.com/steven

----- Original Message ----- 
From: "Roy T. Fielding" <fielding@apache.org>
To: <noah_mendelsohn@us.ibm.com>
Cc: "Paul Prescod" <paul@prescod.net>; <www-tag@w3.org>
Sent: Tuesday, March 19, 2002 6:49 AM
Subject: Re: section 1, intro, for review


> This response is in general to the discussion, even though I only reference
> one prior message.
> 
> > I don't think that proves that a shared memory (REST) is the right model 
> > for all the resources we may reasonable want to integrate into the web. At 
> > best it leaves the question open.
> 
> I am unclear as to where you got the idea that REST is a shared memory.
> It defines an architectural style for building applications in which
> components interoperate via a standardized interface.  What is on the
> other side of that interface is unconstrained.
> 
> > Furthermore, I think we need to admit that even when REST is applied, 
> > there are two different sorts of cases.  I would compare these to 
> > Load/Store into the memory of a computer, and Load/Store into the I/O 
> > space. 
> > 
> > I can generally store into the memory of a computer, or into the memory of
> > the web (e.g. PUTting an HTML document).  I can then do a load (Get) from
> > the same address (URI) and retrieve what I put in.  There's not much else
> > to say about it.  A pure memory.
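
The "pure memory" case above is easy enough for me to picture as code - a minimal sketch on my part, assuming a hypothetical server that allows PUT:

    import http.client

    conn = http.client.HTTPConnection("example.com")

    # Store: PUT a representation at a URI...
    conn.request("PUT", "/notes/hello.html", body="<p>hello</p>",
                 headers={"Content-Type": "text/html"})
    conn.getresponse().read()

    # ...then Load: GET the same URI and get back what was put in.
    conn.request("GET", "/notes/hello.html")
    print(conn.getresponse().read())

    conn.close()
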
> 
> Do you honestly think that the Web is so limited?  How do you account for
> the web-based control interfaces in routers, refrigerators, air conditioning,
> etc.?  They exist already.  Check out your nearest IBM innovation center.
> 
> > An I/O bus is trickier.  At one level, it looks like the memory: 
> > Load/Store.   However, if I store into the right location in the I/O 
> > space, magic happens.  The CD ROM drive spins faster.  Maybe if I load 
> > from the same location I get back in indicator of the new speed, or maybe 
> > not.  Modeling the disk controller as load/store, I get to use lots of 
> > hardware and software mechanisms that are common with the real memory 
> > example.  That's why we try to use REST where we reasonably can on the 
> > web;  common abstractions, shared implementation mechanisms, and to some 
> > degree we can reason about the persistent state of the web.  BUT:  in 
> > another sense, there's something very different going on with the CDROM. 
> > The semantically interesting specification for the spinning disk is not 
> > "load/store", it's "Set speed", "Seek", "Eject", or whatever. Behavior is 
> > encoded in the addresses.    This is very, very different from a world of 
> > shared memories or web documents.
> 
> No, it is all just content.  It is possible to define systems using either
> data integration or control integration, but usually there is some
> combination of both.  REST simply limits the control integration to
> common semantics for all resources.  It is up to the application developer
> to lay out their resource space such that it matches the intuitive
> controls of a CD ROM (or whatever) and to match that with standard data
> forms that can be understood when exchanged.  It usually takes folks a
> few tries before they get it right (information design is hard).
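
If I read this right, the CD ROM example would come out as resources plus representations rather than as new methods. My own rough sketch of such a layout (the device host, URIs and representations are all invented):

    import http.client

    # Hypothetical device with an embedded HTTP server: behaviour is
    # expressed as the state of resources, not as object-specific methods.
    drive = http.client.HTTPConnection("cdrom.example.com")

    # "Set speed" becomes: replace the state of the /drive/speed resource.
    drive.request("PUT", "/drive/speed", body="8",
                  headers={"Content-Type": "text/plain"})
    drive.getresponse().read()

    # "What speed is it now?" becomes: GET the same resource.
    drive.request("GET", "/drive/speed")
    print(drive.getresponse().read())   # e.g. b"8" if the drive accepted it

    # "Eject" becomes: replace the state of the /drive/tray resource.
    drive.request("PUT", "/drive/tray", body="open",
                  headers={"Content-Type": "text/plain"})
    drive.getresponse().read()

    drive.close()
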
> 
> Applying REST does not result in an ideal interface for all forms of behavior.
> It simply results in a common interface that is very flexible and very close
> to ideal for one common form of behavior.
> 
> > I think this teaches us that when REST is used to manage memory-like 
> > resources on the web, we can get by with that one level of description. 
> > When REST is used for other resources, the higher level semantics are at 
> > best tunneled through a  Load/Store abstraction.  Sometimes that tunneling 
> > is worth doing, sometimes it's better to admit that the resource is not 
> > memory-like, and use a protocol more directly appropriate to the resource. 
> >  I think we should at least leave that option open in the web 
> > architecture. 
> 
> I don't.  The reason we have a Web architecture is the same reason that we
> have building architecture: so that the properties that we want our system
> to have remain in force throughout its implementation lifetime.
> 
> <rant>
> 
> The simple fact of the matter is that object-specific interfaces do not
> work across multiple organizational domains, no matter how many standards
> groups are convinced to standardize their cataloging and versioning.
> That is a lesson that should have been learned from all of the previous
> attempts to move DCOM and CORBA-like services onto the Internet.
> I think any such "improvements" should be required to demonstrate their
> effectiveness in practice, over a suitable length of time, before they
> can be considered as legitimate extensions to Web architecture.
> 
> I think the Web works because it prevents object-specific attributes from
> defining the interface to the system.  Therefore, allowing such attributes
> to become part of the Web architecture, simply because a marketing campaign
> deems them to be so, is not only foolish but is contrary to the future
> well-being of the system we have worked so hard to create.
> 
> Personally, I think it is completely outrageous when people claim that the
> Web architecture should be the one and only Internet application
> architecture.  I can understand the desire for URI to be universal, but not
> for the entire Web architecture. I like using fetchmail to grab my mail
> (how it does so is not very relevant because mail is a store-and-forward
> application architecture).  It doesn't make sense to limit the notion
> of Web architecture to URI just because it is the only universally
> applicable element.  URI is not sufficient to describe how the Web works.
> There are many systems outside of the Web that already use URI for
> identification, so identification alone does not define the Web.
> 
> The fundamental notion that defines the Web is the interconnectedness of
> resources -- that everything which can be identified can also be
> ** indirectly ** described, manipulated, and related to other resources,
> and thereby can be traversed as an information space even when the
> resources themselves are not limited to documents.  This is possible
> because all interactions occur through a limited window of visibility.
> Whether an automobile viewed through the window represents a physical
> manifestation or a virtual simulation or a static image is completely
> irrelevant to the interface, even though it may be significant to the
> person using that interface [this is the place where RDF was supposed
> to be of benefit].  What prevents us from driving automobiles through a
> Web interface is the same thing that prevents us from doing so through
> object-specific interfaces: interaction latency and inadequate feedback.
> However, I can crash a car through a Web interface, and therefore the
> theory that a URI cannot identify a non-document resource is clearly false.
> There must be a hundred robots on the Web that prove it false.
> 
> The distinction between what is part of the Web and what is not
> part of the Web is very easy to define: if at any time two components
> interact using a mechanism that cannot be understood without knowing
> the nature of the resource being accessed, then they are not using the Web
> architecture for their communication and are therefore (for the scope of
> that communication) outside of the Web.  If the interface is not uniform,
> all of the beneficial properties of the Web architecture that depend on
> uniformity are lost.
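
That is the test I will take away from this: can the caller interact without knowing what kind of thing is on the other side? A rough contrast, sketched by me (the object-specific stub is purely hypothetical):

    from urllib.request import urlopen

    # Uniform interface: this one function works against any http resource --
    # document, router status page, refrigerator panel -- without the caller
    # knowing which kind of thing it is talking to.
    def inspect(uri):
        with urlopen(uri) as response:
            return response.info().get_content_type(), response.read(200)

    # Object-specific interface: the caller must already know it is a CD ROM
    # drive before the call even makes sense (hypothetical stub, not real API).
    # drive = bind_to("SomeVendor.CdRomDrive")
    # drive.SetSpeed(8)

    print(inspect("http://www.w3.org/"))
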
> 
> [BTW, "component" is a term from software architecture -- a definition
> can be found in chapter 1 of my dissertation.]
> 
> TELNET services are not part of the Web.  A "telnet" URL can be used
> within the Web to identify access to a TELNET service as a resource, but,
> once access to it is provided, the Web interface cannot participate
> in any further interaction.  It therefore makes sense that browsers direct
> such communication into a separate application, with its own communication
> architecture.  Likewise for mailto (which has absolutely nothing to do with
> access to mail messages *after* they have been posted).  In contrast, the
> old Gopher service was integrated into the Web architecture, with an
> acknowledged cost that the Web was unable to take advantage of some of
> the more advanced features of those services.  Again, the advantages of
> a uniform interface outweighed the loss of features.
> 
> </rant>
> 
> Cheers,
> 
> Roy T. Fielding, Chairman, The Apache Software Foundation
>                  (fielding@apache.org)  <http://www.apache.org/>
> 
>                  Chief Scientist, Day Software, Inc.
>                  2 Corporate Plaza, Suite 150
>                  Newport Beach, CA 92660-7929   fax: 1.949.644.5064
>                  (roy.fielding@day.com) <http://www.day.com/>
> 
