Re: PEP Battle Plan [rexmit, garbled]

> I realize this is heresy here, but I have to wonder if it's worth
> building the extension mechanism into HTTP.  An efficient URI
> resolution protocol would allow for a smooth transition away from
> HTTP 1.x and to 2.x or other protocols (smb? webnfs? multicast?), 
> without invalidating old clients and without the overhead of 
> establishing a TCP connection.  New protocols could then be designed 
> from scratch to take into account everything that has been learned 
> from HTTP, without inheriting the complexity.

Yes, if

   a) there were an efficient URI resolution protocol with sufficient
      deployment and compatibility with old clients; and

   b) new clients were developed with such a dynamic protocol interface.

That is unquestionably the best way to improve the Web's extensibility
in terms of new protocol capabilities, and I support it whole-heartedly.
However, I've been supporting that for three years, and it is no closer
to being a reality now than it was then, in spite of a lot of excellent work
by some brilliant people.  The real problems are economic, not technical,
and until the service exists it is difficult to say what it will accomplish.

> It would also improve scalability, fault tolerance, ability to 
> screen files (for content ratings, price, language, etc.) before 
> downloading, selection of multiple variants of a resource (by 
> allowing the client, rather than the server, to make the selection),  
> client selection of multiple locations of a resource, etc. 
> 
> Seems like we need to take a step back and look at the web as a
> whole before we commit to the direction of adding more complexity
> to HTTP.

I do not see any conflict here.  While a URI resolution service would
add a selection and indirection layer above the hard-wired URL, it does
nothing to change the work required to actually apply a method to the
resource once it has been located.  That work will still need a protocol
that understands hierarchical proxies and can sustain extensions in a
way that ensures robust handling across all recipients.  Since none
of the above-mentioned protocols can do what HTTP/1.1 already does, let
alone support the reasoning about extensions proposed by EP/PEP/whatever,
I don't see the creation of a URI resolution service as having an impact
on how or how not to extend HTTP.
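
To make that layering concrete, here is a minimal sketch (in Python,
purely illustrative -- nothing below is from PEP or any deployed
resolver) of the split described above: a hypothetical resolve()
service maps an abstract URI to candidate locations, and only after
that selection does the transfer protocol apply the method.

    import http.client
    from urllib.parse import urlparse

    def resolve(uri):
        # Stand-in for a URI resolution service: return concrete
        # locations in preference order.  No such deployed service
        # exists; here the URI is simply assumed to be a URL already.
        return [uri]

    def apply_method(uri, method="GET"):
        # The selection/indirection layer: try each location the
        # resolver offers...
        for url in resolve(uri):
            parts = urlparse(url)
            conn = http.client.HTTPConnection(parts.netloc)
            try:
                # ...but applying the method is still the transfer
                # protocol's job, unchanged by the resolution step.
                conn.request(method, parts.path or "/")
                return conn.getresponse()
            except OSError:
                conn.close()    # this location failed; try the next
        raise IOError("no location could serve %s" % uri)

The point is only that the resolver's selection happens before, and
independently of, the method's semantics.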

Having said that, I would also not commit the IETF to any single
direction, PEP or otherwise.  I am inclined to leave research to individuals
and not assign things to a WG until the solution is (at least believed to be)
known.  The problem is deciding when that transition should occur, and
whether it should be part of this WG or a different WG or outside the IETF.

Finally, I think there is something missing from this discussion. Extensions
have occurred, and will continue to occur, regardless of IETF opinions.
They occur because users need them, which leads implementers to build
them, often in spite of the WG's recommendations.
Almost all of the work I invested in HTTP/1.1 went toward making the
protocol *more* extensible in areas that had previously faltered due to poor
implementations, so that people out there can implement what they want and
at least have some inkling of what effect it will have on correctly
implemented applications.
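
For anyone who hasn't internalized what "optional" buys us: a correct
HTTP/1.1 recipient simply ignores header fields it does not recognize,
so a new field degrades gracefully instead of breaking old
implementations.  A toy sketch, with made-up field names:

    # Hypothetical recipient logic; the field names are illustrative.
    KNOWN_FIELDS = {"host", "content-type", "content-length"}

    def process(fields):
        understood = {}
        for name, value in fields:
            if name.lower() in KNOWN_FIELDS:
                understood[name.lower()] = value
            # An unrecognized field -- say an experimental
            # "New-Feature" header -- is passed over without error.
            # That silence is the optional-extension guarantee.
        return understood

    # An old recipient sees the new field and is unaffected:
    process([("Host", "example.org"), ("New-Feature", "on")])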

We now have a protocol that should be able to sustain any optional extensions,
but we also know that some people want more than just optional extensions.
Not defining son-of-PEP will not stop people from making those extensions,
nor will it make those extensions any less complex in terms of their
addition to HTTP (quite the opposite, in fact).  A better question, then,
is whether we would prefer those extensions to be added within a framework
approved by the IETF or within one outside it.
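
As one hedged illustration of what "more than just optional" would
mean: a mandatory extension is one the client declares such that a
server which does not implement it must refuse the request rather than
silently ignore it.  The field name and status code below are
hypothetical, not taken from PEP or any draft:

    SUPPORTED = {"http://example.org/ext/preview"}  # what we implement

    def check_mandatory(fields):
        # If the client declared an extension as mandatory and we do
        # not support it, refusing is the only correct behavior --
        # the opposite of the silent-ignore rule for optional fields.
        for name, value in fields:
            if name.lower() == "mandatory":       # hypothetical field
                for ext in (v.strip() for v in value.split(",")):
                    if ext not in SUPPORTED:
                        return 501, "extension not implemented: " + ext
        return 200, "OK"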

 ...Roy T. Fielding
    Department of Information & Computer Science    (fielding@ics.uci.edu)
    University of California, Irvine, CA 92697-3425    fax:+1(714)824-4056
    http://www.ics.uci.edu/~fielding/

Received on Monday, 21 October 1996 21:38:49 UTC