Re: Assigned paths

"David W. Morris" <dwm@xpasc.com> writes:

>> If we're going to see a growth in "special" path patterns, I think we
>> need to quarantine them into a subtree so as not to collide with
>> pre-existing "normal" paths.  The alternative is to accept that existing
>> URLs will collide from time to time with newly-published special paths,
>> and that some breakage will occur.  I'm not too thrilled with that
>> choice myself.
>
>I think the right answer is that new patterns aren't needed. Proper use
>of HTTP/1.1 caching controls cover the issues.  The current patterns are
>provided to keep from breaking existing applications.

I'm sorry, I didn't mean to imply that caching was the issue here, but
rather offered the "/cgi-bin/..." case as an example of one of the few
formal specifications of particular URL paths.  The issue that concerns
me is whether constraints are being placed on the meanings of certain
portions of the URL-space on a server.  Another example, to my knowledge
still unpublished but widely understood, is the use of the path
".../robots.txt" to control the behavior of robotic clients.

While I agree with Roy Fielding that RFC 2169 should define one or more
new HTTP methods instead of specifying semantics for certain URL paths,
it looks to me like we're going to see more of the same, not less.  The
path to implementing a service within HTTP by specifying an object to
retrieve is significantly shorter than specifying a new method to
execute.  This will remain true even if PEP is ever deployed widely.
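
The contrast is easy to see in a sketch (Python again; the URN and
resolver host are invented, and the "RESOLVE" method is hypothetical):

    from urllib.parse import quote

    # RFC 2169 style: the service is addressed as an object under a
    # reserved path, so any deployed client can invoke it with plain GET.
    urn = "urn:ietf:rfc:2169"
    print("GET /uri-res/N2L?%s HTTP/1.1" % quote(urn, safe=":"))

    # A method-based design would instead look something like
    #     RESOLVE urn:ietf:rfc:2169 HTTP/1.1
    # which no existing client, proxy, or cache understands today.

Retrieval-by-path rides on machinery that is already everywhere; a new
method has to be taught to every piece of software in the chain.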

I'd rather have a process in place to reserve ownership of part of the
URL-space than have portions of it taken over ad hoc.
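
For illustration, the kind of quarantine I have in mind might look
like this (the "/reserved/" prefix is purely an invented placeholder):

    # One assigned subtree for special paths; everything outside it
    # remains ordinary server content, safe from future collisions.
    RESERVED_PREFIX = "/reserved/"

    def dispatch(path: str) -> str:
        if path.startswith(RESERVED_PREFIX):
            return "assigned meaning: " + path[len(RESERVED_PREFIX):]
        return "ordinary content at " + path

    print(dispatch("/reserved/robots.txt"))  # registered, collision-free
    print(dispatch("/cgi-bin/search"))       # stays the server's business

Registering names under one prefix is a far smaller problem than
policing the entire URL-space after the fact.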

Ross Patterson
Sterling Software, Inc.
VM Software Division

Received on Thursday, 26 June 1997 13:16:57 UTC