- From: Gregory J. Woodhouse <gjw@wnetc.com>
- Date: Sun, 13 Apr 1997 08:03:30 -0700 (PDT)
- To: Foteos Macrides <MACRIDES@sci.wfbr.edu>
- Cc: fielding@kiwi.ICS.UCI.EDU, uri@bunyip.com
On Sat, 12 Apr 1997, Foteos Macrides wrote:

>       The rules for resolving partial/relative URLs since the
> beginning of URL time have been such that if relative symbolic
> elements end up at the beginning of paths they should be retained,
> e.g., you can end up with something like:
>
>       http://host/../foo/blah.html
>
> but Netscape's parsing ends up stripping lead relative symbolic
> elements yielding:
>
>       http://host/foo/blah.html
>

A lot of people will probably expect something along these lines because
under Unix, "." and ".." both refer to the current directory when you are
in the root directory. So, for example, "/etc/" and "/../etc/" are
equivalent. But this isn't true of other operating systems (like Windows
NT). Besides, Unix servers will typically not follow Netscape semantics,
because they implement URLs through the file system and "public_html" is
not the root directory.

Basically, I don't like this kind of pre-processing. It serves only to
imitate the file system semantics of one operating system (sort of), and
it doesn't add in any way to the URL mechanism. (By contrast, I think
relative URLs express something different from the absolute URLs to which
they resolve.)

> with the consequence that many people are putting HREFs and SRCs
> in their markup which by "valid" parsing rules yield lead
> relative symbolic elements, and sending off "false bug reports"
> to non-Netscape browser developers with one or another variant
> of:
>
>       "It works fine with Netscape."
>
>       I can see retaining the lead relative symbolic elements
> in ftp URLs for personal accounts (would generally fail for
> anonymous accounts), but to my knowledge no http or https server
> would accept such paths, so there's that kind of justification
> for what Netscape is doing.
>

It's good programming practice to be tolerant of errors (on the part of
the user or otherwise). I suspect some application was generating
incorrect URLs of the type you describe, so Netscape added support for
them. I don't see this as a problem. But the robustness principle doesn't
apply to specifications in the same way.

>       I would appreciate your and others' opinions on whether
> it would be good or bad for other browsers to reverse engineer
> that Netscape URL resolving.
>
>                               Fote
>
> =========================================================================
>  Foteos Macrides            Worcester Foundation for Biomedical Research
>  MACRIDES@SCI.WFBR.EDU      222 Maple Avenue, Shrewsbury, MA 01545
> =========================================================================

---
gjw@wnetc.com / http://www.wnetc.com/home.html
If you're going to reinvent the wheel, at least try to come up with a
better one.
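As a concrete illustration of the two behaviours under discussion, here is a
minimal sketch (in modern Python, with illustrative names and a deliberately
simplified merge step; it ignores queries, fragments, and absolute references,
and is not any browser's actual code) of resolving a relative path against a
base URL either by retaining unmatched leading ".." segments or by dropping
them the way Netscape does:

    from urllib.parse import urlsplit, urlunsplit

    def merge_path(base_path, ref_path, strip_leading_dotdot):
        """Merge a relative path reference against a base path."""
        # Drop the last segment of the base, then walk the reference segments.
        segments = base_path.split('/')[:-1]
        for seg in ref_path.split('/'):
            if seg == '.':
                continue                        # "." is a no-op
            elif seg == '..':
                if len(segments) > 1 and segments[-1] != '..':
                    segments.pop()              # ".." matched a real segment
                elif not strip_leading_dotdot:
                    segments.append('..')       # retain the unmatched ".."
                # else: drop it silently, Netscape-style
            else:
                segments.append(seg)
        return '/'.join(segments)

    def resolve(base_url, ref_path, strip_leading_dotdot=False):
        scheme, netloc, base_path, _, _ = urlsplit(base_url)
        merged = merge_path(base_path or '/', ref_path, strip_leading_dotdot)
        return urlunsplit((scheme, netloc, merged, '', ''))

    base = "http://host/a/b.html"
    print(resolve(base, "../../foo/blah.html"))
    # -> http://host/../foo/blah.html   (unmatched ".." retained)
    print(resolve(base, "../../foo/blah.html", strip_leading_dotdot=True))
    # -> http://host/foo/blah.html      (Netscape-style stripping)

The single flag is the whole point of the sketch: the two camps differ only in
what to do when a ".." has nothing left above it to consume.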
Received on Sunday, 13 April 1997 11:05:08 UTC