W3C home > Mailing lists > Public > w3c-dist-auth@w3.org > April to June 1999

Re: Some problems with the WebDAV protocol

From: Yoram Last <ylast@mindless.com>
Date: Thu, 22 Apr 1999 01:00:43 +0300
Message-ID: <371E4A8B.BF9DD0AB@mindless.com>
To: Greg Stein <gstein@lyra.org>
CC: w3c-dist-auth@w3.org
 
> As a server implementor, I am going to code to the spec. If you have a
> broken client, then TFB. From my point of view, the theory that servers
> build to spec is absolutely valid. Several times people have pointed out
> conformance issues with mod_dav. That is definitely a bug, so I fix them.
> 
> There is absolutely no way that I am going to help to propagate bad client
> programming practices. If a client doesn't interoperate with mod_dav and
> it is the client's fault, then I won't raise a finger.

It is certainly within your rights. So you are a good citizen, and your
mod_dav is built with the goal of maximizing protocol compliance in the
world. But if it were built with the prime purpose of maximizing its own
popularity, you would try to maximize interoperability, and you would
do things differently. To be more concrete: Suppose that being fully
compliant creates severe interoperability problems with Microsoft's
client (which seems to be the case as we speak). It is clearly
the *only* truly viable client at this moment, and will probably be the
most widely used one for quite some time to come. Now it is fully within
your rights to say that it is *their* fault, but I doubt many people will
be using your module if it doesn't work with their client. Now if people
won't be using your module, its high level of compliance doesn't matter
much anyway. Since your module is free software, what is likely to happen
is that people will modify it to make it "useful." You will not even have
any control over that.

In fact, the specific political circumstances surrounding WebDAV suggest
that it will be controlled by Microsoft much more than by the working
group. If Microsoft decides, for whatever reason, that things should be
done differently than what the written spec says, the working group will
be able to choose between:
a) changing the spec to comply with their way of doing things
and
b) having some portions of the written spec become irrelevant.

> Your argument above seems to presume that implementors should compensate
> for buggy clients. That is simply Bad and Wrong. There is no justification
> for it.

Justified or not, this is what implementors do. Apache wouldn't be so popular
if it weren't tolerant (and it even has workarounds for specific bugs in
specific clients). When AOL decided to break HTTP/1.1 at their proxies two and
a half years ago (see http://www.apache.org/info/aol-http.html), the Apache
group had to provide a workaround (which many people implemented) even though
they strongly disagreed with what AOL did. It's a good thing AOL came to their
senses and backed off, but it clearly was (and still is) within their power
to render certain portions of existing written specs irrelevant.

Now you have the right to hold on to your ideological beliefs on how things
*should* be done, but most people only care about how well they work. Basing
a protocol on the assumption that people will do things the way you think
*they should* (where in fact most people do things differently) is a basic
design flaw, because, at the end of the day, you are basing it on a *wrong*
assumption.

> The definition of PUT does **NOT** state that intermediates must be
> created. Therefore, I won't do it.

Fine. Then you are creating a minimal implementation that does not provide
the full functionality provided for by the specs.

> What will you do now? Your clients better be able to deal with that fact.
> Any number of other servers will respond similarly, and those clients
> should be able to deal.

It is precisely the same as implementing a WebDAV server that doesn't
support MKCOL. There is no way of "dealing" with that, because the
protocol doesn't provide another way of creating a collection. Clients
will not be able to create new collections on this particular server.
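To make the PUT disagreement concrete, here is a small sketch (my own
illustration; the function and names are invented, not taken from either
spec) of the two server behaviors being argued about: rejecting a PUT whose
intermediate collections are missing, as RFC 2518 requires, versus creating
them implicitly, as some HTTP/1.1 servers do:

```python
# Hypothetical sketch (names invented): how a server might decide the fate
# of PUT /a/b/c.txt, depending on whether it follows RFC 2518's MUST
# (reject when intermediate collections are missing) or the more lenient
# behavior some HTTP/1.1 servers implement (create them implicitly).

def handle_put(path, existing_collections, strict_rfc2518=True):
    """Return an HTTP status code for a PUT to `path`.

    existing_collections: set of collection paths already on the server,
    e.g. {"/", "/a"}.
    """
    parts = path.strip("/").split("/")
    # All ancestor collections of the resource, e.g. "/a" and "/a/b"
    # for "/a/b/c.txt".
    ancestors = ["/" + "/".join(parts[:i]) for i in range(1, len(parts))]
    missing = [p for p in ancestors if p not in existing_collections]
    if not missing:
        return 201  # Created: all intermediate collections exist.
    if strict_rfc2518:
        return 409  # Conflict: RFC 2518 forbids creating intermediates.
    # Lenient HTTP/1.1-style server: create the intermediates implicitly.
    existing_collections.update(missing)
    return 201

print(handle_put("/a/b/c.txt", {"/", "/a"}, strict_rfc2518=True))   # 409
print(handle_put("/a/b/c.txt", {"/", "/a"}, strict_rfc2518=False))  # 201
```

Under the strict behavior, a PUT-only client simply has no way to bring
"/a/b" into existence; under the lenient one it does.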

There was a long period of time when the Geocities free web hosting
service didn't allow the creation of "subdirectories". The fact that
their servers didn't allow subdirectories to be created was in no way
a violation of the ftp protocol. Clients that tried a 'mkdir' on their
servers got an error message, and their users had to live with the
constraint of being confined to a single directory level. Now suppose
someone were to create a set of ftp extensions, and in the process
specify that servers must forbid the existing 'mkdir' command. The new
type of server could then be fully ftp compliant, but no existing ftp
client would be able to create new directories on it. The fact that
some ftp servers didn't let them create directories even before that
is irrelevant to this point.

> [please excuse the belligerence here, but I feel that you're not
> sufficiently backing up your claims... Jim has asked for real-world
> examples of problems and you have not yet provided them. From my point of
> view, you have not shown that anything must "be fixed".]

Virtually any client that supports PUT is capable of creating new
collections on servers that support it. So the number of capable
clients is clearly very large (they are on most desktops in the world).
Given that the world's most popular web server has an extremely flexible
implementation of PUT, it really comes down to guessing people's habits
and choices. Namely, how many system administrators decided to provide
it? And then how many users have the work habit of taking advantage of
it? I wish I knew. But, sincerely, I don't. My intuitive *belief* is that
this functionality isn't very widely used, but that it is nevertheless
*somewhat* used. This translates to the *belief* that this whole thing
is *somewhat* of a problem. If you have a situation where being compliant
with the specs is even *somewhat* of a problem, then you might have a
*big* problem in getting people to comply. So I think it is *very*
unwise to have this in the specs.

Now I wish I had more factual data to determine the extent of this
problem, but I don't. If you are so sure it is totally benign, then
fine. One way or the other, time will tell. But if it turns out that
it is, after all, a problem of some real-world significance, then
the damage (in terms of undermining the protocol by rendering one of
its MUSTs irrelevant to the real world) would already be done.

> > implicitly, because they do a "write test" of this type before uploading
> > a file to a new location.) Issues like mis-types of intermediate collection
> 
> This is just bogus. DAV at least defines a specific behavior for
> conformance. That absolutely helps the situation. Clients don't need to
> "test" what will happen. They will simply know.

Clearly WebDAV is much better suited for HTTP-based content management.
That's what it is for. In the context of HTTP/1.1, what these clients do is
actually very clever, because it is the only way they have of minimizing
the chance that a large (and thus "expensive") upload will be rejected.
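For illustration, the "write test" these clients perform can be sketched
roughly like this (a hypothetical sketch of my own; `put` and `delete`
stand for whatever request machinery the client actually uses, and the
function name is invented):

```python
# Hypothetical client-side sketch: before an expensive upload, probe the
# target with a tiny PUT (and clean up with DELETE), so that a large
# transfer is not wasted on a location that will reject it. `put` and
# `delete` are assumed callables returning HTTP status codes.

def safe_upload(put, delete, path, body):
    probe = put(path, b"")           # cheap zero-byte "write test"
    if probe not in (200, 201, 204):
        return probe                 # location not writable; abort early
    delete(path)                     # remove the probe resource
    return put(path, body)           # the real (possibly large) upload
```

The point is that the probe is cheap; only if it succeeds does the client
commit to the expensive transfer.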

> Goody for them. That does not dispell the fact that clients that have not
> handled the situation for servers that have **NOT** implemented PUT this
> way. As long as those clients do not compensate, then they are broken.
> This is all quite valid per the HTTP/1.1 specification.

I fail to follow your line of thought here. How can a client "compensate"
for functionality that is not provided by a server? 

> > 1) Is it a bug that should have been avoided had it been thought of before?
> >
> > 2) Is it so huge a problem that it justifies by itself re-issuing the protocol
> > (or should play a major role in a decision to do so)?
> >
> > 3) Is it something that should be fixed in later revisions of the protocol?
> >
> > My answers are:
> >
> > 1) Yes it's a bug. A conflict of this type with HTTP/1.1 should not have been
> > introduced into the protocol.
> 
> This is your subjective opinion. I do not believe this behavior is a bug.
> The HTTP/1.1 specification supports my server behavior (that of refusing
> to create intermediate collections).

You are correct to say that you can write a server that is fully compliant
with both HTTP/1.1 and WebDAV. Like I said before, the problem is that you
*encourage* other people to create servers that are *not* compliant with
WebDAV.

> > 2) Most probably not. I believe that the actual interoperability problem here
> > is overall quite mild.
> 
> As long as the actual *fact* is that the problem is mild, then this whole
> issue is totally moot. As Jim has asked, please demonstrate where this
> "issue" causes problems.

I fully admit that I don't know the "actual *fact*," but I don't think
you know it either. You seem to *decide* that it is "totally moot"
because at this point in time it is the most *convenient* way for you
to deal with it. This doesn't mean it is the most *wise* thing to do.

> What clients are *dependent* on the
> create-intermediate-collection behavior?

It isn't clients that are dependent on anything here. It is the ability
of *users* to use these clients in order to do certain things. The number of
clients that are *affected* is clearly very large. These are all PUT-capable
web publishing clients, and they are found on most of the world's desktops.

> The basic fact here is that RFC 2518 specifies a behavior that you do not
> agree with.

No. The basic fact here is that RFC 2518 specifies a behavior that
*takes away* certain functionality that is *allowed for* by HTTP/1.1,
and currently *implemented* by some HTTP/1.1 servers. For a variety of
already explained reasons, I think it is an *unwise* thing to do.

Consider for a moment Netscape's Enterprise server. It implements a
whole zoo of HTTP methods (SAVE, EDIT, INDEX, MKDIR, ...) that provide
WebDAV-like functionality in slightly different ways, and is useful
in conjunction with some of their clients. Now would it be within
the scope of the WebDAV protocol to say that a server MUST NOT support
these methods, and thus make it impossible for them to be fully WebDAV
compliant while maintaining their current functionality? Or is it
within the scope of the WebDAV protocol to say that a machine running
a WebDAV server MUST NOT allow ftp-based access to the same content?
Whatever your technical arguments are for restricting PUT, they may
apply here just as well ("WebDAV provides a better way of doing that").
Or another example: Would it be a proper thing for the HTML 4.0 spec
to say that a compliant browser MUST NOT support certain HTML 3.2
tags ("because there is now a better way of doing things, and there
shouldn't be more than one way of doing them")?

What you did with PUT/DELETE is really very similar to all of the
above. It is not within the legitimate scope of a protocol to
take away functionality of other protocols, or to otherwise
change the semantics of the other protocol's methods.
You could legitimately do any of:
a) Use new methods.
Or
b) Use a header to distinguish WebDAV from HTTP/1.1.
Or
c) Accept and use HTTP/1.1 methods as they are defined in HTTP/1.1.
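Option (b), for instance, could look roughly like this (a hypothetical
sketch of my own; the header name "X-DAV-Semantics" is invented purely
for illustration and appears in no spec):

```python
# Hypothetical sketch of option (b): let a request header select which PUT
# semantics apply. The header name "X-DAV-Semantics" is invented for
# illustration and does not appear in RFC 2518 or HTTP/1.1.

def put_semantics(headers):
    """Pick a PUT behavior from request headers (case-insensitive keys)."""
    lowered = {k.lower(): v for k, v in headers.items()}
    if lowered.get("x-dav-semantics") == "strict":
        return "reject-missing-intermediates"  # RFC 2518-style behavior
    return "create-missing-intermediates"      # plain HTTP/1.1 leniency
```

A WebDAV-aware client would send the header; an old HTTP/1.1 client, which
cannot know about it, would keep the behavior it always had.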

But you didn't.

I think that what you did do is a bug by very basic principles.
Now there may be a legitimate argument on how severe this is, and on
whether or not it should be fixed, etc. But it seems to me that the
core fact that this is a bug is clear beyond a reasonable doubt.


Yoram
Received on Wednesday, 21 April 1999 18:01:28 GMT
