
RE: [hwarncke@Adobe.COM: Re: [dav-dev] Depth Infinity Requests]

From: Jim Whitehead <ejw@ics.uci.edu>
Date: Tue, 18 Jul 2000 10:51:24 -0700
To: WebDAV WG <w3c-dist-auth@w3.org>
Accidentally caught by the spam filter -- I've added Yaron's email address
to the accept2 list.

- Jim

-----Original Message-----
From: Yaron Goland [mailto:yarongo@crossgain.com]
Sent: Monday, July 17, 2000 2:41 PM
To: 'w3c-dist-auth@w3.org'
Subject: [Moderator Action] RE: [hwarncke@Adobe.COM: Re: [dav-dev] Depth
Infinity Requests]

Jim Doubek says: "Note that for realistic sized repositories, say 50K to
100K files, any depth=infinity request near the repository root is going to
be too expensive. For instance, an allprop request in such a case will be
several to tens of megabytes, and may take minutes to produce."
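For readers less familiar with WebDAV, the request under discussion looks roughly like the sketch below. This is a hypothetical illustration (the host and path are made up), showing an RFC 2518-style PROPFIND with an allprop body and a Depth: infinity header — the combination that can produce those multi-megabyte responses:

```python
# Hypothetical sketch of a depth = infinity allprop PROPFIND.
# Host name and path are illustrative, not from any real server.

ALLPROP_BODY = (
    '<?xml version="1.0" encoding="utf-8"?>\n'
    '<D:propfind xmlns:D="DAV:">\n'
    '  <D:allprop/>\n'
    '</D:propfind>\n'
)

def build_propfind(path: str, depth: str = "infinity") -> str:
    """Assemble the raw PROPFIND request text."""
    return (
        f"PROPFIND {path} HTTP/1.1\r\n"
        "Host: dav.example.com\r\n"
        f"Depth: {depth}\r\n"
        "Content-Type: text/xml; charset=utf-8\r\n"
        f"Content-Length: {len(ALLPROP_BODY.encode('utf-8'))}\r\n"
        "\r\n"
        + ALLPROP_BODY
    )

print(build_propfind("/repository/"))
```

The server answers with one 207 Multi-Status response enumerating every resource in the subtree — which is exactly why the response can run to megabytes near the root of a large repository.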

My company is looking to WebDAV to provide us with versioning functionality.
One of the most basic functions we perform is synching to a tree, which is
huge and which starts at the root.

Ignoring the issues of additional client complexity introduced by removing
propfind = infinity, without propfind = infinity network performance gets
significantly worse. What was previously a single request/response becomes N
request/responses, where N is the length of the longest chain in the tree,
because each level's children are only known once the previous level's
response has arrived. Even if you have a fairly compact tree with one weirdo
chain, your performance is held hostage to that chain. Note: I am treating
multiple pipelined request/responses as being one request/response for the
purposes of counting round trips.
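The round-trip argument above can be sketched as follows. This is a minimal model (a nested dict stands in for a collection tree, and the crawl function is hypothetical): with Depth: infinity the whole tree comes back in one request/response, while a depth:1 crawl discovers each level only from the previous response, so round trips grow with the tree's depth even when all siblings at a level are pipelined into one batch:

```python
# Model a collection tree as a nested dict and count the round trips a
# depth:1 level-by-level crawl needs. A pipelined batch of PROPFINDs
# against one level counts as a single round trip, as in the text.

def round_trips_without_infinity(tree: dict) -> int:
    """Count level-by-level round trips for a depth:1 crawl."""
    trips, frontier = 0, [tree]
    while frontier:
        trips += 1  # one pipelined batch of PROPFINDs per level
        # A depth:1 response for a node already names its children,
        # so only non-empty children need a further request.
        frontier = [child for node in frontier
                    for child in node.values() if child]
    return trips

# A fairly compact tree with one long "weirdo chain":
chain, node = {}, None
node = chain
for name in ("a", "b", "c", "d"):
    node[name] = {}
    node = node[name]
tree = {"docs": {}, "src": {}, "deep": chain}

print(round_trips_without_infinity(tree))  # the chain dictates the count
```

A flat tree costs one round trip either way; the single deep chain drags the crawl out to one round trip per level, which is the "held hostage" effect described above.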

The proposal that we allow clients to submit individual integers is sort of
the worst of all worlds. It makes clients more difficult to program, it
makes servers more difficult to program (it is easier to support infinity
than a specific integer, which requires keeping track of a counter all the
way down the tree), it makes network performance worse, and it doesn't map
to any common scenarios. In the majority of cases I have seen in my years of
client programming, the client wants everything below a specific starting
point. The main exceptions are 0 and 1, but both of those are already
covered.

As such I believe that we should keep depth = infinity and that we should
not allow for specific integers to be submitted.

I have a lot of sympathy for server makers who see depth = infinity as a
denial of service attack. It definitely screws server performance. The irony
of this thread is that we've already been down this path before. Once upon a
time WebDAV was only going to support depths 0 and 1. If you want to read a
summary of the arguments and how they were settled, see the section entitled
<Out of our Depth> in

I completely agree with people who want to use the DASLesque solution of
returning a "Too much server processing time required" error message. I
think this is a great idea. One of the features my servers provide is
Service Level Agreements. I will check your SLA when I process your request,
and if your SLA isn't high enough I will boot your depth = infinity request.
Obviously anonymous log-ins would get very little processing assigned to
them.
Let's not throw the baby out with the bath water. Depth = infinity is a
powerful feature that covers one of the most common editing scenarios,
"Synch this tree," in a manner that has great network performance. This is a
big benefit of WebDAV. However, let us also empower server authors to reject
requests that abuse their server. I think that is the best compromise.


P.S. Long experience teaches me that a number of the server authors reading
this letter are gleefully rubbing their hands together, going "Great! I will
reject every propfind = infinity request with a 'too much server processing
time' error. Problem solved!" I suggest you read
http://lists.w3.org/Archives/Public/w3c-dist-auth/1999JulSep/0359.html and
keep in mind that clients will treat the "too much server processing time"
error as catastrophic. This type of behavior probably won't help your
server.
Received on Tuesday, 18 July 2000 13:55:59 UTC

This archive was generated by hypermail 2.3.1 : Wednesday, 7 January 2015 15:01:22 UTC