Four-part process for moving to Draft

The process of moving RFC 2518 to Draft Standard status has two main goals:

 * Revise RFC 2518, the WebDAV Distributed Authoring Protocol,
   to fix or address known functional or interoperability problems
   with the protocol. Protocol features that are not found in at
   least two implementations must be removed from the document.

 * Document the implementation and interoperability of protocol
   features, to show that at least two implementations, from
   independent code bases, exist for each feature.

Accomplishing these goals can be subdivided into the following four
actions:

1. Resolve existing Issues List items on the mailing list.

2. Hold a face-to-face interoperability event (b*ke-off).

3. Develop an online form to gather RFC 2518 implementation experience
   data.

4. Create a farm of existing server implementations for ongoing
   interoperability testing.

What is involved in each item?


1. Resolve existing Issues List items on the mailing list.

At present, there is a fairly significant list of known issues with the
WebDAV protocol. These are items that have been reported to the mailing list
over the past 2 1/2 years.  This issues list can be found at:

http://www.ics.uci.edu/pub/ietf/webdav/protocol/issues.html

Basically, every item marked "Open" needs a resolution.  Item resolutions
have three parts:

* Description of the issue

  A description of the problem, independent of any proposed
  solution to the problem.  The existing description of each
  issue is a good starting point for this.

* Description of the solution to the issue

  A concise description of how to address the issue, independent
  of any text describing why this is a good way to solve the
  problem.

* Description of advantages and disadvantages of the solution

  Many solutions to issues will involve design tradeoffs of one form
  or another.  Recording these tradeoffs will be important, so
  future implementors of the protocol will understand the rationale
  for the decisions taken (and why certain solutions were not
  adopted).

As Chair, I will be driving this discussion forward by selecting a small
number of issues (2-3), and raising them for discussion by the mailing list.
My hope is these issues can be resolved within a week, thus clearing the
deck for consideration of new issues the following week.  This process will
allow the existing issues to be steadily resolved over the next few months.


2. Hold a face-to-face interoperability event (b*ke-off).

It is common practice for IETF protocol development communities to actually
meet in person, and hold a face-to-face interoperability event. These
typically last 2-3 days, and are attended by engineers who understand the
code base (and typically have the code and development tools with them).
During the event, various client/server interactions are tested,
interoperability issues are identified, and in some cases fixed right on the
spot. Interoperability events tend to be high-value activities, providing a
very cost-effective way to identify and remedy interoperability issues.

At present, I'm targeting an interoperability event in the mid-June
timeframe, to be held in the San Francisco Bay Area (Silicon Valley or Santa
Cruz).

Protocol issues identified during the interoperability event will be added
to the RFC 2518 issues list, and fed into the protocol revision process.
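To give a concrete (and purely hypothetical) flavor of the kind of
client/server interaction exercised at such an event, the sketch below shows
the request body a client might send for a PROPFIND, and parses a canned 207
Multi-Status response of the sort a server might return.  The resource path
and property values are invented for illustration; what interoperability
testing actually compares is how different implementations answer the same
request.

```python
# Hypothetical sketch of one interaction a bake-off might exercise: a
# client issues PROPFIND and parses the server's 207 Multi-Status reply.
# The resource path and property values below are invented examples.
import xml.etree.ElementTree as ET

DAV_NS = "DAV:"  # the XML namespace RFC 2518 uses for its elements

# The body a client might send with "PROPFIND /docs/spec.txt, Depth: 0".
PROPFIND_BODY = """<?xml version="1.0"?>
<D:propfind xmlns:D="DAV:">
  <D:prop><D:getlastmodified/><D:resourcetype/></D:prop>
</D:propfind>"""

# A response one server implementation might return to that request.
SAMPLE_RESPONSE = """<?xml version="1.0"?>
<D:multistatus xmlns:D="DAV:">
  <D:response>
    <D:href>/docs/spec.txt</D:href>
    <D:propstat>
      <D:prop>
        <D:getlastmodified>Tue, 10 Apr 2001 20:42:41 GMT</D:getlastmodified>
      </D:prop>
      <D:status>HTTP/1.1 200 OK</D:status>
    </D:propstat>
  </D:response>
</D:multistatus>"""

def parse_multistatus(xml_text):
    """Map each href in a 207 Multi-Status body to its returned properties."""
    root = ET.fromstring(xml_text)
    results = {}
    for resp in root.findall(f"{{{DAV_NS}}}response"):
        href = resp.findtext(f"{{{DAV_NS}}}href")
        props = {}
        for prop in resp.iter(f"{{{DAV_NS}}}prop"):
            for child in prop:
                # strip the "{DAV:}" namespace prefix from the tag name
                props[child.tag.split("}")[1]] = child.text
        results[href] = props
    return results

print(parse_multistatus(SAMPLE_RESPONSE))
```

Feeding two servers' responses to the same request through a parser like
this, and comparing the results, is the sort of side-by-side check an
interoperability event makes cheap to perform.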


3. Develop an online form to gather RFC 2518 implementation experience
   data.

When the HTTP 1.1 specification went to Draft Standard, the HTTP Working
Group employed a Web-based form to gather information from implementors
concerning which HTTP features they supported. The form they used is still
features they supported. The form they used is still online, and can be
viewed at:

http://www.agranat.com:1998/

The results from this activity are listed here:

http://www.w3.org/Protocols/HTTP/Forum/Reports/

The online form is an effective way of gathering implementation experience.
For the HTTP 1.1 effort, it was used in lieu of holding an interoperability
event. However, Larry Masinter, the Chair of the HTTP Working Group,
recommends doing both, and I agree.  The online form only reports
implementation experience, and does not give detailed information on
interoperability.  It is well known that implementing a specification to the
letter does not necessarily guarantee interoperability.  As well, the
information gathered by the online form is fairly coarse-grained -- it is
possible to explore issues in greater depth during an interoperability
event.  Finally, two parties can claim to implement a feature while still
carrying different mental models of the operation of the feature.  The
divergence between models will not be exposed by the form, but may be
exposed during interoperability testing.  Bottom line: both the online
form and the interoperability event have value.

Over the next few weeks, I will be implementing a form listing features in
the WebDAV protocol, patterned after the HTTP one, and will ask implementors
to fill it out.  Data from this effort will form a significant part of
the case I will make to the IESG concerning the state of implementation of
features in the protocol.


4. Create a farm of existing server implementations for ongoing
   interoperability testing.

One of the drawbacks of the interoperability event, and the online form, is
they only capture information at a single point in time. However, new
clients and servers are being developed all the time. It is extremely useful
for client developers to have multiple WebDAV servers that can be used for
interoperability testing during client development.  This allows
interoperability problems to be identified and fixed before the client
software is publicly released.

At present, though there are 22 independent server code bases, only a small
number of these (Apache, Xythos Storage Server, CyberTeams, to name a few)
provide servers where it is easy to request an account for interoperability
testing purposes.  It stands to reason that if you cannot test a particular
client/server pair, unexpected interactions between them may go undetected.
The remedy is to make it as easy as possible to test a client against as
many servers as possible.
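As a purely illustrative sketch of what "testing against many servers" can
look like at its simplest, the Python snippet below probes a list of servers
with OPTIONS and reads the DAV response header, which RFC 2518-compliant
resources use to advertise their compliance classes.  The server URLs are
placeholders, not real test accounts.

```python
# Illustrative sketch only: ask each candidate WebDAV server, via OPTIONS,
# which compliance classes it advertises in its DAV response header.
# The URLs below are placeholders, not real interoperability test servers.
import http.client
from urllib.parse import urlsplit

SERVERS = [
    "http://dav.example.org/",        # placeholder URL
    "http://webdav.example.com/pub/", # placeholder URL
]

def parse_dav_header(value):
    """Split a DAV header such as "1,2" into its compliance classes."""
    return [c.strip() for c in value.split(",")] if value else []

def probe(url):
    """Return the compliance classes advertised at url ([] if no DAV header)."""
    parts = urlsplit(url)
    conn = http.client.HTTPConnection(parts.hostname, parts.port or 80,
                                      timeout=5)
    try:
        conn.request("OPTIONS", parts.path or "/")
        return parse_dav_header(conn.getresponse().getheader("DAV"))
    finally:
        conn.close()

for url in SERVERS:
    try:
        print(url, "advertises DAV classes:", probe(url))
    except OSError as exc:
        print(url, "is unreachable:", exc)
```

A probe this shallow only tells a client developer which servers are worth
deeper testing; the point of a server farm is that the deeper tests can then
be run against every server on the list, not just the one or two a vendor
happens to have accounts on.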

Over the next few months, I will work to develop a more complete set of
WebDAV servers that are available to client developers for ongoing
interoperability testing.

One factor preventing server implementations from being publicly available
is the need to run them on a machine outside a corporate firewall.  At many
organizations, this is a time-consuming and difficult action to pull off,
and so it doesn't happen.  Since I am at a university, it is relatively easy
to place a machine on the open Internet.  The support staff in the School of
Engineering at UC Santa Cruz are quite willing to let me place machines in
our machine room, and have them running on the open Internet. However, I do
not have the funding necessary to purchase multiple machines just for WebDAV
interoperability testing.  As a result, if you have a server implementation,
and wish to have your server implementation available for interoperability
testing, please consider donating or lending a computer (and server
software) for this purpose. This is really enlightened altruism: by making
the server available for interoperability testing, more client
implementations will test against your server, and hence will interoperate
with it.  This can only improve the value of the server.

-------

Whew! We have a lot of work ahead of us. But, in the end, the result will be
a much stronger specification, and a much stronger standard, creating a
solid foundation for existing, and future applications.

- Jim Whitehead
Chair, WebDAV WG

Received on Tuesday, 10 April 2001 20:42:41 UTC