- From: Larry Masinter <masinter@parc.xerox.com>
- Date: Fri, 7 Feb 1997 12:39:31 PST
- To: w3c-dist-auth@w3.org
Del's message seemed to have attached the minutes as uuencoded data, and the hypermail software archive doesn't cope with that, so here's my translation of what was sent:

WEBDAV Meeting Notes
January 27-28, 1997
U.C. Irvine, Irvine CA

An open meeting was held for those interested in the WEBDAV initiative. The meeting was held on the campus of the University of California, Irvine. The primary focus of the meeting was to review and discuss a proposed draft specification.

Chair: Introductions and general remarks.

Web Collections .................. Yaron Goland

Presentation: WC presented as a mechanism for giving a structured, machine-readable response to an HTTP request without breaking older clients. WC is encoded as a set of HTML tags with some simple semantics.

Question: Why HTML? Why not straight up HTTP?
Answer: A single message type serves both "pure" HTTP and HTML downstream.

Presentation (continued): WC data can be
1. referenced - <WCAT rel=foo HREF="anchor">
2. hidden - using HDATA
3. explicit - between <WCDATA> ... </WCDATA> tags

Comment: So you will have to encode binary data? This is expensive.

A general discussion of the issues ensues:

Question: Why not use "webmaps" or "sitemaps"?
Answer: WC is the same initiative as "web/site maps"; just the name has changed.

Question: (Clarification of the above:) So why tinker with the original "webmap" spec?
Answer: The spec was inadequate.

Comment: This is bad design:
1. WC is a flawed mechanism,
2. poor overlap between machine-readable and human-readable response.

Response: Focus on the object model. Is it complete? Consistent? Web Collections are just a convenient vehicle for expressing the object model; if there is a better way of expressing the model, we'll use it.

Hubub [1]: A protracted discussion on the merits/folly of using HTML to encode an object model for structured HTTP response ensues.

Chair: Moves to discuss the Structured Response Object Model independently of encoding.
Response: Motion fails.
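[Ed. illustration: the comment above about the expense of encoding binary data can be made concrete. Embedding binary content in an HTML-tag encoding generally means something like base64, which maps every 3 input bytes to 4 ASCII characters - roughly one third overhead before the data even hits the wire. A minimal Python sketch; the payload here is arbitrary and purely illustrative:]

```python
import base64

# 3072 bytes of arbitrary binary data (any payload behaves the same way).
payload = bytes(range(256)) * 12
encoded = base64.b64encode(payload)

# base64 maps every 3 input bytes to 4 output characters: ~33% overhead.
overhead = len(encoded) / len(payload)
print(len(payload), len(encoded), round(overhead, 2))  # 3072 4096 1.33
```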
More discussion as above.

--------- break ---------

Yaron: Do we all agree that:
1. we need a well-defined Structured Response Object Model, and
2. we should have no dependencies on incomplete/ongoing work in other working groups?

[A short discussion period on the above points. The editor senses general group agreement on points 1 and 2 above, although no official poll is taken.]

Presentation (continued): Hierarchical Collections are presented as a special case of Web Collections, used to support FCS or "directory"-like behavior.

Light Links (Meta Data) .............. Jim Whitehead

Presentation: Explanation of link structure (source, dest, type) as an expression of a binary relation on the cross product RESOURCE X RESOURCE.

Question: Who manages the link "types" (e.g., registration)?
Answer: Core link types ("core" to implementing WEBDAV) are defined in the specification. Other "types" are owned and managed by various groups (e.g., Dublin Core). The namespace convention for link types is schema.(schema.type).

Response: Take care! Define namespace requirements for links so that WEBDAV complies with the "Schema" working group (Chris Weider, Chair).

Comment: Why be non-extensible w.r.t. the link definition? Allow other fields to be added to the core fields (source, dest, type).

Comment: PICS is doing meta-data. The authors would be well advised to look at the PICS effort.

[1] Shorthand for "unstructured discussion". No disrespect for the group or the Chair is intended.

Presentation (continued): The LINK method is presented.

Comment: The method name conflicts with LINK in the HTTP/1.1 appendix.

Comment: Links as presented are inadequate for annotation.

--------- lunch ---------

Question: Are links discoverable/extensible in a robust way? How does the Schema/Method stuff tie in?

Comment: Look at the PICS namespace registration work; it may have good ideas for schema registration.

Presentation (continued): The UNLINK method is presented.
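[Ed. illustration: the link structure presented above is a labeled pair in RESOURCE x RESOURCE, and (per the later Q&A) the triple (source, dest, type) itself serves as the unique ID. A minimal Python sketch; the URIs and the "DAV.Author" type name are invented for illustration and do not come from the draft:]

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Link:
    """A WEBDAV-style link: an element of RESOURCE x RESOURCE labeled
    with a type.  The triple itself is the link's unique identity."""
    source: str  # URI of the source resource
    dest: str    # URI of the destination resource
    type: str    # namespaced type, per the schema.(schema.type) convention

a = Link("http://example.org/report", "http://example.org/jim", "DAV.Author")
b = Link("http://example.org/report", "http://example.org/jim", "DAV.Author")

assert a == b          # identical triples denote the same link
assert len({a, b}) == 1  # frozen dataclass is hashable; duplicates collapse
```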
Question: Should the first rev of WEBDAV be dodging notification/transaction issues?

Comment: This link model may not be efficient on the wire (Keith will write down his concerns on this issue and submit them to the list).

Presentation (continued): The convenience methods GETLINKS, GETLINKVAL and SETLINKVAL are presented.

Question: What identifies a link?
Answer: The triple (source, dest, type) is the unique ID.

Question: Must either source or dest equal the URI of the resource which contains the link?
Answer: Yes; the authors (somewhat arbitrarily) decreed that source or dest should equal the URI of the resource where the link resides.
Response: (Roy) This restriction should be lifted.

The LINKSEARCH Method ............... Yaron Goland

Presentation: The LINKSEARCH method is briefly presented.

Comment: Scoping rules for search are inadequate.

Comment: Use "agent" rather than "arbiter".

Comment: BNF errors need to be cleaned up.

Comment: Interaction between the resource namespace and links (which are defined on resources) can be problematic in a distributed setting.

Comment: The BNF should be modified to reflect an extensible link structure.

Comment: The design team is shortsighted due to its focus on schedule constraints. Is the team doing its homework? Do the team members understand the issues? Are the team members capable of understanding the logical consequences of their design decisions?

--------- break ---------

Locking ...................... Steve Carter

Presentation: The difference between "checkout" (for version control) and "lock" (for a document repository) is explained. Lock is intended to control resource "collision"; lock is not an access control mechanism.

Question: Why can't lock tokens cross "client space" boundaries? This seems to be an arbitrary decree.

Comment: Range locking as described abuses the HTTP/1.1 notion of range. Why not address ranges with their own URI?

Comment: Agrees with the comment above.
The design team broke its own rule (WEBDAV acts only on URI-addressable objects); URI-addressable ranges are more compatible with the HTTP philosophy. Also, how do entity tags differ from lock tokens, functionally speaking?

Comment: What is WEBDAV locking in the context of highly replicated resources?

Hubub: Much discussion over replicated, distributed resources.

Chair: Requests that range locking be taken to the discussion list, noting the following outstanding issues:
1. locking highly replicated resources,
2. advisory lock vs. exclusive lock,
3. support for "graceful degradation" of functionality.

Comment: With respect to the above recommendation by the Chair: the proper forum for resolving design issues in general is the list. The IETF is an open forum.

Question: What about "orphaned" lock tokens? The spec should say something about the conditions that can lead to lost tokens.

--------------------------------------
Special Presentation *** Email Access to WEBDAV Functionality
--------------------------------------

Email & WEBDAV ................. Einar Stefferud

Presentation: Do not ignore the mail transport level! Push vs. pull models; mail has some pluses:
1. Latency can work for you,
2. Administrative functions - mail is more autonomous,
3. Mail is mature, tested technology.

Industry has an evolving, Jungian mindset:
1. Computing - no connectivity,
2. Networking - autonomous homogeneous nodes,
3. Internetworking - autonomous heterogeneous nodes,
4. Interworking - autonomous heterogeneous distributed processes.

--------------- Day Two ---------------

Version Control .................. Jim Whitehead

Presentation: Review of "champion" models. Presents the "version tree" as conceived by the design team, with "default published version" (dpv), "history" links and "version tree handle".

Comment: Tree handle and the dpv: if we access the dpv through the tree handle for some methods, how do we get to the handle itself?
Response: Can't use redirect; we get pushback from the Server Config crowd.
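[Ed. illustration: one way to picture the presented model - a "version tree handle" addressing the tree as a whole, "history" links between versions, and a "default published version" (dpv) that requests on the handle resolve to. All URIs and names below are invented for illustration; this is a sketch of the concept, not the draft's design:]

```python
from dataclasses import dataclass, field

@dataclass
class Version:
    uri: str
    derived_from: list = field(default_factory=list)  # "history" links to parents

@dataclass
class VersionTree:
    handle_uri: str                        # the "version tree handle"
    versions: dict = field(default_factory=dict)
    dpv: str = ""                          # URI of the default published version

    def add(self, uri, parent=None):
        self.versions[uri] = Version(uri, [parent] if parent else [])

    def resolve(self):
        # In the presented model, a request on the handle serves the dpv.
        return self.versions[self.dpv]

tree = VersionTree("http://example.org/doc;tree")
tree.add("http://example.org/doc;v1")
tree.add("http://example.org/doc;v2", parent="http://example.org/doc;v1")
tree.dpv = "http://example.org/doc;v2"

assert tree.resolve().derived_from == ["http://example.org/doc;v1"]
```

The open question in the minutes - which methods act on the dpv via the handle, and how the handle itself is addressed - is exactly the part this sketch glosses over with the `resolve` shortcut.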
Question: Which methods act on the dpv via the tree handle? Which do not? Is there consistency here?

Hubub: Extended discussion on what version "control" means in a distributed environment.

Chair: Directs that a discussion of version control in a distributed, replicated environment be established on the list.

Question: Does the design accommodate "derivative" work?

Question: Where is the definition of "history" in the spec? Also, "history" may itself be a distributed object; does the spec recognize this possibility?

Comment: The spec assumes that the server enforces a version control model. Shouldn't the client be asserting the model? At least some provision should be made for the client setting policy rather than assuming the server always does so.

Comment: The design should support derivative work in a widely distributed context (i.e., the design should support derivative work with the client asserting policy).

Comment: The concept of "history", the dpv, indeed much of the spec assumes a central point of control. Monolithic thinking defeats the long-term vision of "Interworking". Don't be seduced into taking the easy way of server-centric, single point of control - the way of the Dark Side.

Comment: Think in terms of "authoritative" vs. "non-authoritative" sources with respect to where things reside (e.g., "Clear Case").

Comment: Interoperability is the IETF gold standard by which to measure the elements of any protocol. In this case, provision should be made for the client to set version policy, and it should be recognized that the "tree" may well be a highly distributed object with many locally idiosyncratic representations. The tree might not reside in a single, absolute reference frame; but the client should be able to assert any logically consistent framework from which to view the tree (in whole or part) that can be expressed as a "well formed" policy.

Comment: WEBDAV should not exclude server/server transactions. Fine if WEBDAV doesn't define such transactions.
Comment: The design team should focus on the needs of the Internet when writing the spec; this may help shift emphasis to the client and interoperability. Why not split the spec? A monolithic spec leads to monolithic design!

Presentation (continued): The CHECKOUT method is presented. Discovery of method capability is deferred to a later presentation.

Question: The checkout function is too complicated. Must one do discovery? It seems to require a three-step procedure.
Answer: One need only ask once for server support of method capability.

Question: "Derived-From" - wasn't this supposed to be addressable? What happened here?

Question: Where is "UNCHECKOUT"?

--------- lunch ---------

Chair: Leads a discussion on whether to seek official status as a working group with the IETF, the W3C, or both. Much discussion.

Chair: Moves for a vote on the following: Shall WEBDAV proceed to seek IETF working group status with all due speed?
Response: Motion seconded. The Ayes have it.

Chair: Moves that the group take ten minutes discussing the following proposal: Should we split the draft spec?
Response: Motion seconded. Passed. Much discussion of pros and cons. An emerging consensus to re-evaluate the situation when IETF working group status is attained.

Chair: When should we schedule the WEBDAV working group meeting at Memphis?
Response: General response indicates Monday morning would be best.

Chair: Directs the editor to submit the current draft to the IETF asap.

Chair: Directs the editor to post meeting notes asap, with the above directive taking precedence.

Namespace Manipulation ......... Asad Faizi, Del Jensen

Presentation (Asad): Asad briefly presents COPY, MOVE, etc.

Comment: The statement "byte move or anything else" should be "byte move and anything else".

Comment: Discoverability is fine, but how can a client enforce consistency?

Comment: Having different things happen at different servers is a recipe for disaster.

Hubub: Much discussion on the meaning of the phrase "byte for byte copy". What is COPY in the Web context?
The phrase "byte for byte" is not sufficiently abstract; it should be phrased without reference to the encoding or representation of the resource data. People who cannot grasp this abstraction have no business writing the specification.

Comment: Change the name of COPYHEAD.

Comment: WEBDAV header names should not mimic other header names. Too confusing, even if context resolves outright collisions.

Comment: Whether COPY/MOVEHEAD should be in the spec should be discussed on the list.

[Ed. Note: The Chair also took notes on Namespace:]

The methods COPYHEAD and MOVEHEAD also elicited some discussion. The rationale for these methods was stated as being a way for clients to discover, before the method is performed, the consequences of a copy or a move. Participants pushed back on this, stating that all that was needed was a way for clients to determine what happened after a method was performed. The need to discover ahead of time which links would be copied/moved was also raised, and there was some discussion on why predefined links might change from one part of the namespace to another.

[Ed. Note: End of Chair's Notes on Namespace]

UNDELETE, DESTROY .................. Del Jensen

Presentation: Brief overview of semantics.

Comment: HTTP/1.1 DELETE semantics imply that all access to an object via HTTP is removed. Therefore UNDELETE-ing a DELETEd URI does violence to the HTTP object model.

Hubub: Much discussion on UNDELETE and DESTROY, at the end of which there appears to be an emerging consensus that UNDELETE and DESTROY are not appropriate WEBDAV functionality.

--------- break ---------

Lauren: Informs the group that the XML draft spec is available at <http://www.w3.org/pub/www/tr/wd-xml-961114.html>.

[Ed. Note: The Chair kindly took notes on the following presentation]

Method Capability Discovery ............
Yaron Goland

Presentation: Yaron gave an overview of his proposal (which he stated had not been previously agreed to by the design team) for method capability (interface) discovery, which he termed "schema discovery."

Comments: Participants noted that the functionality described for capability discovery in DAV was similar to the proposed functionality of the Protocol Extension Protocol (PEP) for HTTP, and there was a proposal to remove the capability discovery mechanism from the WebDAV specification and work it into a general-purpose HTTP extensibility mechanism. One participant also noted that a similar set of functionality was being proposed by the Internet Printing group. Another observation about the capability discovery mechanism was that it shared some similarities with an RPC-type system, and was half of an interface definition language. Participants questioned whether this level of generality was needed.

This then led into a high-level discussion on the need for a capability discovery mechanism. What came out during the discussion was that the design team had assumed a model where clients would have to discover the capabilities of a server, then adapt themselves to the current capabilities of that server. Several participants stressed that a better model would be to have simple clients able to operate with a variety of back ends (with a minimum of adaptation). Several participants also noted that, with so many different functionality options, it was difficult to determine the core DAV functions that all clients can depend on.

At the end of this session, there was a poll of opinion across the participants, where each participant was able to express an opinion on capability discovery. The sentiment exposed by this was that the capability discovery mechanism in the current specification is too complex, and too powerful for WebDAV needs.
Furthermore, the sentiment was expressed that it would be good if the WebDAV specification could be modified so that capability discovery isn't necessary at all.

---------------- End Of Meeting ----------------
Received on Friday, 7 February 1997 16:39:48 UTC