Re: FW: HTTP Design Issues, lessons from WebDAV's Property and Depth header experiences

From: Brent Callaghan (Brent.Callaghan@eng.Sun.COM)
Date: Thu, Dec 31 1998


Larry Masinter writes:
>Despite the disclaimer about HTTP-NG in here, these seem like useful
>design criteria for HTTP-NG.

Thanks to Larry for forwarding Yaron's HTTP design issues writeup.

I'm a newcomer to this group, so forgive me if I'm resurrecting
old arguments or restating the obvious, but several of Yaron's
design issues reminded me of similar design considerations
in the NFSv4 working group.

1) Pipelining.

   NFS clients get pipelining through the underlying RPC
   protocol, which acts as a MUX protocol.  The client can pump any
   number of unacknowledged calls down the wire, then match each
   reply to its call (using the XID) as the replies return.
   Pipelining makes a huge difference to I/O throughput through
   read-ahead and write-behind.
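
   As a sketch (with hypothetical names, not the actual NFS client
   code), the XID bookkeeping behind this pipelining looks roughly
   like:

```python
import itertools

class RpcPipeline:
    """Sketch of RPC-style pipelining: calls are sent without waiting
    for replies, and each reply is matched back to its call by a
    transaction ID (XID).  Names are illustrative only."""

    def __init__(self, transport):
        self.transport = transport          # object with send()/recv()
        self.next_xid = itertools.count(1)
        self.pending = {}                   # xid -> procedure name

    def call(self, procedure, args):
        """Send a call immediately; do not block for the reply."""
        xid = next(self.next_xid)
        self.pending[xid] = procedure
        self.transport.send({"xid": xid, "proc": procedure, "args": args})
        return xid

    def collect_reply(self):
        """Receive one reply, in whatever order it arrives, and
        match it to the outstanding call via its XID."""
        reply = self.transport.recv()
        procedure = self.pending.pop(reply["xid"])
        return procedure, reply["result"]
```

   Because replies carry the XID, the client is free to have many
   calls outstanding and can accept the replies out of order.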

   Pipelining by itself does not fix some of the latency problems built
   into the protocol. For instance, to read a file you need to send
   a LOOKUP request to get the filehandle, then send the filehandle
   in a READ request to get the data.  That's two round-trips to get
   data that HTTP can get in a single GET.
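
   With a hypothetical client helper for each RPC, that two-round-trip
   sequence looks like:

```python
def naive_read(client, directory_fh, name, count):
    """Two round trips: LOOKUP translates a name into a filehandle,
    then READ uses that filehandle to fetch the data.  The client
    API here is made up for illustration."""
    fh = client.lookup(directory_fh, name)          # round trip 1
    return client.read(fh, offset=0, count=count)   # round trip 2
```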

   Our first thought was to create a new RPC procedure that combined
   LOOKUP with READ.  Alas, this design trend leads to progressively
   "fatter", more complex calls that get larded with more and more
   options with the intent of getting the server to do more with a
   single request.

   Our current strawman design uses "compound requests" to fix the
   latency problem (getting the server to do more with a single
   request) while avoiding protocol bloat with arbitrarily complex
   operations.  You define a small set of primitive operations that
   can be combined to form more complex requests according to the
   client's needs.  The server simply executes the primitive operations
   in sequence until complete, or until one fails, then returns the
   results in a single reply.  The only context required is a
   filehandle identifying the object that the primitive ops are
   acting on.  The filehandle can be changed midway through a
   compound request, so in theory a single request could act on
   multiple objects.  A compound request is not required to be
   idempotent or atomic, which puts the onus on the client to limit
   its expectations of a compound request based on the "clean-up"
   required if it fails.
   For more info, see section 7 of the strawman protocol design:
   There is also a requirements document at:
   or just go to our charter page:

2) Byte Bloat and Relatedness.

   Yaron describes issues with WebDAV's PROPFIND request.  I think some
   of the relatedness issues tie back to compound requests, i.e. a
   sequence of primitive operations that act within an object context.

   For what it's worth, the NFSv4 strawman generalizes the way in
   which we retrieve file attributes.  Previous NFS versions had
   four different operations (GETATTR, FSSTAT, FSINFO, PATHCONF),
   each returning a pre-defined bundle of attributes.  We wanted
   a more flexible model that allowed clients to select precisely
   the attributes of interest.  We also wanted attribute requests
   and responses to be compact, since NFS is a binary protocol built
   on XDR marshalling of strings, binary integers, etc.
   We settled on a single pair of calls to get/set attributes: GETATTR
   and SETATTR.  These calls can act on multiple attributes, specified
   in a bitmap.  The bitmap identifies the sequence of attribute values
   that follows the bitmap.  The bitmap is nice for identifying the
   attributes being requested or set. You can also query the object to
   return the bitmap of supported attributes.  A compact representation
   was important because clients seem to have an insatiable appetite for
   file attributes, e.g. "Get me the attributes for every object in
   this directory."

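   To illustrate the idea (with a made-up attribute numbering, not
   the real NFSv4 attribute list), a bitmap selects the attributes
   and fixes the order of the values that follow it:

```python
# Hypothetical attribute-to-bit assignment for illustration only.
ATTR_BITS = {"size": 0, "mtime": 1, "owner": 2, "type": 3}

def encode_bitmap(names):
    """Set one bit per requested attribute."""
    bitmap = 0
    for name in names:
        bitmap |= 1 << ATTR_BITS[name]
    return bitmap

def decode_attrs(bitmap, values):
    """The bitmap identifies which attribute values follow, in
    ascending bit order, so the response stays compact: only bits
    plus the selected values go on the wire, never attribute names."""
    ordered = sorted(ATTR_BITS.items(), key=lambda kv: kv[1])
    it = iter(values)
    out = {}
    for name, bit in ordered:
        if bitmap & (1 << bit):
            out[name] = next(it)
    return out
```
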
   Our current attribute proposal also includes provision for attributes
   named by strings, though these are handled similarly to the "named streams"
   in NTFS; an object can have named streams in the same way that
   a directory can have file entries.  The protocol makes no effort to
   standardize the names of these attributes.

I'm intrigued by some of the similarities between HTTP and NFS.
Both protocols provide access to a hierarchy of files (HTTP calls
them "documents").  NFS treats the files as binary data, whereas
HTTP conveys a lot more information on file content through MIME
type etc.  There's no doubt that you can do "filesystem" things
with HTTP, and "web" things with NFS or WebNFS, but I think it 
would be unwise to design a protocol that would do both of these
things with the expectation that the filesystem clients and the
web clients would both be happy.  HTTP (and, I hope, HTTP-NG) is
more feature-rich and aimed at browser-style use, whereas NFS
is constrained by the filesystem APIs (POSIX and Win32) that are
expected to use it.