W3C home > Mailing lists > Public > ietf-http-wg-old@w3.org > January to April 1996

Re: PERSIST: propose to make default

From: Jeffrey Mogul <mogul@pa.dec.com>
Date: Mon, 22 Apr 96 11:59:03 MDT
Message-Id: <9604221859.AA29862@acetes.pa.dec.com>
To: "David W. Morris" <dwm@shell.portal.com>
Cc: http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com
    The difference is whether clients and servers can report unexpected
    closes as an error and provide diagnostic information, or whether the
    unexpected closes must be tolerated silently for the correct
    operation of the protocol. This change to the default behavior will
    make diagnosis of network failures even more difficult than it
    already is. What exactly is the error analysis support you would
    expect from a client or server when the protocol is so blasé about
    unexpected closes?

    Quite simply, my increasing discomfort with this proposed change
    stems from the change in semantic significance for what is
    basically a communications failure ... the unexpected close.

I think other people have answered your questions about the syntactic
issues.  You raise a good point, though, that we are basically hiding
additional complexity from the end-user (by expecting the client
software to retry after certain "failures") and so we could end up
burying diagnostic information, as well.
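One way to keep the retry from burying that information is to have the
client record each unexpected close even as it silently recovers.  A
minimal sketch of that idea (the function and parameter names here are
illustrative, not from any specification or implementation):

```python
def fetch_with_retry(issue_request, url, max_retries=1, close_log=None):
    """Issue an idempotent request, retrying once after an unexpected
    connection close -- but record the event rather than discard it."""
    attempts = 0
    while True:
        try:
            return issue_request(url)
        except ConnectionError as exc:
            attempts += 1
            # Keep the diagnostic trail: note which URL failed, on
            # which attempt, and why, before quietly retrying.
            if close_log is not None:
                close_log.append((url, attempts, str(exc)))
            if attempts > max_retries:
                raise
```

The point of the `close_log` is that the "failure" stays visible to
whoever tunes the system, even though the end-user never sees it.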

It might be instructive to draw an analogy to a demand-paged VM
system.  In both cases, "the system" hides certain faults (page
faults or TCP closes) from the "user" (a program in the VM example,
a person in the HTTP example) and silently patches things up.  This
provides the user with a relatively simple abstraction, at the cost
of some more implementation work and the possibility of "performance
failures" if the working set exceeds the available resources.
(Note that in both the VM example and the persistent connections
example, the critical issue is whether we get enough locality of
reference to offset the cost of handling the "faults".)

Most VM systems that I know about (except perhaps those on PCs)
keep statistics related to page-fault rates and causes, and this
allows a system administrator to manage and configure the system
to optimize performance.  It would be a good idea for an HTTP
server to keep track of its actions in closing persistent connections,
along with whatever tuning knobs exist that could affect this.

For example, what fraction of connections are closed because the
system had no free connection resources?  What is the mean (and
variance) of the number of requests/connection?  Etc.
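A server could gather exactly those statistics with a small counter
structure like the following sketch (class and method names are my own
invention, for illustration only):

```python
import statistics

class PersistentConnStats:
    """Track why persistent connections were closed, and how many
    requests each connection served before closing."""

    def __init__(self):
        self.requests_per_conn = []   # one entry per closed connection
        self.close_reasons = {}       # reason string -> close count

    def record_close(self, reason, request_count):
        self.close_reasons[reason] = self.close_reasons.get(reason, 0) + 1
        self.requests_per_conn.append(request_count)

    def fraction_closed(self, reason):
        """Fraction of all closes attributed to the given reason,
        e.g. 'resource_exhaustion' when no free connections remained."""
        total = sum(self.close_reasons.values())
        return self.close_reasons.get(reason, 0) / total if total else 0.0

    def mean_requests(self):
        return statistics.fmean(self.requests_per_conn)

    def variance_requests(self):
        return statistics.pvariance(self.requests_per_conn)
```

Periodically dumping these numbers would give an administrator the same
kind of tuning feedback a VM system gets from its page-fault counters.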

VM problems are also dealt with by changing program behavior
(or, more realistically, by trying to educate programmers about
how not to misuse an LRU page cache.)  We certainly ought to
pay some attention to this aspect of HTTP persistent connections,
but the teaching probably has to be directed at the people who
design Web services, not at the 30 million (+/- 40 million) actual
web users.

-Jeff
Received on Monday, 22 April 1996 12:12:29 EDT
