Minutes, HTTP Working Group

Reported by Phill Hallam-Baker and Rohit Khare, notes edited by Dave
Raggett and Larry Masinter.

The meeting was held as two sessions, one in the morning with around
100 people and one in the evening from 1930-2200. Minutes for the
morning session were taken by Phill Hallam-Baker and for the evening
session by Rohit Khare.

Morning session:

Larry Masinter has become co-chair of the WG along with Dave Raggett.

Roy Fielding first presented, and we discussed, the status and plans
of the working group.

HTTP/1.0 was proposed as Best Current Practice but rejected by the
IESG because it didn't describe people's view of what was 'best', and
thus that was an inappropriate status.

Roy described his proposal that HTTP 1.1 be 'fast track' and that HTTP
1.2 contain the extension mechanism and those bits that didn't make it
into HTTP 1.1. He explained that he wrote things into the HTTP 1.1
specification even though they might be controversial, because it was
easier to take things out than to add them in.

* After some back and forth about a variety of options, the desire for
  a stable core, and so on, the discussion led to the proposal that
  HTTP/1.0 be re-written as an Informational RFC describing current
  practice.

  However, 'current practice' does not mean that we should document
  all 50 versions of content negotiation as practices, merely that we
  should use the core of the 1.0 document as it stands for the parts
  that are specified, and note that other features are not implemented
  consistently.

Some other points raised:
* 1.0 is not actually much simpler than 1.1. 1.0 ignores proxies and
  gateways, however.
* all current clients/servers/proxies claim HTTP/1.0 in their headers.
  If we were to create a 'best current practice' document for
  HTTP/1.0, there would be something for clients/servers/proxies
  to conform to, but then there's no way for an agent to tell whether
  it is talking to a peer that claims conformance.
* The 1.0 specification has been useful as it is, because it is stable
  and consistent even if not complete.
* It's a bad idea to make something a standards track document and
  then obsolete it immediately. 1.0 should not be a proposed standard;
  1.1 should be.
================================================================
Dave Kristol discussed his draft:
   draft-kristol-http-state-info-01.txt

The idea is to support stateful sessions in a stateless protocol,
similar to Netscape cookies. Examples include a shopping basket and a
subscription library system (which remembers what has been looked at).

The draft defines a new header, State-info, that carries state
information between user agent and origin server.

Requirements
    o cache friendly
    o simple to implement
    o simple to use
    o can be deployed quickly
    o downward compatibility
    o reliable
    o protect privacy
    o support complex dialogues
    o enough cache control possible
    o minimal risks when used with non-conforming caches
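
(Ed: as a rough sketch of the mechanism described above -- not the
draft's normative syntax; the header name and token format here are
illustrative assumptions -- a client can simply echo back whatever
opaque state the origin server last sent:)

    # Minimal sketch of header-carried session state, in the spirit of
    # the State-Info proposal.  Header name and token format are
    # illustrative assumptions, not the draft's normative syntax.
    import http.client

    def fetch(host, path, state=None):
        conn = http.client.HTTPConnection(host)
        headers = {}
        if state is not None:
            headers["State-Info"] = state   # echo server-assigned state back
        conn.request("GET", path, headers=headers)
        resp = conn.getresponse()
        body = resp.read()
        # remember any new or updated state the origin server handed us
        new_state = resp.getheader("State-Info", state)
        conn.close()
        return body, new_state

    # Usage: carry the opaque token across a two-request "session".
    # body, state = fetch("shop.example.com", "/basket/add?item=42")
    # body, state = fetch("shop.example.com", "/basket/view", state)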

He went through the proposal. There is some belief that this may
'compete' with the Netscape 'cookies' method, but that is claimed not
to be sufficiently documented. Comments during the meeting:

(Larry Masinter) The privacy concern is not that the site that
initiated the session might know things about the user, but that other
non-related sites might be able to find out things about your
sessions.
(Phill Hallam-Baker) The security concern of allowing a server to store
data on the client's disk should be addressed explicitly.  (Ed: this
was a subtle point)
(Alex Hopmann) There is an issue with regard to servers with multiple
CGI services which do not wish to share state information.
(Dave Kristol) This is a server implementation issue.
================================================================
Rohit Khare gave an overview of his PEP proposal
 draft-khare-http-pep-00.txt.

--
1. History
   "Motivation; Existing Practice of Adding Headers; Extension/Negotiation
   experience in other IETF work areas"

2. How "extensions" work
	- feature present
	 "Modes activated by mere presence of a header: e.g. keep-alive, etc"
	- reprocess body
	 "Filter message through program with <args>: e.g. encryption"
	
3. PEP features
	- naming
	 "Packaging a standard into a single name; a la ESMTP"	
	- addressing
	 "Explaining which HTTP agents need to act on extensions; a la IPv6
	 option faulting and scoping"
	- negotiation
	 "Advertising which extensions (at which settings) HTTP agents will
	 accept; lessons from TELNET Option Negotiation"
	
	- processing
	 "Order-of-application; reuse of Content-Encoding header to form pipe"
	
4. Directions
   "W3C is developing this for security, payments application; PEP will be HTTP 
   1.2; May form the basis for integrating work of HTTP sub-working-groups"
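
(Ed: the negotiation point can be illustrated in the abstract -- this
is not PEP's header syntax, and the extension names below are
hypothetical -- as each side advertising what it will accept and
proceeding with the intersection:)

    # Abstract sketch of extension negotiation: intersect what the client
    # wants with what the server advertises.  Extension names and settings
    # are hypothetical, not PEP syntax.
    def negotiate(client_wants, server_accepts):
        """Return the extensions (with settings) both sides can live with."""
        agreed = {}
        for name, setting in client_wants.items():
            if name in server_accepts and setting in server_accepts[name]:
                agreed[name] = setting
        return agreed

    client_wants   = {"payment": "v1", "encryption": "strong"}
    server_accepts = {"payment": {"v1", "v2"}, "compression": {"gzip"}}
    print(negotiate(client_wants, server_accepts))   # {'payment': 'v1'}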

Comments:

(John Klensin, speaking personally and not as A.D.)  The trend in the
IETF has been toward strict versions, something that people can read on
the back of a box. Once an extension list is in a protocol we have a
checklist. We have never handled a transition from an extensions
mechanism to a real verb well. PEP should consider migration to real
verbs properly.

(Dave Crocker) Typically the view is that an extension mechanism
allows negotiation of optional features.  A different view is to say we
want to move from here to this one place, and that the negotiation
mechanism allows movement of a functional core to a new base.  This is
Dave's view of the ESMTP work. Negotiation for combinatorial choices
does not seem to work, but as a migration strategy it does.

================================================================
Don Eastlake described his Internet Draft:
	draft-eastlake-internet-payment-00.txt

He has looked at the problem of doing payments on the Internet, taking
a pretty general view. A mechanism is needed for specifying payment
systems so that, once there is an idea of prices, the parties can
decide which system will actually be used. The draft does not attempt
to constrain the payment system. It allows a payment or receipt to be
put into a header, so payments can be done over the Internet in a
common framework. The current plan is to recast the draft in terms of
PEP.
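
(Ed: the selection step the draft motivates -- choosing a payment
system once prices are known -- might look like this; the system names
and quote format are purely hypothetical placeholders:)

    # Sketch of choosing among advertised payment systems: given price
    # quotes per candidate system, pick the cheapest one the client also
    # supports.  Names and quote format are hypothetical.
    def choose_payment(quotes, client_supports):
        """quotes: {system_name: price}; returns (system, price) or None."""
        usable = {s: p for s, p in quotes.items() if s in client_supports}
        if not usable:
            return None
        return min(usable.items(), key=lambda item: item[1])

    quotes = {"system-a": 0.02, "system-b": 0.50}
    print(choose_payment(quotes, {"system-b"}))   # ('system-b', 0.5)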

(Larry Masinter) In order to do reliable payments you need real (ACID)
transactions, but HTTP does not seem to have any transaction
mechanisms; e.g., you do not have the ability to find out where an
aborted transaction got to.  We might expect to provide a transaction
mechanism on top of HTTP to permit this to be used.

(Don Eastlake) We should expect such matters to be handled by the
payment system rather than the HTTP layer; e.g. the payment server
allows interrogation to find out where a transaction got to. Some
lightweight payment systems will not provide such guarantees; your 2
cents may just be lost. Don attempted to restrict the draft to just
messages and receipts.

Dave Kristol expressed reservations about costs being embedded in
documents and URLs.

We deferred further discussion to the Internet Payment BOF.

================================================================
Alex Hopmann was scheduled to discuss draft-ietf-http-ses-ext-00.txt.
However, half of that internet-draft is now in HTTP/1.1.
================================================================
Ari Luotonen said only a few words about
draft-luotonen-ssl-tunneling-00.txt. It's been out for half a year.

The WG did not come to any conclusions on this issue.
================================================================
We then talked about draft-luotonen-http-url-byterange-00.txt.

This draft inspired the work in the HTTP/1.1 draft on ranges. The
URL-method for byte ranges will be abandoned, although it may go
through an interim release by Adobe/Netscape.

The core HTTP/1.1 draft need not specify this if the methods and
additions can be done in a separate document and linked to 1.1 by
reference; this is still an option. 

One motivation for this feature was partial retrieval of PDF files;
however, PDF as currently defined in application/pdf does not have
binary offsets or byte ranges. Apparently this is a feature of a new,
to-be-released version of PDF.

John Klensin pointed out that the byte range protocol should look at
checkpoint and restart experience.  It was likely that the real
problem is that the document wants to be returned in parts, and that we
need to devise a way inside the document format to refer to parts and
use that for references.
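
(Ed: for reference, a header-based partial retrieval in the style the
1.1 work headed toward; the Range syntax shown is the one HTTP/1.1
eventually standardised, and the host and path are placeholders:)

    # Illustration of a header-based partial retrieval.  The Range syntax
    # is as later standardised in HTTP/1.1; the 1995 draft wording may have
    # differed.  Host and path are placeholders.
    import http.client

    conn = http.client.HTTPConnection("www.example.com")
    conn.request("GET", "/big.pdf", headers={"Range": "bytes=0-1023"})
    resp = conn.getresponse()
    print(resp.status)                      # 206 if the server honours the range
    print(resp.getheader("Content-Range"))  # e.g. "bytes 0-1023/482190"
    first_kb = resp.read()
    conn.close()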

================================================================
Roy Fielding then began a discussion of the HTTP 1.1 draft.

Comments in the first section included:

(Larry Masinter) He thought that caching was meant to be a transparent
optimisation technique. The nice simple semantics of HTTP are being
interfered with by the clutter of discussing this intermediary
optimisation. The problem is that there are lots of different types of
caches and hence different types of optimisation between server and
intermediaries.  We're describing cache control headers in terms of
operational effect rather than semantic differentiation; this does not
anticipate future cache techniques.

Roy wasn't sure; these were just the header names, and the semantics
were described that way.

Someone suggested putting content negotiation outside in a separate
draft.
================================================================
Session 2 ran 19:30 to 23:30

Simon Spero discussed HTTP-NG.

HTTP-NG is the first protocol endorsed by Dogbert. "Resist and you
will be shot." (Scott Adams cartooned a Dogbert on the HTTP-NG draft.)

His presentation was a shortened version of the one to be presented at
the WWW4 Conference in Boston.

The primary modifications have been:

Split the document into Architecture and Basics documents.

The basic purpose is: negotiation, meta-information, and control.

Highlights:

- Uses SCP to multiplex sessions.
- Transition strategies using DNS CNAME to indicate NG support.
- Not a superset of HTTP/1.x.
- Just forwarding HTTP/1.0 through the NG encoding reduces packet count
  by 50% and improves speed by 180%.
- Negotiation and profiles. Dictionaries on either side constitute
  shared state, and profiles are predefined dictionaries of preferences.
  Dictionary structure patterns can be reused in different exchanges.
- Security: key exchange.
- Reinventing several secrecy and signature mechanisms.
- GET, PUT, and meta-information.
- HTTP/1.x metainfo + US MARC records.
- Can request metainfo for included or linked objects.
- Speculative sending of data can be enabled; experiments can reduce
  latency to 1/5th.

Dave Kristol asks, why the radical departure from current practice?

A: latency, latency, latency. Optimization for very fast and very
slow networks. Reducing the number of active connections. HTTP/1.1 can
do persistent connections, but can't do multiplexing (which increases
user-perceived speed).

Kristol: you propose speculative transmission, which uses bandwidth.
Second, how many of the benefits can be captured by HTTP alone?

A: That's what these numbers are from: using 1.x messages encapsulated
   in SCP.

Alex Hopmann: So why not just use [your testbed], SCP'd HTTP/1.1?

Simon: the compactness of the records: we can put 20 cache updates in a
single packet.

Ted Hardie pointed out that using DNS CNAMEs to indicate which hosts could
handle HTTPng would not work where HTTPng servers were being run by
those without the right to change DNS entries (which happens
frequently with HTTP).  It also uses the DNS as a directory service,
which the DNS community is trying to avoid.

Simon: it's an easy hack. All other schemes needed an extra RTT. This
works, and cuts that out.
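
(Ed: the CNAME hack can be sketched roughly as below; the "ng." name
prefix used as the capability marker is a hypothetical convention for
illustration only, not the scheme Simon actually described:)

    # Sketch of a CNAME-based capability probe: guess NG support from the
    # canonical name the resolver returns.  The "ng." prefix marker is a
    # hypothetical convention, not the actual signalling scheme.
    import socket

    def looks_ng_capable(hostname):
        try:
            canonical, aliases, addrs = socket.gethostbyname_ex(hostname)
        except socket.gaierror:
            return False
        return canonical.startswith("ng.")

    print(looks_ng_capable("www.example.com"))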

Q: Upgrade path: are these competitors?

A: No, these are complementary. 1.x is very simple to do a quick hack
job of. At the highest levels of performance it starts breaking down.
It's not so ideal for 1M hits/hr.

================================================================
Andy Norman then reported on the NG Prototype

- SLIP over 19.2 k 115% to 140% throughput
- UK to HP-India: 140% to 160%
- US to UK through SOCKS (130%-500%)
- Compared against browser using 4 simultaneous connections w/o keep alive
- This is using straight 1.0 over SCP, without "header reuse"

================================================================
We then discussed the role of NG in this working group.

Should this be in the charter, or wait for a formalized proposal?

Dave Raggett (DSR) invited comments on removing NG from the charter.

Paul Hoffman: From this presentation, it sounds like this should be a
completely separate track until it's clearer.

Masinter: many features in 1.1 have emerged from lessons learned from
NG. We should look to NG for experience. That doesn't mean this should
be our work item. So far most of the work has been done outside our WG.
We should explicitly refer in the charter to paying attention to the NG
work. I wouldn't have a lot of confidence in the milestone dates and
work plan for NG given its speculative features. That's my uneasiness.

================================================================
We then returned to the discussion of 1.1:

Yes, the 'Host:' header is required in 1.1.
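
(Ed: a minimal 1.1-style request with the Host header, shown over a raw
socket so the header is visible; the host and path are placeholders:)

    # Minimal sketch of a 1.1 request carrying the now-required Host
    # header (it identifies the intended server when several names share
    # one address).  Host and path are placeholders.
    import socket

    request = (
        "GET /index.html HTTP/1.1\r\n"
        "Host: www.example.com\r\n"
        "Connection: close\r\n"
        "\r\n"
    )
    with socket.create_connection(("www.example.com", 80)) as sock:
        sock.sendall(request.encode("ascii"))
        reply = b""
        while chunk := sock.recv(4096):
            reply += chunk
    print(reply.split(b"\r\n", 1)[0].decode())   # status line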

Forwarded:
Q: Is there experience with loops occurring? Is Forwarded useful if
some gateways strip it?
A: Forwarded is still useful as a diagnostic.
Comment: if proxies remove Forwarded headers for security reasons, then
the fact that this has been done should be indicated.
Phill: Secrecy can be obtained by using a secret hashed token: no host
info needs to be revealed.
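
(Ed: Phill's idea might look roughly like this: each proxy appends a
keyed hash of its identity instead of the identity itself, so it can
still spot its own token (a loop) without revealing the hop.  The
header layout and key handling below are assumptions for the sketch:)

    # Sketch of loop detection via a secret hashed token: a proxy adds an
    # opaque token derived from its private key and its own identity, and
    # later recognises that token without ever revealing which host it is.
    # Layout and key handling are assumptions, not a specified format.
    import hmac, hashlib

    PROXY_SECRET = b"per-proxy private key"          # never leaves this proxy

    def my_token(proxy_identity: str) -> str:
        return hmac.new(PROXY_SECRET, proxy_identity.encode(),
                        hashlib.sha256).hexdigest()[:16]

    def add_hop(forwarded_value: str, proxy_identity: str) -> str:
        token = my_token(proxy_identity)
        return f"{forwarded_value}, {token}" if forwarded_value else token

    def loop_detected(forwarded_value: str, proxy_identity: str) -> bool:
        return my_token(proxy_identity) in forwarded_value.split(", ")

    hdr = add_hop("", "proxy7.internal.example")
    print(loop_detected(hdr, "proxy7.internal.example"))   # True: we've seen it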

Discussion of the 5-second arbitrary timing and partial response.

> Jim Gettys: This problem is due to buggy clients not reading while
spewing data. The answer is to warn clients to do so.
> Roy: if the data is sent out faster than the TCP reset.
> Ted: Using a hack like this to solve the problem contradicts a
stated design goal (making HTTP implementable over multiple transport
protocols) by building into HTTP a solution to buggy implementations of
TCP.
> Alex: We must come up with a solution that works for TCP.
> Larry: You can put a multipart with unknown content-length, too! So
you'll still need the general fix.
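
(Ed: Jim's advice amounts to the pattern below: while sending a large
request body, keep draining the socket so an early server response is
read rather than lost.  A minimal select()-based sketch; the chunk size
and timeout are arbitrary:)

    # Sketch of "read while you spew": drain any early server response
    # while still sending the request body, so an early error reply is
    # seen instead of being lost when the connection is reset.
    import select

    def send_with_drain(sock, data, chunk=4096):
        received = b""
        sent = 0
        while sent < len(data):
            readable, writable, _ = select.select([sock], [sock], [], 1.0)
            if readable:
                early = sock.recv(4096)
                if not early:              # server closed the connection
                    break
                received += early          # e.g. an early error status line
            if writable:
                sent += sock.send(data[sent:sent + chunk])
        return received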

Alex: why can't you use a full URL when talking to the origin server?
A: Why try? Why send a full URL to a server that may reject it but does
nothing different with it?

Discussion of new methods:
* only applies to the new methods: OPTIONS, TRACE, and WRAPPED.
Larry: This is a long list of new methods. I've seen lots of distributed
file systems. This seems like a random selection of methods from file
systems of the past. If you want a kitchen sink, you're missing a lot. If
not, what's the rationale? HTTP doesn't have the power of ACLs,
etc. It's not good enough to put placeholders in a protocol
specification.
Phill: following Larry's points on portmanteau toolsets: remember,
there are an incredible number of methods being implemented by
CGI-bins; only OPTIONS and TRACE can be done that way. Sounds like
PATCH & friends can be done that way...
Roy: We should provide a consistent way for everyone to do it so that
everyone doesn't supply their own form to do these operations.
Phill: when I did CI/CO, I found what Larry found: you'll need a lot
more data to specify the total operation. 

Questions on PATCH:
Was it intended that PATCH accept an arbitrary data object? E.g.,
could replacing a b&w film with a colorized film be a patch?

Other comments:

Dave Kristol: Why do we have two mechanisms for streaming: multipart
and chunked TE? [Notetaker disagreed -- it's easier to use the chunked
TE -- real MIME boundaries are hard -- not amenable to fast processing]
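
(Ed: the notetaker's point follows from the chunk framing HTTP/1.1
defines: each chunk is prefixed with its size in hex, so no boundary
string has to be scanned for.  A small decoding sketch, with chunk
extensions and trailers omitted:)

    # Decode a chunked-encoded body: each chunk is "<hex-size>\r\n<data>\r\n",
    # terminated by a zero-size chunk.  Extensions and trailers are ignored
    # for brevity.
    def decode_chunked(raw: bytes) -> bytes:
        body, pos = b"", 0
        while True:
            eol = raw.index(b"\r\n", pos)
            size = int(raw[pos:eol].split(b";")[0], 16)   # drop chunk extensions
            if size == 0:
                return body
            start = eol + 2
            body += raw[start:start + size]
            pos = start + size + 2                        # skip trailing CRLF

    example = b"5\r\nhello\r\n6\r\n world\r\n0\r\n\r\n"
    print(decode_chunked(example))   # b'hello world'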

DMK: regarding TRACE, is it allowed to have a body? A: I can't
remember. DMK: if there is a body, is it exactly the same body
referenced, or perhaps PEPped, etc.?

Harald: HTTP 1.1 looks like a case of second-system syndrome:
excessive backwards-compatibility, and kitchen-sinking.

Roy confessed to having thrown out 4 new methods already.

Harald: I would suggest a very narrow focus on writing a basic
platform to negotiate upgrades.

Comment: note that locking/checkin/checkout is logically impossible.
================================================================
Stephen Zilles made a quick announcement:

Adobe will have a PDF demo this week that will use byte ranges via
Netscape 2.0b3 using URLs today, but partial GET next year. The Feb '96
release of PDF will also reveal the "hint" table. There is also a CGI
script to serve byte ranges.

Roy: Why is the rationale to modify the protocol, rather than define a
net-pdf type?

Steve: the tools for PDF are already in place ...
================================================================
After these discussions, the chairs presented the results of their
dinner planning session:

- The HTTP/1.0 draft will be revised to become an 'informational RFC'
  which describes common current practice. Paul Hoffman (maintainer of
  the list of HTTP servers and their features) will help.
  This is primarily an editorial task, although additional material
  may be added to list features that are widely but inconsistently
  implemented (e.g., accept: headers).

- The HTTP/1.1 draft will be reviewed independently by separate
  sub-groups. The sub-groups are chartered to review the HTTP/1.1
  draft for text related to their issue, and propose changes to the
  HTTP/1.1 draft that consist of either wording changes, or movement
  of major chunks of HTTP/1.1 to separate documents, as appropriate.

  The issues are:

   * Persistent connections
      (this contains all of the 1.1 proposals for keep-alive,
       and maintaining connections to avoid TCP startup costs.)
       Alex Hopmann, Simon Spero, Mike Cowlishaw, Andy Norman,
       Scott Powers, Brian Swetlund volunteered.

   * Cache-control and proxy behavior
       Ari Luotonen, David Morris, Jim Gettys volunteered.

   * Content negotiation
       Larry Masinter, Simon Spero volunteered.

   * Authentication
       Phill Hallam-Baker, Alex Hopmann, John Marchinoi,
       Stefek Zaba, Scott Powers volunteered.
       (This issue must be coordinated with WTS working
       group drafts.)

   * State management
       Dave Kristol, Rohit Khare, Scott Powers volunteered.

   * Range retrievals
       Stephen Zilles, Ari Luotonen volunteered.

   * Extension mechanisms
       Paul Hoffman, Rohit Khare, Daniel LaLiberte,
       Simon Spero, Phill Hallam-Baker volunteered.

   * other new methods and header features
       (no list of volunteers for this was gathered)

- Subgroups should conclude their work by Jan 96, in time to publish
  their conclusions (or lack thereof) to the rest of the WG by
  February 96, so that new internet draft(s) for HTTP/1.1 will be
  ready in March 96 and ready for Proposed Standard RFC status by June
  96.

- Subgroups should document meetings, progress, etc. and are
  encouraged to meet regularly, by conference, phone, etc.

- Any proposed HTTP/1.1 features not in HTTP/1.0 for which there is no
  consensus will revert to HTTP/1.0 status in 1.1 and be considered
  for inclusion in HTTP/1.2.

Alex wondered where 'logic bags' fit, and Larry suggested it should be
handled by the cache control/proxy group.
