Minutes, 3rd attempt

Apologies for any duplicates, but for some reason, the messages do not
seem to reach the mail archive. Here is the text of the minutes only.

Minutes from the Birds of a Feather session about Content Negotiation
Headers in HTTP
Chair and minute-taker: Johan Hjelm, Ericsson

Introduction by the chair: The problem is that we need to create a
richer description of the capabilities of the device than what the
existing headers in HTTP allow for, if we want to do content adaptation
and content negotiation. Also, the latencies involved in multiple-pass
negotiations are a killer, especially in low-bandwidth networks. It is
better to include a little more information in the request, and allow
the server to create or select adapted content which can be returned in
the response. To accomplish this, we need a new header in HTTP that can
allow the device to declare its capabilities in the request using the
richer description formats we now have. There are also proposals for the
same type of headers in other protocols (e.g.
Draft-nishigaya-sip-ccpp-00 for SIP) but this BOF is restricted to HTTP
only.
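
As an illustration of the single-pass idea (the header name and profile
reference below are purely hypothetical, not an agreed syntax), the
request itself carries a pointer to a capability description, so the
server can answer with adapted content in one round trip:

    GET /news HTTP/1.1
    Host: origin.example.com
    Profile: "http://vendor.example.com/profiles/small-screen-phone"

    HTTP/1.1 200 OK
    Content-Type: text/html

    ...content selected or generated for the declared capabilities...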

Larry Masinter: While we are waiting for the next speaker, I could give
a bit of history. A few years ago, a browser manufacturer started
putting screen size in the HTTP UA header field. This was one impetus
behind the CONNEG working group. But there was no enthusiasm from
browser makers, since they wanted to do content adaptation through
active content. Now, the field has broadened, with new devices being
used for web access. And CONNEG has developed far enough for us to move
forward. We should do it now, since there is no real protocol until the
content negotiation is in place.

Patrick Feng, W3C: The basic idea behind the Platform for Privacy
Preferences (or P3P for short) is to automate the discovery of privacy
policies on web sites: web sites publish their privacy policy in a
standard form. The P3P specification defines schema, vocabularies, etc.
The group also tried to mechanize the transfer of policies using HTTP.
This is where they go beyond standard headers. P3P declares the header
using the HTTP extensions framework, using a URI to point to a policy
reference file, which declares where to go for the full policy. Another
method is to use a well-known location (like robots.txt). Using a
reference file, you can retrieve the policy and cache it once. The
browser doesn't have to fetch the entire policy, only the 40 to 50 bytes
of the reference file, which is a performance optimization.
There has been discussion in the working group about whether the
extension framework is useful. It needs two lines, while you actually
only need one extra line. The advantages are that future versions could
use it, that the framework is extensible, and that you avoid collisions
with other headers. There have been religious discussions about using it
or not, since it is an experimental RFC. So far, there are people on
both sides, and no clear winner. The outcome is likely to have little
effect on implementors. The group is leaning towards declaring its own
header, but may use both methods.
There were early prototype demos in New York in June. The early
prototypes implement the extension framework, which was reflected in the
then-latest working draft. It is probable that those who have
implemented it will not take it out.
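
As a rough sketch of the "declare its own header" approach (the exact
header name and attribute syntax were still under discussion at the
time), a response carries a pointer to the small policy reference file,
which the browser fetches once and caches:

    HTTP/1.1 200 OK
    Content-Type: text/html
    P3P: policyref="http://shop.example.com/w3c/p3p.xml"

The well-known-location alternative works like robots.txt: the browser
simply fetches /w3c/p3p.xml with no extra header, while the extension
framework variant wraps the same reference in an Opt declaration.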

Hidetaka Ohto, W3C/Panasonic:
Described CC/PP (Composite Capability/Preference Profiles), which allows
for a more fine-grained declaration of the device, its capabilities, and
the user's preferences than existing content negotiation mechanisms. It
is also more extensible. While the initiative came from the Mobile
Access Interest Group in the W3C, it is not restricted to a specific
class of devices, but can work equally well for web browsers on any
device, including PCs, mobile devices and TV sets.
Profiles are supplied by vendors, based on vocabularies created by
themselves or by other organizations.
The CC/PP working group has just released four working drafts,
describing the format, which is independent of the protocol. The working
group in the W3C is not chartered to work on the protocol.
Earlier work included the CC/PP Exchange Protocol based on the HTTP
extension framework, which is fairly independent of the format of the
device capability description; other formats can be applied. One
requirement was to reduce overhead, which meant making changes to the
active profile lightweight and cacheable. The protocol also needed to
take intermediaries, like proxies, into account. A request is sent with
profile information as an indirect reference to the profile (either in a
cache or at a remote location). A further optimization is that if the
user agent changes a value, only the change is sent. The protocol can
also handle more complex cases, such as proxies and gateways, for
instance to add adaptive capabilities.
This specification exists at http://www.w3.org/TR/NOTE-CCPPexchange, but
there is presently no IETF documentation. Hidetaka Ohto will submit it
as an informational RFC.
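
For illustration, a request using the exchange protocol's
extension-framework headers looks roughly like this (the namespace
number, profile URI and diff content are made-up examples; the NOTE
above gives the exact syntax):

    GET /news.html HTTP/1.1
    Host: www.example.com
    Opt: "http://www.w3.org/1999/06/24-CCPPexchange"; ns=19
    19-Profile: "http://device.example.com/profiles/XPhone", "1-abcDEF123=="
    19-Profile-Diff-1: <?xml version="1.0"?><RDF ...>...changed value only...</RDF>

Only the changed values travel in the Profile-Diff header; the bulk of
the profile is fetched via the indirect reference and can be cached.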


Graham Klyne, Content Technologies:
The work in the CONNEG working group is wrapping up at the moment. Much
of what I say here also comes from an RFC (missed the number, it is in
his presentation). Content negotiation needs to cover how the receiver
expresses capabilities, what the sender can send (may be implicit), and
a protocol by which the information is exchanged (there is a more
elaborate explanation in the presentation).
(Larry Masinter jumps in:) There is a difference between capabilities
and characteristics. Characteristics are of the data resource, what it
uses.
(Graham continues) CONNEG and CC/PP are both just formats, which address
slightly different areas, although there is common ground. CONNEG
tackles some things which CC/PP does not, and the reverse. Syntax
differences can be overcome. CONNEG is very specific in dealing with
media features; in part this was a requirement from the IESG. CC/PP has
no such constraint.
In principle, content negotiation depends on metadata describing the
receiver, and possibly some aspects of the sender, such as which
variants exist. It is important that the meaning of the metadata not
depend on the protocol exchanging it; a different way of putting this is
to not allow the meaning of the metadata to change. Symmetry of
mechanisms may reduce round trips. Consider which party makes the final
choice.
CONNEG also defined headers for the MIME context: a content features
header and a media features header. Any protocol that uses a MIME
transport can pass these from one MIME transport to another. How they
are used is not defined; it is application logic, above the transport.
This is based on the idea that you do set interaction on some level
(which CONNEG does, but CC/PP does not). The basic way to do this is by
applying application logic.
(in answer to a question from the floor): Authentication? That is beyond
the scope of the work in CONNEG, something that belongs in the protocol.
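
For reference, the CONNEG headers mentioned use the feature-expression
syntax of the media feature work; roughly (the particular features below
are only examples):

    Sender, describing a body part:
      Content-features: (& (color=binary) (dpi=200) (paper-size=A4))

    Receiver, describing its capabilities:
      (& (pix-x<=640) (pix-y<=480) (color=limited))

Matching the two expressions (the set interaction mentioned above) is
done by application logic, not by the transport.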

Paul Eastham, Network Appliance:
The ICAP protocol is intended to address what happens once you have
identified a mismatch between  client and  server capabilities. For
instance, how do you do transcoding? It does not deal with client
capabilities, nor the rules for when they are used.
It is also at a really early stage, with a beta specification published
on the web site as a proof of concept. The future is unclear, as it has
to do with issues concerning interception proxies, an entity not allowed
in HTTP. It is on the agenda of WREC for Friday morning.
Sample applications are content adaptation/negotiation, virus checking,
and cookie stripping.
This is not necessarily done in a transparent proxy, but can be done in
a separate ICAP server. Today, it uses stock HTTP services between the
proxy and the ICAP resource; the intention is not to modify anything, so
it uses port 80. This means specifying the involvement of the ICAP
entity through add-on headers (which could say "it is OK to do things to
my request, I don't care"). This implies that the origin server does not
have to be modified, which is a motivation for what he describes as an
early attempt. But it is not certain that HTTP will be used in the
future.
If people have trouble getting involved (i.e. the marketing piece on the
web server gets in the way), contact me.
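
Since the early beta runs over stock HTTP, the add-on header mentioned
above would look something like the sketch below; the header name and
value are hypothetical, shown only to illustrate the "it is OK to do
things to my request" declaration:

    GET /page.html HTTP/1.1
    Host: origin.example.com
    X-Adaptation-Allowed: any

The intercepting proxy would then hand the request or response to the
ICAP server for transcoding, virus checking or cookie stripping, and
forward the modified result.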

Discussion after the presentations circled around the possibility of an
"accept-features" header in HTTP. There has been some discussion of this
in CONNEG, but it was not conclusive, and it is not certain what the
semantics of the description should be. Vocabulary and transport are two
different questions, which need to be separated.
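
One possible shape for such a header, if it reused the CONNEG feature
expressions, is sketched below; this is purely hypothetical, and pinning
down these semantics is exactly the open question:

    GET /weather HTTP/1.1
    Host: www.example.com
    Accept-Features: (& (pix-x<=160) (pix-y<=120) (color=limited))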


Workplan:
First, try to lay out specific use cases: the range of capabilities,
browsers, etc. for web browsing using HTTP, characterized in CC/PP and
CONNEG. This should be an individual draft which can be used as input
to the working group charter.
Then, produce a working group charter, which clearly defines the
deliverables.

Action items:
Prepare use case document draft, take to mailing list for discussion:
Johan Hjelm
Prepare draft charter, take to mailing list for discussion: Johan  Hjelm

Submit the CC/PP exchange protocol as an informational RFC: Hidetaka
Ohto




--

ERICSSON*RESEARCH*ERICSSON*RESEARCH*ERICSSON*RESEARCH*ERICSSON*RESEARCH

     Johan HJELM, Ericsson Research, T/KA User Applications Group
                   johan.hjelm@era-t.ericsson.se
       GSM Mobile +46-708-820315 (works everywhere but in Japan)

                W3C Advisory Committee Representative
                      Chair CC/PP Working Group

                   Read more about my recent book
               Designing Wireless Information Services
                 http://www.wireless-information.net

      OPINIONS EXPRESSED ARE PERSONAL AND NOT THOSE OF ERICSSON

ERICSSON*RESEARCH*ERICSSON*RESEARCH*ERICSSON*RESEARCH*ERICSSON*RESEARCH
