Re: Proposal on removing Content Negotiation from http 1.1

On Wed, 24 Jan 1996, Peter J Churchyard wrote:
> I believe that very little content negotiation takes place

Content negotiation is a subtle science - it's much sexier to talk about 
security, new HTML tags, and client-side scripting than it is to talk 
about supporting a framework for the graceful evolution of the web.  Why 
should a browser company make it easy for people to avoid having to say 
"you must be using Browser X to view these pages!"?  Indeed.

It is as much an educational problem as a technical one.  Yes, the 
mechanism described in 1.0 is only half-way complete - that will be fixed 
in 1.1.  The average user isn't even aware that content negotiation could
provide a smooth solution to the problem of being able to simultaneously 
be on the bleeding edge of web technology *and* provide a usable set of 
pages for all.  I have been trying to battle that lack of awareness. [1]

While content negotiation currently doesn't work for evolving HTML, we 
have been using it to serve inlined images in the formats each browser 
supports, and this works for any browser vendor who takes their 
Accept: headers halfway seriously.
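
To make that concrete, here's a sketch of the sort of exchange involved 
(the URL and header values are purely illustrative, not from any real 
site).  A browser that can render JPEG says so, and the server hands back 
its best matching representation of the same URL:

    GET /images/logo HTTP/1.0
    Accept: image/jpeg, image/gif, image/x-xbitmap

    HTTP/1.0 200 OK
    Content-Type: image/jpeg

    ...JPEG data...

A browser whose Accept: only lists image/gif gets the GIF rendering of 
the same URL instead - no second round trip, no User-Agent guessing.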

> The assumption that current content negotiation makes is that there is
> no prior knowledge. In most cases a user will have selected a specific 
> URL in a previous page they have retrieved.

Hmm, maybe some big picture is needed.  Indulge me.

A useful dataspace is one that represents information at a very high level -
for example, one that favors the structure of information over its
presentation (SGML), or one whose addresses name information rather than a
particular rendering of that information.  http://www.organic.com/ should not be
considered as "the HTML home page for Organic" - just "the home page for
Organic", and whether you visit that link using an HTML 3.0 browser or an
HTML 2.0 browser or a VRML browser or a PDF browser or whatever, you should
be able to get some representation of that information.  There is always 
some "high level" representation of that data - whether it exists as an 
SGML document or a thought in our heads - and the "rendering" of that 
information into a particular content-type suitable for public 
consumption is part of the art of communication.  

So, "content negotiation" is a necessary part of any type of information 
communication - as the communicator, I need to know how to speak to 
you, so I can get my message through.  I'm willing to accept a certain 
amount of entropy to communicate this - a GIF might be an 
acceptable degradation from a JPEG, an HTML 2.0 document might be an 
acceptable degradation from an HTML 3.0 document, if the alternative was 
not getting the information at all (or getting it in a format you can't 
understand).
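
In header terms - a sketch only, using the quality values from the 
current drafts with arbitrary numbers - that willingness to degrade 
might look like:

    Accept: image/jpeg, image/gif; q=0.5, text/html, text/plain; q=0.3

i.e. give me JPEG or HTML if you have it, but a GIF or plain text is 
still better than nothing at all.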

The users out there clearly want content negotiation.  Because browser 
authors don't take content negotiation seriously, many very complex 
systems have been built on top of the User-Agent variable - the practice 
is so widespread that at least three browsers have decided they need to 
call themselves "Mozilla" in order to get content designed for Netscape 
1.1.  
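
The mechanism behind that is depressingly simple - a CGI script or server 
config looks for the magic word in something like (the string below is 
made up, not any particular browser's):

    User-Agent: Mozilla/1.22 (compatible; SomeBrowser/2.0)

and decides which set of pages to send, so every vendor ends up claiming 
to be Mozilla.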

> This proposal removes large areas that are causing concerns in the current
> review process 

Actually, the problem being faced is not content negotiation at the MIME 
content-type level - there hasn't been much dispute on that count.  The 
problem has been to determine how content negotiation can handle the 
microrevisions of a particular content type, i.e. adding tags piecemeal 
to HTML.  I outlined this problem and three solutions in a post to 
www-talk a few weeks ago [2] - to which I distressingly didn't get much 
response.  The report from the conneg subgroup will speak some more to 
this, I predict.
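
One direction that has been floated is hanging a parameter off the media 
type itself, so a browser could say something like (a sketch only - this 
is one candidate syntax, not a settled decision):

    Accept: text/html; level=3, text/html; level=2; q=0.7

i.e. "I prefer HTML with the 3.0 extensions, but 2.0 HTML is acceptable."  
Whether a single 'level' number can keep up with tags being added 
piecemeal is exactly the open question.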

> No intelligent server mechanism is as intelligent as the dumbest human...

The goal is actually to discourage as much site-specific intelligence as
possible, or at least to make it unnecessary - and to make the requirements
on proxies easy as well.  The intelligence definitely belongs at the client. 
User-Agent-based negotiation is unacceptable as a baseline, 
though we realize some people will always want to stay 
bugwards-compatible.

> If you feel that it is a must then the simplest case is to return a
> precanned entity that in some standard format lists the URL's and the 
> content types each represents and then let the client do all the work.

That's "reactive negotiation", and is enabled by the URI: header having some
pretty nifty semantics to it.  However, many people feel that multiple
representations of content will be the norm instead of the oddity,
particularly if servers can perform conversions when needed, so the overhead
of always doing reactive negotiation would be punishing.  And thus,
"preemptive negotiation" is needed, so that the browser can have some idea of
what kinds of information are acceptable.  Reactive negotiation is a safe 
fallback, and exactly why clients don't have to list *everything* they 
support in their Accept: headers (and definitely why they shouldn't send 
Accept: */*), but the norm should be to be preemptive.
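
As a sketch of the reactive side (header syntax illustrative only - the 
exact shape of the URI: header is still being worked out in the drafts), 
the server answers with a list of variants and lets the client choose:

    GET /logo HTTP/1.0

    HTTP/1.0 300 Multiple Choices
    URI: <logo.gif>, <logo.jpeg>
    Content-Type: text/html

    ...a human-readable list of the variants...

Fine as a fallback, but doing that for every request is the round-trip 
overhead mentioned above.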

Look for reference implementations soon on the server side.  On the browser
side... well, you can see one browser company's recommended solution to
content negotiation here [3].

	Brian


[1] - <URL:http://www.organic.com/Staff/brian/cn/>
[2] - <URL:http://www.eit.com/www.lists/www-talk.1996q1/0018.html>
[3] - <URL:http://melroseplace.com/shockwave/>

--=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=--
brian@organic.com  brian@hyperreal.com  http://www.[hyperreal,organic].com/
