Re: Call for compatibility testers

>>There is a difference between being compatible with older versions
>>of HTTP and being compatible with old browsers that do not implement
>>any version of HTTP correctly.
> 
> HTTP/1.0 is an informational standard.  As far as I know, this means
> that the IETF does not have an opinion on whether lynx implements
> HTTP/1.0 correctly.

I need to correct this misunderstanding. As far as the IETF (and W3C)
is concerned, RFC 1945 is the only definition of HTTP/1.0 -- there simply
is no other reference for that protocol.  It is an "informational"
specification rather than a "proposed standard" because we intend
HTTP/1.1, a *different* but compatible protocol, to be the proposed
standard for HTTP (a family of protocols).  A specification
doesn't need to be a proposed standard in order to be the definition
of that protocol.

> You obviously have strong opinions about lynx not being a HTTP/1.0
> application, but these are not universally held.  If the conneg draft
> would continue to use the 300 code (without providing an adequate
> escape hatch) now that the lynx interoperability problems are known,
> this would effectively encode your strong opinions in the spec.  I
> don't think that would be appropriate.

I don't see how any application which fails to interpret a response
simply because it doesn't understand a valid response code (a field which
is and has always been extensible, and thus always intended to result in
new codes being unrecognized by older applications) can be considered a
valid implementation of HTTP/1.0.  It is a bug which needs to be fixed.
HTTP cannot be concerned about such bugwards-compatibility simply because
it is impossible to extend HTTP at all if we don't assume some minimal
level of functionality on the part of the applications.  Implementations
do need to be concerned with such bugwards-compatibility, but they do
so by means already established by the protocol (User-Agent and Server
identification and avoidance), not by changing the protocol every time
somebody implements it wrong.
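
To make the extensibility point concrete, here is a minimal sketch (my
own illustration, in Python, not part of any spec text) of the rule that
an application must understand the class of a status code from its first
digit and treat an unrecognized code as equivalent to the x00 code of
that class, rather than refusing to handle the response:

    # Codes this particular client was written to recognize; the exact
    # set is hypothetical and does not matter for the point being made.
    KNOWN_CODES = {200, 201, 202, 204, 301, 302, 304,
                   400, 401, 403, 404, 500, 501, 502, 503}

    def effective_status(code):
        """Return the status code the client should act upon."""
        if code in KNOWN_CODES:
            return code
        # Fall back to the x00 code of the same class, so an unknown
        # 3xx (such as 300 from a negotiating server) is still handled
        # as a redirection-class response instead of being rejected.
        return (code // 100) * 100

A client built this way copes with 300, or with any future code, without
ever having heard of it; that is the minimal level of functionality I mean.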

> [...]
>>  The only correct way to deal with such browsers is to
>>exclude them on the basis of their User Agent field, 
> 
> Using this compatibility hack would be very expensive; you would have
> to disallow proxies from ever sending a cached list (300) response to
> a non-negotiating user agent, which means that you will have to handle
> all these requests yourself.  

Yes, in the short run.  However, we are concerned with the long run, and
in the long run we would expect such clients to be fixed.  Adding hacks
to an implementation is okay because all implementations are temporary;
adding hacks to the protocol means that correctly implemented applications
will suffer forever.
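
For what it's worth, here is a rough sketch (purely illustrative, with
made-up names and a trivially simple selection rule) of what User-Agent
identification and avoidance looks like on the origin server: requests
from clients known to mishandle 300 get a single variant chosen by the
server, and everyone else gets the normal list response.

    # Hypothetical list of clients known to choke on 300 responses.
    BROKEN_FOR_300 = ("Lynx/2.",)

    def respond(request_headers, variants):
        """variants: list of (uri, media_type) pairs for the resource."""
        ua = request_headers.get("User-Agent", "")
        if any(ua.startswith(prefix) for prefix in BROKEN_FOR_300):
            # Avoidance: pick a variant server-side so the client never
            # sees the unfamiliar status code.  Picking the first one is
            # just a placeholder for a real selection algorithm.
            uri, media_type = variants[0]
            headers = {"Content-Type": media_type, "Content-Location": uri}
            return 200, headers, b"...variant body..."
        # Normal case: send the list and let the client choose.
        body = "\n".join(uri for uri, _ in variants).encode()
        return 300, {"Content-Type": "text/plain"}, body

The hack lives entirely in the implementation and can be deleted the day
the broken clients are gone, which is exactly the point.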

> The current draft does not allow you to use Vary to optimize here:
> proxies ignore the Vary header in cached list responses when using the
> network negotiation algorithm.  The spec could be extended with more
> vary rules, but I don't think this is the way to go.

I'll have to check your spec, but you need to be careful in considering
the overlapping purposes of Vary and Alternates -- Alternates does not
act as a replacement for Vary.  Vary details the dimensions of variance
for the contents of that response; Alternates details alternative
representations of the response for 2xx, 4xx (except 406), and 5xx
responses, and alternative representations of the requested resource
for 1xx, 3xx and 406 responses (assuming you are expecting it to be used
with redirections as a complex form of the Location header field). 
Therefore, there is no reason for proxies to ignore Vary on 1xx, 3xx and 406
responses (nor can they do so if they wish to be compliant with HTTP/1.1).
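
In other words, a proxy must apply the normal Vary rule to a cached 300
just as it would to a cached 200.  A small sketch (my own, with an
illustrative cache structure) of that rule:

    def can_reuse_cached_response(cached, new_request_headers):
        """cached holds the Vary field names and the headers of the
        request that originally produced the stored response."""
        for field in cached["vary"]:
            if field == "*":
                return False   # Vary: * -> not reusable without contact
            if (new_request_headers.get(field) !=
                    cached["request_headers"].get(field)):
                return False
        return True

So a 300 list response stored with, say, Vary: Accept, Accept-Language is
only handed back from the cache to requests whose Accept and
Accept-Language fields match those of the request that produced it.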

Does that solve the problem?


 ...Roy T. Fielding
    Department of Information & Computer Science    (fielding@ics.uci.edu)
    University of California, Irvine, CA 92697-3425    fax:+1(714)824-4056
    http://www.ics.uci.edu/~fielding/
