- From: Jim Seidman <jim@spyglass.com>
- Date: Tue, 8 Aug 95 15:20:52 -0500
- To: Roy Fielding <fielding@beach.w3.org>
- Cc: http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com
I have some concerns with how negotiation works in draft 01 as compared to draft 00. Before I list those, when were all of these changes discussed on the mailing list? I looked through all of the archives and couldn't find any discussion of this. (Nor could I find the minutes from the Danvers IETF, but even if it was discussed there, it still should have been on the mailing list.) Anyway, here are my concerns:

1. The effect of a request not having an Accept-Encoding or Accept-Charset header has flipped completely from 00 to 01. This seems to make the doubtful assumption that current client implementations which don't produce these headers can accept any encoding or character set.

2. All of the Accept-* headers were defined in 00 as requiring at least one item; in 01 they can have zero. While RFC 822 allows this, under 00 every HTTP-header had a field-body, and I wouldn't be surprised if some header parsers choke on an empty one. Requiring empty "Accept-Encoding:" and "Accept-Charset:" fields to describe the most common case for a client could break many existing servers.

3. The spec allows you to specify "Accept: " but doesn't say what the effect is. My reading is that it means no MIME type is acceptable, but it's somewhat ambiguous.

4. The Accept example in 8.1 shows a "text/html;version=2.0" and a "text/html;level=3", but section 3.4 does not specify a list of parameters. Was the use of both "version" and "level" a deliberate attempt to show that any parameter name is valid? It's a little scary to me that someone could create an arbitrary parameter name and expect the server to parse it. (Also, in this case we need to specify a list of reserved attributes, like q, ql, mxb, etc.) We also need to specify what "more specific" means.

5. If qe and qc default to 0.001 instead of 0, do we provide a client with any way at all to say that it doesn't want encodings and character sets it can't handle?
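To make point 5 concrete, here is a small sketch (mine, not from the draft) of how a server scoring encodings under draft 01's default would behave. The parsing helper and the 0.001 constant are my reading of the draft, not spec text:

```python
# Sketch of draft-01-style encoding selection. Under draft 01 an encoding
# absent from Accept-Encoding defaults to qe=0.001; under draft 00 the
# effective default was 0, i.e. "not acceptable".
DEFAULT_QE = 0.001  # draft 01; change to 0.0 for draft-00 behavior

def parse_accept_encoding(header):
    """Parse e.g. 'gzip;q=0.5, identity' into {'gzip': 0.5, 'identity': 1.0}."""
    prefs = {}
    for item in header.split(","):
        parts = [p.strip() for p in item.split(";")]
        coding, q = parts[0], 1.0
        for p in parts[1:]:
            if p.startswith("q="):
                q = float(p[2:])
        if coding:
            prefs[coding] = q
    return prefs

def qe(prefs, coding):
    # The robot's problem: an unlisted coding still scores 0.001, not 0,
    # so the client has no way to rule it out entirely.
    return prefs.get(coding, DEFAULT_QE)

robot_prefs = parse_accept_encoding("identity")
assert qe(robot_prefs, "compress") > 0  # compressed content is still selectable
```

Since every encoding scores above zero, a server with only a compressed variant will send it rather than return "406 none acceptable", which is exactly the behavior a robot cannot opt out of.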
If a web-searching robot, for example, says that it can't handle compressed files then it probably really means that, and would probably prefer a "406 none acceptable" to something it can't receive. Yet under 01 there's no way for the robot to avoid having such content sent to it.

6. The URI description in 8.28 still doesn't address the issue I brought up back in June: how would a cache practically use the information presented in the URI header as described? Since the URI field doesn't have to enumerate all of the variants that are available, knowing what varies doesn't help unless the next request has an identical entity header for that dimension. I bring this up because 8.28 says "When the caching proxy gets a request for that URI, it must forward the request toward the origin server if the request profile includes a variant dimension that has not already been cached." In practice, as currently specified, the request must be forwarded unless a request with an identical profile has been made.

As an example, suppose that someone sends this request to a proxy server:

    GET http://www.bar.com/foo HTTP/1.0
    Accept: image/gif;q=0.5, image/jpeg

The proxy server sends it along to www.bar.com, which returns:

    200 OK
    Content-type: image/jpeg
    URI: <foo>; vary="type"
    [etc.]

If someone then comes along and requests of the proxy server:

    GET http://www.bar.com/foo HTTP/1.0
    Accept: image/gif;q=0.8, image/jpeg

the proxy server needs to send this along to www.bar.com as well, since it doesn't know whether there's a gif version with a higher qs than the jpeg version which might be optimal for this request. Similarly, if someone requested:

    GET http://www.bar.com/foo HTTP/1.0
    Accept: image/jpeg, image/xbm

the proxy won't know if an xbm version is available.

--
Jim Seidman, Senior Software Engineer
Spyglass Inc., 1230 E. Diehl Road, Naperville IL 60563
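The cache-matching argument in point 6 can be sketched as follows. This is my hypothetical model of a proxy's cache, not anything defined in the draft: since vary="type" names a dimension but not the set of available variants, the only safe cache key includes the entire Accept field, so any change to it forces a trip to the origin server.

```python
# Hypothetical proxy cache: because URI: <foo>; vary="type" doesn't
# enumerate the available variants, a cached entry can only be reused
# when the new request's Accept field is identical to the one that
# originally fetched the entity.
cache = {}

def cache_key(url, accept):
    # The whole Accept field, not just "it varies by type".
    return (url, accept)

def store(url, accept, response):
    cache[cache_key(url, accept)] = response

def lookup(url, accept):
    # None means: forward the request to the origin server.
    return cache.get(cache_key(url, accept))

store("http://www.bar.com/foo", "image/gif;q=0.5, image/jpeg", "jpeg entity")

# Same URL, but q=0.8 instead of 0.5: a gif variant with a higher qs
# might now be optimal, so the proxy must forward the request.
assert lookup("http://www.bar.com/foo", "image/gif;q=0.8, image/jpeg") is None
```

In other words, the "variant dimension that has not already been cached" rule in 8.28 degenerates to exact-profile matching, which is the complaint above.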
Received on Tuesday, 8 August 1995 13:24:34 UTC