- From: David Morris <dwm@xpasc.com>
- Date: Tue, 31 Mar 2009 13:37:50 -0700 (PDT)
- cc: HTTP Working Group <ietf-http-wg@w3.org>
- Message-ID: <Pine.LNX.4.64.0903311329250.10013@egate.xpasc.com>
On Tue, 31 Mar 2009, Adam Barth wrote:

> On Tue, Mar 31, 2009 at 12:51 PM, Mark Baker <distobj@acm.org> wrote:
>> On Tue, Mar 31, 2009 at 3:37 PM, Adam Barth <w3c@adambarth.com> wrote:
>>> When different user agents use different sniffing algorithms, content
>>> authors pay a large cost, both in terms of compatibility and in terms
>>> of security. For user agents that wish to perform sniffing, I think
>>> we'd be doing the Web a service by specifying which algorithm they
>>> should use.
>>
>> I agree, which is why I suggested a link from 2616bis to the
>> algorithm. Do you feel that to be insufficient? If so, why?
>
> I don't have a strong opinion about which document should contain the
> algorithm, but I think we're better off making the algorithm normative
> (for those agents that wish to sniff) rather than informative. That
> will help prevent developers of sniffing user agents from implementing
> divergent sniffing algorithms.

I disagree. Encoding what is essentially a heuristic algorithm, one that will need to change as content types evolve, into a standard is the wrong thing to do, certainly in the HTTP standard.

I recall a proposal from some months ago for some kind of flag that essentially said "believe what I say or reject my content"; sniffing not allowed. Something like that makes sense (see the sketch below). Sniffing, if documented or standardized at all, needs to live in a different document.

Sniffing started because of incorrect labeling of response content. Getting the engineers who felt they needed to sniff in the first place to limit themselves to a common algorithm seems unlikely, and following a highly static process in a place as dynamic as the web makes no sense to me.

Dave Morris
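The flag recalled above did eventually ship: Microsoft's X-Content-Type-Options: nosniff response header, introduced with IE8, tells the browser to trust the declared Content-Type rather than sniff. Below is a minimal sketch of how a user agent might honor such a flag; the header name is real, but the surrounding logic and the toy sniff() fallback are illustrative assumptions, not taken from any specification.

```python
def effective_content_type(headers: dict[str, str], body: bytes) -> str:
    """Decide which media type to honor for a response (sketch)."""
    declared = headers.get("Content-Type", "").split(";")[0].strip()
    nosniff = (
        headers.get("X-Content-Type-Options", "").strip().lower() == "nosniff"
    )

    if nosniff and declared:
        # The server asked to be believed: use the declared type
        # verbatim, even if the payload looks like something else.
        return declared

    # Otherwise fall back to a heuristic, UA-specific sniffing pass.
    return sniff(body, declared)


def sniff(body: bytes, declared: str) -> str:
    # Toy stand-in for a real sniffing algorithm: treat a payload
    # that starts with '<' as HTML, otherwise keep the declared type.
    if body.lstrip()[:1] == b"<":
        return "text/html"
    return declared or "application/octet-stream"
```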
Received on Tuesday, 31 March 2009 20:38:33 UTC