- From: Adam Barth <w3c@adambarth.com>
- Date: Tue, 31 Mar 2009 13:55:07 -0700
- To: ietf-http-wg@w3.org
On Tue, Mar 31, 2009 at 1:37 PM, David Morris <dwm@xpasc.com> wrote:
> I disagree ... encoding what is essentially a heuristic algorithm which will
> need to change as content types morph into standard status is the
> wrong thing to do. Certainly in the HTTP standard.

In practice, sniffing algorithms tend to calcify because authors create content that works in one or more popular user agents. Browser vendors change their sniffing algorithms only after careful consideration.

> I recall some months ago a 'proposal' for some kind of flag which
> essentially said believe what I say or reject my content .. sniffing not
> allowed. Something like that makes sense.

Yes. Internet Explorer 8 will refrain from sniffing if an HTTP response contains a certain header. I can provide more details about how this works if you like.

> Getting engineers who needed to sniff in the first place to limit
> themselves to a common algorithm seems unlikely.

I agree that converging existing user agents on a common sniffing algorithm will take time. We've made some progress, but it's a long road.

> To even follow a highly static process in a dynamic place like the web
> makes no sense to me.

This statement appears to argue against all Web standards.

Adam
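[Editor's note: the message above does not name the header. The mechanism IE8 shipped is, in practice, `X-Content-Type-Options: nosniff`; the sketch below shows a server attaching it to a response so the declared Content-Type is honored rather than sniffed. The helper name is illustrative, not from the message.]

```python
# Sketch: opting a response out of content sniffing.
# Assumes the opt-out header is X-Content-Type-Options: nosniff
# (the header IE8 shipped; not named in the message above).

def build_headers(content_type: str) -> dict:
    """Return response headers that declare a type and opt out of sniffing."""
    return {
        "Content-Type": content_type,
        # "Believe what I say or reject my content": the user agent should
        # trust the declared Content-Type instead of sniffing the body.
        "X-Content-Type-Options": "nosniff",
    }

headers = build_headers("text/html; charset=utf-8")
print(headers["X-Content-Type-Options"])
```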
Received on Tuesday, 31 March 2009 20:55:59 UTC