- From: Larry Masinter <masinter@adobe.com>
- Date: Sun, 31 May 2009 10:22:17 -0700
- To: Adam Barth <w3c@adambarth.com>
- CC: "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>
In reply to: On Sat, May 30, 2009 at 6:06 PM, Larry Masinter <masinter@adobe.com> wrote:

>> 1. Isn't the incidence of server misconfiguration far less than 10 years
>> ago? Can you provide more evidence that this problem is as significant as
>> it once was?

Adam replied:

> I don't have any data to compare current server behavior with
> historical server behavior, but sniffing is required to process
> approximately 1% of HTTP responses correctly.

In the interest of monitoring this, and possibly removing content-type sniffing in the future, is it possible to publish (and reference) the methodology used? I have heard at least one suggestion that this number might be inflated by HTML pages that were intentionally labeled as text/plain. Also, the proportion of mislabeled HTTP responses on the searchable Internet may differ from that of HTTP responses on the "private" Internet behind firewalls.

I don't want to question whether content sniffing is appropriate now, but this argument seems to rest on current behavior, which may be correctable in the future.

As far as conformance requirements go, I agree that it shouldn't be necessary to mandate content-type sniffing as a normative requirement for HTML user agents. Rather, specific classes of user agents should be advised that they MAY wish to follow this behavior IF they wish to remain compatible with the deployed infrastructure, to the extent that a significant proportion of the deployed Internet remains a concern for clients.

I see no justification whatsoever for allowing conforming user agents to sniff types for new elements such as <video>, or for encouraging such behavior, which just opens the door to whole other categories of spoofing. Certainly this behavior is not represented by any deployed infrastructure.

Larry
--
http://larry.masinter.net
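
P.S. For concreteness, a minimal sketch (in Python) of the kind of sniffing under discussion: a user agent receives a response labeled text/plain and peeks at the first bytes to decide whether it is really HTML. The marker list and prefix length below are invented for illustration only; this is not the algorithm from Adam's draft.

    # Hypothetical illustration of text/plain-to-HTML sniffing.
    # The markers and the 512-byte window are assumptions for this
    # sketch, not values taken from any specification.
    HTML_MARKERS = (b"<!doctype html", b"<html", b"<head", b"<body",
                    b"<script", b"<table")

    def effective_type(body: bytes, declared: str = "text/plain") -> str:
        """Return the media type a sniffing user agent might act on."""
        if declared != "text/plain":
            return declared  # only sniff the case discussed above
        prefix = body[:512].lstrip().lower()
        if any(prefix.startswith(m) for m in HTML_MARKERS):
            return "text/html"  # override the server's (wrong) label
        return declared

    # A misconfigured server labels an HTML page as text/plain:
    print(effective_type(b"<!DOCTYPE html><html>...</html>"))  # text/html
    print(effective_type(b"Just plain text."))                 # text/plain

Note that this is exactly the behavior that invites spoofing if extended to new contexts such as <video>: the server's declared type is silently overridden based on content.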
Received on Sunday, 31 May 2009 17:22:56 UTC