Re: NEW ISSUE: content sniffing

Adam Barth wrote:
> 
> You're ignoring the reality of existing Web content.  To interoperate
> with existing Web content, a user agent must consider both the
> Content-Type headers and the content when determining the media type
> contained in a response.  To claim otherwise is fantasy.

No, it's a statement of fact.

Roy has (correctly) pointed out to you, multiple times, that the flaw you
perceive as "broken web sites" is the direct result of inaccurate Content-Type
headers being sniffed by user agents and rendered inaccurately.

The inaccurate headers are a direct -result- of content sniffing by the
handful of user agents which tolerate inappropriate server behavior.
You perceive this to be a user agent issue.  It is a flaw in the servers.

There is no need to modify the spec (and I agree the reference to content
sniffing can be dropped altogether) to describe what the UAs have foisted
on the world.

As soon as all browsers conform to spec, authors/administrators will correct
their errors, because those errors will be obvious to them.

Ponder for a moment: 10 years ago, 1% - 3% of content could not be properly
rendered with the Content-Type headers provided.  Today, has that number
grown?  No, it has probably shrunk.

Look, Adam, as soon as you describe how, by spec, I can offer an HTML file
as an example in a text/plain representation (for illustration of tags)
in spite of inappropriate behavior by IE and the host of others, I'll
respect your position.  But your rants are getting irritating.  If you
insist that IE's autodetection features are of value, you've lost my
attention already, because the world would not be polluted by UTF-7 XSS
vulnerabilities if not for such obnoxious, spec-noncompliant behavior.
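To make the text/plain complaint concrete, here is a minimal sketch of the
kind of heuristic at issue.  This is a hypothetical, simplified sniffer of my
own construction, not any particular browser's algorithm; it only illustrates
why a declared text/plain header cannot be relied upon once a UA second-guesses
it.

```python
def sniff(declared_type: str, body: bytes) -> str:
    """Return the media type a sniffing UA might actually use.

    Hypothetical heuristic for illustration only: promote text/plain
    to text/html whenever the body looks like markup.
    """
    if declared_type == "text/plain" and body.lstrip().lower().startswith(b"<html"):
        # The declared header is overridden: the "example of tags" an
        # author wanted to show as plain text gets rendered as a page.
        return "text/html"
    return declared_type

# An HTML sample served as text/plain for illustration of its tags:
assert sniff("text/plain", b"<html><body>Hello</body></html>") == "text/html"
# Only content that does not look like markup keeps its declared type:
assert sniff("text/plain", b"just some prose") == "text/plain"
```

Under a heuristic like this, no server configuration can guarantee that an
HTML sample is displayed as plain text, which is precisely the interoperability
cost of sniffing.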

FWIW, I AM a fan of content sniffing: of local filesystem data with no other
semantic clues as to its contents.  HTTP is designed to eliminate this
requirement, but browser user agents must retain it for file:// content.
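For file:// content there is no Content-Type header at all, so the UA must
guess from the clues it has, typically the file extension (and, failing that,
the leading bytes).  A minimal illustration using Python's standard-library
mimetypes module, which does exactly this extension-based guess:

```python
import mimetypes

# With no Content-Type header available, the filename extension is the
# primary semantic clue for local content.
for name in ("notes.html", "notes.txt", "photo.png"):
    guessed, _encoding = mimetypes.guess_type(name)
    print(name, "->", guessed)
```

This kind of guessing is legitimate where no authoritative metadata exists;
the objection above is only to applying it where HTTP has already supplied
the answer.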

Received on Friday, 3 April 2009 02:44:09 UTC