Re: NEW ISSUE: content sniffing

Are there any plans to address the root cause of the issue?

Namely, that content authors have no mechanism to specify entity 
attributes, only entity content.
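
To make that concrete: with a typical static-file setup, the label on 
the entity comes from server configuration, not from the author. A 
quick Python sketch (the file name is made up):

import mimetypes

# Stock file servers map extensions to media types; the author
# supplies only the bytes, never the label.
ctype, _encoding = mimetypes.guess_type("report.data")
print(ctype)  # None -> many servers then fall back to text/plain or
              # application/octet-stream, which is what invites sniffing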

Combining the entity (MIME) headers with the transport (HTTP) headers 
has created this problem.  How about splitting them out again?

It would of course require an iteration of HTTP.
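
As a rough illustration (purely hypothetical, not a proposed wire 
format): the entity would carry its own MIME headers, authored together 
with the content, while HTTP kept only hop-level headers. In Python 
terms:

from email.message import EmailMessage

# The author builds the entity, headers and all
entity = EmailMessage()
entity.set_content("<!DOCTYPE html><p>hello</p>", subtype="html")
print(entity["Content-Type"])  # text/html; charset="utf-8"

# The transport would carry only hop-level concerns, e.g.
transport_headers = {
    "Transfer-Encoding": "chunked",
    "Connection": "keep-alive",
}
wire_entity = entity.as_bytes()  # entity is opaque to the transport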

But surely until the root cause is fixed, we can only apply band-aids to 
the problem.


David Morris wrote:
>
>
> On Tue, 31 Mar 2009, Adam Barth wrote:
>
>> On Tue, Mar 31, 2009 at 12:51 PM, Mark Baker <distobj@acm.org> wrote:
>>> On Tue, Mar 31, 2009 at 3:37 PM, Adam Barth <w3c@adambarth.com> wrote:
>>>> When different user agents use different sniffing algorithms, content
>>>> authors pay a large cost, both in terms of compatibility and in terms
>>>> of security.  For user agents that wish to perform sniffing, I think
>>>> we'd be doing the Web a service by specifying which algorithm they
>>>> should use.
>>>
>>> I agree, which is why I suggested a link from 2616bis to the
>>> algorithm.  Do you feel that to be insufficient?  If so, why?
>>
>> I don't have a strong opinion about which document should contain the
>> algorithm, but I think we're better off making the algorithm normative
>> (for those agents that wish to sniff) rather than informative.  That
>> will help prevent developers of sniffing user agents from implementing
>> divergent sniffing algorithms.
>
> I disagree ... encoding what is essentially a heuristic algorithm,
> which will need to change as content types morph, into standard
> status is the wrong thing to do. Certainly in the HTTP standard.
>
> I recall some months ago a 'proposal' for some kind of flag which
> essentially said "believe what I say or reject my content": sniffing
> not allowed. Something like that makes sense.
>
> Sniffing, if documented/standardized, belongs in a different document.
> Sniffing started because of incorrect marking of response content.
> Getting engineers who needed to sniff in the first place to limit
> themselves to a common algorithm seems unlikely. Even following a
> highly static process in a dynamic place like the web makes no sense
> to me.
>
> Dave Morris
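
Re the flag David recalls: that sounds like the X-Content-Type-Options: 
nosniff response header that IE8 just shipped. A minimal sketch of a 
server setting it (plain WSGI; the handler and payload are my own 
invention):

from wsgiref.simple_server import make_server

def app(environ, start_response):
    body = b'{"status": "ok"}'
    start_response("200 OK", [
        ("Content-Type", "application/json"),   # believe this...
        ("X-Content-Type-Options", "nosniff"),  # ...don't second-guess it
        ("Content-Length", str(len(body))),
    ])
    return [body]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()

Whether agents other than IE8 honour that flag is of course the same 
interoperability question Adam raises above.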

-- 
Adrien de Croy - WinGate Proxy Server - http://www.wingate.com
