
Re[2]: Straw-man for our next charter

From: Adrien W. de Croy <adrien@qbik.com>
Date: Sun, 29 Jul 2012 22:59:42 +0000
To: "Larry Masinter" <masinter@adobe.com>, "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>
Message-Id: <em704e4b8a-ca78-4787-810d-6b51e6587714@bombed>

We see this problem a lot at the gateway.  We have processing agents 
that only want to process, say, text/html, and really don't like getting 
streamed MP4s labelled as text/html by some brain-dead server.
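As a rough sketch of the gateway-side check described above (names and the magic-byte test are illustrative assumptions, not our actual product code): compare the declared Content-Type against the first bytes of the body. An MP4 stream carries an ISO base-media "ftyp" box at offset 4, which is clearly not HTML.

```python
def looks_like_mp4(body: bytes) -> bool:
    """True if the payload starts with an ISO base-media 'ftyp' box."""
    return len(body) >= 8 and body[4:8] == b"ftyp"

def mislabelled(declared_type: str, body: bytes) -> bool:
    """Flag bodies declared as text/html whose bytes are actually an MP4 stream."""
    base_type = declared_type.split(";")[0].strip().lower()
    return base_type == "text/html" and looks_like_mp4(body)

# An MP4 header labelled text/html is caught; correctly labelled video is not.
sample = b"\x00\x00\x00\x18ftypmp42" + b"\x00" * 8
print(mislabelled("text/html", sample))   # True
print(mislabelled("video/mp4", sample))   # False
```

A real gateway needs a much larger signature table, but even this tiny check catches the exact mislabelling complained about here.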

But in the end, where does the server get the C-T from?  Most just do a 
map lookup on file extension.
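That extension-to-type map lookup can be seen with Python's stdlib mimetypes module (real servers ship their own mime.types table, but the behaviour is the same: the file's bytes are never consulted):

```python
import mimetypes

# The server never looks inside the file; the extension alone decides the type.
for name in ("index.html", "movie.mp4", "README"):
    ctype, _encoding = mimetypes.guess_type(name)
    print(name, "->", ctype)
```

Rename an MP4 to end in .html and such a server will happily send it as text/html, which is exactly how the mislabelling above arises.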

Even if we tried to push the meta-data into the resource itself, so it 
could be specified by the actual author (think of a hosted site, where 
the site maintainer has little or no control over the content types the 
server will send), how do we trust that information?  An attacker can 
label any content as any type if they can find some purpose to do so.

In the end, I think this makes Content-Type largely unreliable. 
I don't see this changing with 2.0 (at least not properly), unless we 
introduce the concept of trust - either sign content by someone 
vouching for its type, or run RBLs of known bad servers.

Do we even need C-T if clients are sniffing anyway?
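The client behaviour being argued about here (and quoted from the sniffing draft in Larry's message below) amounts to computing an "effective" type from both the header and the payload bytes. A toy version, with a deliberately tiny magic-byte table - this is an illustrative subset, not the real WHATWG sniffing algorithm:

```python
# A minimal sniff-then-fallback: content signatures win over the declared
# header; otherwise the header is trusted. Table is an illustrative subset.
MAGIC = [
    (b"\x89PNG\r\n\x1a\n", "image/png"),
    (b"GIF87a", "image/gif"),
    (b"GIF89a", "image/gif"),
    (b"%PDF-", "application/pdf"),
]

def effective_type(declared: str, body: bytes) -> str:
    for magic, ctype in MAGIC:
        if body.startswith(magic):
            return ctype   # the content overrides the header
    return declared        # no signature matched: trust the header

print(effective_type("text/html", b"%PDF-1.4 ..."))   # application/pdf
print(effective_type("text/html", b"<html>..."))      # text/html
```

Note the answer to the question above is implicit in the fallback branch: the header still decides every case the signature table doesn't cover, so clients can't drop C-T entirely even while sniffing.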


------ Original Message ------
From: "Larry Masinter" <masinter@adobe.com>
To: "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>
Sent: 29/07/2012 3:01:08 a.m.
Subject: RE: Straw-man for our next charter
>The sniffing I was in particular hoping to stop is content-type sniffing.
>" Many web servers supply incorrect Content-Type header fields with
>  their HTTP responses.  In order to be compatible with these servers,
>  user agents consider the content of HTTP responses as well as the
>  Content-Type header fields when determining the effective media type
>  of the response."
>If browsers suddenly stopped sniffing HTTP/1.1 content, it would break existing web sites, so of course the browser makers are reluctant to do that.
>However, if it was a requirement to supply a _correct_ content-type header for HTTP/2.0, and no HTTP/2.0 client sniffed, then sites upgrading to HTTP/2.0 would fix their content-type sending (because when they were deploying HTTP/2.0 they would have to in order to get any browser to work with them.)
>Basically, sniffing is a wart which backward compatibility keeps in place. Introducing a new version is a unique opportunity to remove it.
>The improved performance would come from not having to look at the content to determine its type before routing to the appropriate processor.
>-----Original Message-----
>From: Amos Jeffries [mailto:squid3@treenet.co.nz]
>Sent: Friday, July 27, 2012 11:53 PM
>To: ietf-http-wg@w3.org
>Subject: Re: Straw-man for our next charter
>On 28/07/2012 6:39 p.m., Larry Masinter wrote:
>>re changes to semantics: consider the possibility of eliminating
>>"sniffing" in HTTP/2.0. If sniffing is justified for compatibility
>>with deployed servers, could we eliminate sniffing for 2.0 sites?
>>It would improve reliability, security, and even performance. Yes,
>>popular browsers would have to agree not to sniff sites running 2.0,
>>so that sites wanting 2.0 benefits will fix their configuration.
>>Likely there are many other warts that can be removed if there is a
>>version upgrade.
>Which of the several meanings of "sniffing" are you talking about exactly?
Received on Sunday, 29 July 2012 23:00:10 UTC
