- From: Julian Reschke <julian.reschke@gmx.de>
- Date: Sun, 06 Jul 2008 12:30:39 +0200
- To: Ian Hickson <ian@hixie.ch>
- CC: Sam Ruby <rubys@us.ibm.com>, HTTP Working Group <ietf-http-wg@w3.org>, "public-html@w3.org" <public-html@w3.org>, e_lawrence@hotmail.com
Ian Hickson wrote:
> The only way you can get it to _not_ work in all the other browsers would
> be for all the browsers to be updated to support this simultaneously, with
> a simultaneous launch, and have the entire installed base upgraded at the
> same time. In practice, more than 25% of the install base still uses _IE6_
> today. The amount of time between when a feature can first be used and
> when a feature cannot be copy-and-pasted by an ignorant author who isn't
> using the latest browsers is several _years_. That's plenty of time to
> poison the well and ruin the chances of the new feature getting deployed
> across all browsers.

It seems to me that you're totally missing the point here.

Before HTML5, the specifications told UAs to respect the MIME type (see HTTP, MIME, WebArch...). UAs are known not to do this, and to vary in how they implement sniffing.

That means that today, a content author has no way to ensure that recipients will not do content sniffing. In many cases, it doesn't matter. In some, it does.

Giving content authors more control over what recipients do seems to be a good thing to me, even if it only works in one of the major UAs first, in particular if that one is known to do the most content sniffing today.

Yes, I'd prefer all of this not to be necessary. The less sniffing is done, the better. Therefore I'd encourage everybody to try to get the number of cases as small as possible.

And no, I'm not convinced that a Content-Type parameter is the best approach, in particular as I don't see how it could be registered properly. A new response header may be better.

And yes, I'd prefer it if Microsoft would submit proposals like this to a public forum, instead of just telling us "this is what we're going to do" (the canvas way :-).

>>> The way out of this mess is containment. We define a strict set of
>>> Content-Type sniffing rules that are required to render the Web, and
>>> we get the browsers to converge on only sniffing for those. ...
>> So you can get the browser vendors to converge on a precise set of
>> sniffing rules, but you can't get them to agree on an opt-out?
>
> The precise set is the set that is compatible with rendering the legacy
> content as expected, the minimal subset compatible with what browsers do.
> It can also be changed in response to browser feedback when it is
> discovered that it isn't quite perfect. It is far easier to incrementally
> move towards a set that is trying to be compatible with what the browsers
> already do than it is to get the browsers to jump to an extreme.

I wouldn't consider trusting the server-supplied content type an "extreme."

> ...
>> This leads to the question: what is the essential difference between
>> "text/plain" as defined by the spec and therefore is presumed to be
>> workable (despite all the evidence to the contrary), and
>> "authoritative=true" which is being rejected out of hand as unworkable.
>
> text/plain might not be workable. If Opera and Safari find they have to
> change as well, then the spec will have to change too.

...I don't think this answers Sam's question. What's the difference between considering the encoding as input, but not another parameter?

BR, Julian
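[Editorial note: the thread above contrasts two candidate opt-out signals: a `Content-Type` parameter such as `authoritative=true`, and a new response header. Neither was a registered mechanism at the time of this message; the header name below is purely illustrative. A minimal sketch of how a UA might honor either signal when deciding whether sniffing is permitted:]

```python
def may_sniff(headers):
    """Return True if a UA may apply content sniffing to this response.

    `headers` maps lowercase header names to values. Both opt-out signals
    checked here are proposals from the thread, not registered mechanisms.
    """
    ctype = headers.get("content-type", "")
    # Proposal 1: a Content-Type parameter, e.g.
    #   Content-Type: text/plain; authoritative=true
    params = [p.strip().lower() for p in ctype.split(";")[1:]]
    if "authoritative=true" in params:
        return False
    # Proposal 2: a separate response header (name hypothetical)
    if headers.get("x-content-type-authoritative", "").strip().lower() == "true":
        return False
    # No opt-out present: legacy behaviour, sniffing allowed.
    return True
```

The response-header variant sidesteps the registration problem Julian raises: a parameter would have to be defined for every media type (or for the `Content-Type` syntax itself), whereas a header is a single registration that applies to the whole response.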
Received on Sunday, 6 July 2008 10:31:25 UTC