- From: Brian Campbell <lambda@continuation.org>
- Date: Wed, 1 Sep 2010 09:13:43 -0400
On Aug 31, 2010, at 9:40 AM, Boris Zbarsky wrote:

> On 8/31/10 3:36 AM, Ian Hickson wrote:
>>> You might say "Hey, but aren't you content sniffing then to find the
>>> codecs" and you'd be right. But in this case we're respecting the MIME
>>> type sent by the server - it tells the browser to whatever level of
>>> detail it wants (including codecs if needed) what type it is sending.
>>> If the server sends 'text/plain' or 'video/x-matroska' I wouldn't
>>> expect a browser to sniff it for Ogg content.
>>
>> The Microsoft guys responded to my suggestion that they might want to
>> implement something like this with "what's the benefit of doing that?".
>
> One obvious benefit is that videos with the wrong type will not work,
> and hence videos will be sent with the right type.

What makes you say this? Even if videos are sent with the right type
initially, the correct types are at high risk of bitrotting.

The big problem with MIME types is that they don't stick to files very
well. Someone might get the types working when they first publish video,
but if they move to a different web server, upgrade their server, have
their video mirrored, or any of a number of other things, they can lose
the proper association between files and MIME types.

The real problem is that there is no standard way of storing and
transmitting file type metadata on the majority of filesystems and
internet protocols, so people must maintain separate databases of MIME
types, which are extremely easy to lose when moving between web servers.
Until this is fixed (and it is a big problem; even Apple gave up on
tracking file type metadata years ago because of its incompatibility with
how other systems work), it will simply be too hard to maintain working
Content-Type headers, and sniffing will be much more likely to produce
the effect the author intended.

It seems that periodically, web standards bodies decide "this time, if
we're strict, people will just get the content right or it won't work"
(XHTML with strict XML parsing rules, for example), and invariably people
manage to get it wrong anyhow. Sure, the page is fine when the author
tests it the first time, but a single unescaped character in a comments
field breaks the whole page. That drives users toward the browsers and
technologies that are less strict and actually show them what they want
to see, rather than breaking over something outside their control.

-- Brian
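For concreteness, a sketch of the two mechanisms in play (assuming Apache
httpd; the extensions and directives shown are illustrative). The
self-describing header Ian describes can carry codec-level detail via the
codecs parameter:

    Content-Type: video/ogg; codecs="theora, vorbis"

The separate, easy-to-lose type database Brian describes is typically a
server-side mapping from file extensions to types, e.g. in an Apache
configuration or .htaccess file:

    # The extension-to-type mapping lives in server configuration,
    # not in the video files themselves, so it is silently lost when
    # the files are mirrored or moved to another server.
    AddType video/ogg  .ogv
    AddType video/webm .webm

If those AddType lines are not carried over, the same files are served
with the server's default type (often text/plain on Apache 2.2), and a
strict, non-sniffing browser will refuse to play them.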