- From: Henri Sivonen <hsivonen@iki.fi>
- Date: Wed, 25 Jun 2008 13:16:32 +0300
- To: uri@w3.org
- Cc: tbray@textuality.com
Tim Bray wrote:
> Also, I'm not enthusiastic about writing standards unless
> there's an obvious pain point that needs to be addressed. If the
> implementors are in general doing the right thing in a compatible way,
> is any further spec work required?

Here's a real case: Validator.nu has a feature called "Image Report". It
reads a document from the Web, keeps a stack of the HTTP-level URI,
<base> and xml:base context, and resolves <img src> to absolute URIs
according to IRI rules. Now if the input document is not encoded in
UTF-8 and the src attribute contains non-ASCII in the query string, the
result no longer dereferences to the same image it would in a browser.
(This is an untested statement based on the assumption that <img src>
behaves like <a href>, which isn't a safe assumption.)

Thus, even if the incumbent browser vendors have figured this stuff out,
everyone else who seeks to consume real Web content will initially write
incompatible software if reality-based conversion to ASCII-only URIs
isn't written down somewhere.

-- 
Henri Sivonen
hsivonen@iki.fi
http://hsivonen.iki.fi/
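[The divergence described above can be sketched briefly. Under IRI rules (RFC 3987), non-ASCII characters are percent-encoded as UTF-8; browsers, by contrast, have historically percent-encoded non-ASCII in the query component using the document's own encoding. A minimal Python illustration, assuming a page encoded as windows-1252 and that the browser follows the document-encoding behavior:]

```python
from urllib.parse import quote

# A non-ASCII character appearing in an <img src> query string.
query_char = "\u00e4"  # 'ä'

# IRI rules (RFC 3987): percent-encode the UTF-8 bytes.
iri_result = quote(query_char.encode("utf-8"))

# Typical browser behavior for a windows-1252 document:
# percent-encode using the document's encoding instead.
browser_result = quote(query_char.encode("windows-1252"))

print(iri_result)      # %C3%A4
print(browser_result)  # %E4
```

[The two results name different resources on the server, so a consumer that applies IRI rules uniformly will fetch a different image than the browser would for the same markup.]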
Received on Wednesday, 25 June 2008 10:17:18 UTC