- From: Nico Williams <nico@cryptonector.com>
- Date: Tue, 9 Dec 2014 17:54:26 -0600
- To: Matthew Kerwin <matthew@kerwin.net.au>
- Cc: "Phillips, Addison" <addison@lab126.com>, IETF Apps Discuss <apps-discuss@ietf.org>, "uri@w3.org" <uri@w3.org>
On Wed, Dec 10, 2014 at 08:59:35AM +1000, Matthew Kerwin wrote:
> On 10 December 2014 at 08:37, Phillips, Addison <addison@lab126.com> wrote:
> > Although normalization is often a good idea... normalization might be a
> > problem if the local filesystem allows normalized and non-normalized
> > representations both to appear. You wouldn't be able to specify a
> > non-normalized representation.
>
> Do you have an example? I'm trying to think it through, but I keep going in
> circles. The one I think of is ext[2-4] where the filesystem stores octet
> sequences, and shell/applications/etc. use things like the user's locale
> environment when representing those octets as text strings. Are you saying
> that if we mandate NFC normalisation of URIs, you can't distinguish between
> a file whose filename octets are {0xE4} vs {0xC3, 0xA4} (i.e. U+00E4 "ä"
> in Windows-1252 / UTF-8)?
>
> Wouldn't "file://%E4" cover that?
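[Percent-encoding the raw filename octets does keep the two spellings distinguishable; a minimal Python sketch of the point, using stdlib urllib.parse:]

```python
from urllib.parse import quote, unquote_to_bytes

# Two distinct on-disk filenames that both "mean" U+00E4 "ä":
latin1_name = bytes([0xE4])        # "ä" encoded as Windows-1252/Latin-1
utf8_name   = bytes([0xC3, 0xA4])  # "ä" encoded as UTF-8

# Percent-encoding the octets themselves yields distinct URI strings...
assert quote(latin1_name) == "%E4"
assert quote(utf8_name) == "%C3%A4"

# ...and round-trips back to the exact original octets:
assert unquote_to_bytes(quote(latin1_name)) == latin1_name
assert unquote_to_bytes(quote(utf8_name)) == utf8_name
```

So as long as the URI carries raw octets rather than a normalized Unicode string, the two files stay distinguishable.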
Suppose one app normalizes and another doesn't. The user might be
unable to type the name of an existing file in order to open it.
This is a bit contrived because they'll still be able to pick the file
in a file selection dialog, but still.
A classic example many years ago was a git repo that had such characters
in some filenames and which then broke on OS X. I don't have a link
handy.
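[The mismatch in that case was NFC vs. NFD: HFS+ on OS X stores filenames in a decomposed form, so a name committed precomposed on Linux no longer matches byte-for-byte. A small Python sketch of the underlying difference:]

```python
import unicodedata

nfc = "\u00e4"    # U+00E4 "ä", precomposed (NFC)
nfd = "a\u0308"   # U+0061 + U+0308 combining diaeresis, decomposed (NFD)

# Visually identical, but distinct code point sequences...
assert nfc != nfd
# ...and distinct UTF-8 octets on disk, so a byte-exact
# comparison (as git does) sees two different filenames:
assert nfc.encode("utf-8") != nfd.encode("utf-8")

# Normalization maps one onto the other in either direction:
assert unicodedata.normalize("NFC", nfd) == nfc
assert unicodedata.normalize("NFD", nfc) == nfd
```

Either app in the scenario above could be sitting on a different side of that normalization.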
Nico
--
Received on Tuesday, 9 December 2014 23:54:50 UTC