- From: William A. Rowe, Jr. <wrowe@rowe-clan.net>
- Date: Tue, 16 Sep 2008 18:08:29 -0500
- To: John Cowan <cowan@ccil.org>
- CC: "Phillips, Addison" <addison@amazon.com>, "Roy T. Fielding" <fielding@gbiv.com>, Mark Nottingham <mnot@mnot.net>, URI <uri@w3.org>, Joe Gregorio <joe@bitworking.org>, David Orchard <orchard@pacificspirit.com>, Marc Hadley <Marc.Hadley@Sun.COM>
John Cowan wrote:
> Phillips, Addison scripsit:
>
>> We have pretty good knowledge of what makes a good Unicode
>> identifier. If we're going to assign variable names in a new pattern
>> language, why are we limiting it to alphanum? The software we are
>> linking to (the part generating the variables that get substituted in)
>> may not--indeed probably does not--have that same limitation.
>
> Given that URIs are ASCII-only, I think it is sufficient to have
> identifiers be ASCII-only too.

Actually, I thought they were opaque bytestreams wrapped in ASCII, e.g. %80 or %FF in a URI should be valid in the resource path, no?

I'm wondering why templates don't consider implementation in terms of RFC 3987, or at least ensure IRI compatibility, for protocols or use cases which desire it. This way some of Roy's observations with respect to a defined normalization form are honored.

I'm unconcerned with the variable names being i18n; the application author determines these. It's their values that ultimately concern me :)
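[The point above can be sketched in a few lines of Python. This is an illustrative expansion helper, not part of any proposed template spec: it NFC-normalizes a value per RFC 3987's recommendation, encodes it as UTF-8, and percent-encodes the resulting octets, so non-ASCII values still yield an ASCII-only URI. The function name `expand_value` is made up for the example.]

```python
# Sketch: substituting an i18n *value* into a URI template while keeping
# the resulting URI ASCII-only, since any octet may appear percent-encoded.
import unicodedata
from urllib.parse import quote

def expand_value(value: str) -> str:
    # RFC 3987 recommends NFC normalization before UTF-8 encoding,
    # which speaks to the "defined normalization form" concern.
    normalized = unicodedata.normalize("NFC", value)
    return quote(normalized.encode("utf-8"), safe="")

print(expand_value("café"))          # caf%C3%A9
print(quote(b"\x80\xff", safe=""))   # %80%FF -- arbitrary octets are fine
```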
Received on Tuesday, 16 September 2008 23:09:11 UTC