
Re: why use IRIs?

From: Mark Nottingham <mnot@mnot.net>
Date: Wed, 4 Jul 2012 16:13:12 +1000
Message-Id: <AC0E0BB3-1876-4461-AAB6-4CFF4DBB6422@mnot.net>
To: public-iri@w3.org

I tend to agree with Peter.

The experience of using IRIs as identifiers in Atom was, IME, a disaster. Identifiers need to be resistant to spoofing and mistakes, and exposing a significant portion of the Unicode repertoire in them doesn't do anyone any good.

As a presentation element? Fine, but AFAIK We Don't Do That Here. In places that users touch (e.g., HTML)? Sure, but We Don't Do That Here either.

There may be a *few* places in protocols that are user-visible, but AFAICT we're not doing a lot of new protocols recently (thank goodness).

Björn said:

> How would you like it if URIs could use only 20 of the 26 letters in the
> English alphabet and you had to encode, decode and convert them all the
> time, or use awkward transliterations to avoid having to do so?

URIs already have a constrained syntax; you can't use certain characters in certain places. As long as people can put IRIs into HTML and browser address bars, I don't think they'll care.

Martin said:

> I think the real motivation would be people looking at HTTP traces and 
> preferring to see Unicode rather than lots of %HH strings. Of course the 
> number of people looking at HTTP traces is low, and they are not end users.

Is this use case really worth the pain, inefficiency, and very likely security vulnerabilities of transcoding between IRIs and URIs every time a message hops between HTTP 2.0 and 1.1? I don't think so.
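For concreteness, the transcoding in question is roughly the RFC 3987 mapping: going IRI to URI, non-ASCII characters are UTF-8 encoded and percent-escaped; going back, the %HH octets are decoded to Unicode. A minimal sketch (the helper names are mine, and this glosses over edge cases like pre-existing literal "%" characters, which make the round trip lossy in general):

```python
from urllib.parse import quote, unquote

def iri_to_uri(iri: str) -> str:
    # Percent-encode everything outside the ASCII URI character set,
    # leaving reserved and unreserved URI characters (and %) intact.
    return quote(iri, safe=":/?#[]@!$&'()*+,;=%-._~")

def uri_to_iri(uri: str) -> str:
    # Decode %HH sequences (interpreted as UTF-8) back into Unicode.
    return unquote(uri)

iri = "http://example.org/r\u00e9sum\u00e9"
uri = iri_to_uri(iri)      # "http://example.org/r%C3%A9sum%C3%A9"
assert uri_to_iri(uri) == iri
```

Doing this conversion (and re-validating the result) at every 2.0/1.1 boundary is the cost being weighed against the readability of traces.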


My English-centric .02; ŸṀṂṼ.

Regards,


--
Mark Nottingham   http://www.mnot.net/
Received on Wednesday, 4 July 2012 06:13:39 GMT