- From: Jan Algermissen <jan.algermissen@nordsc.com>
- Date: Tue, 16 Oct 2012 13:44:52 +0200
- To: Anne van Kesteren <annevk@annevk.nl>
- Cc: Martin J. Dürst <duerst@it.aoyama.ac.jp>, Robin Berjon <robin@w3.org>, Ted Hardie <ted.ietf@gmail.com>, Larry Masinter <masinter@adobe.com>, "plh@w3.org" <plh@w3.org>, "Peter Saint-Andre (stpeter@stpeter.im)" <stpeter@stpeter.im>, "Pete Resnick (presnick@qualcomm.com)" <presnick@qualcomm.com>, "www-archive@w3.org" <www-archive@w3.org>, "Michael(tm) Smith" <mike@w3.org>
On Oct 16, 2012, at 1:29 PM, Anne van Kesteren wrote:

> I'm not arguing URLs should be allowed to contain SP, just that they
> can (and do) in certain contexts and that we need to deal with that
> (either by terminating processing or converting it to %20 or ignoring
> it in case of domain names, if I remember correctly).

I don't understand your perceived problem with having two specs. There is the RFC, which tells us what a valid URI looks like. In addition to that, you can standardize 'recovery' algorithms for turning broken URIs into valid ones, perhaps with different 'heuristics levels' before giving up and reporting an error.

Any piece of software that wishes to be nice to 'URI providers' and process broken URIs to some extent can apply that standardized algorithm in a fixup phase before handing the result on to the component that expects a valid URI. The emphasis is then on fixing to obtain a valid URI as early in the stack as possible, which avoids a fork among software components that deal with URIs.

I just don't see any need to mangle any specs. The syntax definition and the fixing algorithm are orthogonal aspects, really. They belong in different specs.

Jan
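[Editor's note: the 'fixup phase' idea above can be sketched in a few lines. This is an illustrative example only, not a standardized algorithm; the function name `fixup_uri` and the single SP-to-%20 heuristic are assumptions for demonstration, drawn from the conversion Anne mentions.]

```python
from urllib.parse import urlsplit

def fixup_uri(raw: str) -> str:
    """Hypothetical recovery pass: apply simple heuristics to turn a
    broken URI into a valid one before a strict parser sees it."""
    s = raw.strip()
    # Heuristic mentioned in the thread: percent-encode literal spaces.
    s = s.replace(" ", "%20")
    return s

fixed = fixup_uri("http://example.com/a b?q=x y")
print(fixed)  # http://example.com/a%20b?q=x%20y
# Downstream components then only ever see the repaired form.
assert urlsplit(fixed).path == "/a%20b"
```

Running the fixup as early as possible means every later component can assume a valid URI, which is exactly the separation of concerns argued for above.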
Received on Tuesday, 16 October 2012 11:45:22 UTC