- From: Anne van Kesteren <annevk@annevk.nl>
- Date: Tue, 16 Oct 2012 14:09:06 +0200
- To: Jan Algermissen <jan.algermissen@nordsc.com>
- Cc: Martin J. Dürst <duerst@it.aoyama.ac.jp>, Robin Berjon <robin@w3.org>, Ted Hardie <ted.ietf@gmail.com>, Larry Masinter <masinter@adobe.com>, "plh@w3.org" <plh@w3.org>, "Peter Saint-Andre (stpeter@stpeter.im)" <stpeter@stpeter.im>, "Pete Resnick (presnick@qualcomm.com)" <presnick@qualcomm.com>, "www-archive@w3.org" <www-archive@w3.org>, "Michael(tm) Smith" <mike@w3.org>
On Tue, Oct 16, 2012 at 1:44 PM, Jan Algermissen
<jan.algermissen@nordsc.com> wrote:
> On Oct 16, 2012, at 1:29 PM, Anne van Kesteren wrote:
>> I'm not arguing URLs should be allowed to contain SP, just that they
>> can (and do) in certain contexts and that we need to deal with that
>> (either by terminating processing or converting it to %20 or ignoring
>> it in case of domain names, if I remember correctly).
>
> I am not understanding your perceived problem with two specs.

I think your context quoting went wrong.

> In addition to that you can standardize 'recovery' algorithms for turning
> broken URIs to valid ones. Maybe with different 'heuristics levels' before
> giving up and reporting an error.

The algorithm is not for "fixing up". It's for processing URLs,
including those that happen to be invalid. The end result is not
always valid per STD 66.

> Any piece of software that wishes to be nice on 'URI providers' and process
> broken URIs to some extent can apply that standardized algorithm in a fixup
> phase before handing it on to the component that expects a valid URI.

I do not think it makes sense to have different URL parsers (one with
a "be strict" bit works). Just like it does not make sense to have two
different HTML parsers in your software stack.

--
http://annevankesteren.nl/
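[As an illustration of the "single parser with a 'be strict' bit" idea discussed above, here is a minimal sketch. The function and option names are hypothetical and this is not the actual URL parsing algorithm; it only shows one code path that either rejects or recovers from the SP example mentioned in the thread.]

```typescript
// Hypothetical sketch: one parser, one "be strict" flag, rather than
// two separate strict/lenient parsers in the software stack.

interface ParseOptions {
  strict: boolean; // true: invalid input is an error; false: try to recover
}

function parseUrlSketch(input: string, opts: ParseOptions): string {
  // One recovery rule from the thread: a literal space is invalid per
  // STD 66, but URLs found in the wild contain them anyway.
  if (input.includes(" ")) {
    if (opts.strict) {
      throw new Error("invalid URL: unescaped space");
    }
    // Non-strict mode recovers by percent-encoding the space.
    input = input.replace(/ /g, "%20");
  }
  // Hand the (possibly repaired) string to an actual parser.
  return new URL(input).href;
}

// parseUrlSketch("http://example.org/a b", { strict: false })
//   -> "http://example.org/a%20b"
// parseUrlSketch("http://example.org/a b", { strict: true })
//   -> throws
```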
Received on Tuesday, 16 October 2012 12:09:34 UTC