- From: Nicolas Mailhot <nicolas.mailhot@laposte.net>
- Date: Fri, 21 Mar 2014 15:05:57 +0100
- To: "Julian Reschke" <julian.reschke@gmx.de>
- Cc: "Nicolas Mailhot" <nicolas.mailhot@laposte.net>, "Mark Nottingham" <mnot@mnot.net>, "HTTP Working Group" <ietf-http-wg@w3.org>, "Gabriel Montenegro" <gabriel.montenegro@microsoft.com>
On Fri, 21 Mar 2014 14:54, Julian Reschke wrote:
> On 2014-03-21 14:47, Nicolas Mailhot wrote:
>> ...
>> I'll give you a big secret: nobody writes percent-escaped URLs manually
>> if he can avoid it, just like nobody uses HTML entities.
>>
>> The bulk of percent-escaped URLs have been produced by automatons
>> converting human-written plain text that used the document's main
>> encoding, so yes, I do expect both encodings to match if the automaton
>> was coded properly.
>> ...
>
> That assumes that the "automaton" that did the URI-escaping actually
> knew the document encoding, and that the document the URI appears in
> never gets re-encoded.

Yes, sure, there will always be border cases, and people will continue to
invent convoluted ways to shoot themselves in the foot. But their number
won't go down unless the spec clearly states how URLs need to be decoded
(stated this way it seems really obvious, but are we arguing about
anything else?)

If there were no encoding errors, there would be no need to eradicate
them, and every example you find is one more reason to clear the swamp.

--
Nicolas Mailhot
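[Editor's note: the encoding dependence being argued over can be illustrated with a short Python sketch. The example string and variable names are illustrative, not taken from the thread; the point is that the same human-written text percent-escapes to different octets depending on which document encoding the escaping automaton assumed, so a consumer guessing the wrong encoding mis-decodes the URL.]

```python
from urllib.parse import quote, unquote

# The same human-written path segment, percent-escaped under two
# different document encodings (hypothetical example text):
text = "café"

utf8_escaped = quote(text.encode("utf-8"))      # 'caf%C3%A9'
latin1_escaped = quote(text.encode("latin-1"))  # 'caf%E9'

# A consumer that assumes UTF-8 decodes the first form correctly...
assert unquote(utf8_escaped, encoding="utf-8") == "café"

# ...but mangles the second: the lone 0xE9 byte is not valid UTF-8,
# and unquote's default errors='replace' substitutes U+FFFD.
assert unquote(latin1_escaped, encoding="utf-8") == "caf\ufffd"
```

This is exactly the "automaton knew the document encoding" assumption: the escaped form alone does not record which encoding produced it.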
Received on Friday, 21 March 2014 14:13:37 UTC