- From: Anne van Kesteren <annevk@annevk.nl>
- Date: Fri, 13 Sep 2013 12:34:20 +0100
- To: John C Klensin <john+w3c@jck.com>
- Cc: Jonathan Kew <jfkthame@googlemail.com>, John Daggett <jdaggett@mozilla.com>, Addison Phillips <addison@lab126.com>, Richard Ishida <ishida@w3.org>, W3C Style <www-style@w3.org>, www International <www-international@w3.org>
On Fri, Sep 13, 2013 at 12:27 PM, John C Klensin <john+w3c@jck.com> wrote:
> It is a somewhat different issue but, coming back to Jonathan's
> comment quoted above, I think one can make the case that a
> string that contains sufficiently invalid Unicode is an invalid
> string and that all of it should be shown as hexboxes or
> equivalent because the application cannot really know what was
> intended and telling the user that may be better than guessing.

You could make that case, but it would never happen. Apart from
performance issues (a long string with a lone surrogate at the end),
it's not compatible with deployed web content.

--
http://annevankesteren.nl/
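[Editor's note: a minimal sketch, not from the thread, illustrating the alternative being defended here: replacing each lone surrogate with U+FFFD per code unit rather than declaring the whole string invalid. The language (TypeScript) and the function name sanitizeLoneSurrogates are the editor's own choices for illustration.]

```typescript
// Sketch (editor's illustration): replace each lone surrogate with U+FFFD
// instead of treating the entire string as invalid. Only the broken code
// unit is affected, which matches how deployed web content expects strings
// to degrade, and no full validation pass is needed before display.
function sanitizeLoneSurrogates(input: string): string {
  let out = "";
  for (let i = 0; i < input.length; i++) {
    const code = input.charCodeAt(i);
    if (code >= 0xd800 && code <= 0xdbff) {
      // High surrogate: valid only when followed by a low surrogate.
      const next = i + 1 < input.length ? input.charCodeAt(i + 1) : 0;
      if (next >= 0xdc00 && next <= 0xdfff) {
        out += input[i] + input[i + 1];
        i++; // consume the surrogate pair
      } else {
        out += "\uFFFD"; // lone high surrogate
      }
    } else if (code >= 0xdc00 && code <= 0xdfff) {
      out += "\uFFFD"; // lone low surrogate (no preceding high surrogate)
    } else {
      out += input[i];
    }
  }
  return out;
}

// Example: a long string with a lone surrogate at the very end.
const s = "x".repeat(1000) + "\uD800";
console.log(sanitizeLoneSurrogates(s).endsWith("\uFFFD")); // true
```

With per-code-unit replacement, the single bad code unit at the end is the only part of the long string that changes, rather than the whole string being rendered as hexboxes.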
Received on Friday, 13 September 2013 11:34:47 UTC