
Re: [CSS21][css3-namespace][css3-page][css3-selectors][css3-content] Unicode Normalization

From: Anne van Kesteren <annevk@opera.com>
Date: Fri, 06 Feb 2009 11:50:17 +0100
To: "David Clarke" <w3@dragonthoughts.co.uk>, "Henri Sivonen" <hsivonen@iki.fi>
Cc: public-i18n-core@w3.org, "'W3C Style List'" <www-style@w3.org>
Message-ID: <op.uoxe131y64w2qv@annevk-t60.oslo.opera.com>

On Fri, 06 Feb 2009 11:34:28 +0100, David Clarke <w3@dragonthoughts.co.uk>  
wrote:
> If on the other hand we propose standards where the lack of
> normalisation is tolerated, but require late normalisation, we can
> produce a functional result. As they stand, the normalisation
> algorithms and checks are fast to execute if the input is already
> normalised to their form. With this in mind, the majority of the
> performance hit would only come when non-normalised data is presented.

Several people seem to assume that there would only be a performance
impact for non-normalized data. That is not true, if only because an
additional check has to be made to see whether the data is normalized
in the first place. (And as I said right at the start of this
thread, milliseconds do matter.)
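
A minimal Python sketch of the point (the input string is purely
illustrative; unicodedata.is_normalized implements the Unicode
quick-check): even when the data is already in NFC, deciding that it
is normalized still costs a pass over its code points.

    import unicodedata

    s = "some selector or identifier text"  # illustrative input

    # Even when the answer is "yes, already NFC", the quick-check
    # still has to inspect the string's code points, so already-
    # normalized input does not make the check free.
    if not unicodedata.is_normalized("NFC", s):  # Python 3.8+
        s = unicodedata.normalize("NFC", s)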

(Ignoring for the moment the enormous cost and pain of changing  
codepoint-equality-checks to canonical-equality-checks in widely deployed  
software and standards...)
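
For illustration, here are two canonically equivalent strings that a
codepoint-equality check treats as different; a canonical-equality
check would have to normalize both sides first (the strings are
made up for the example):

    import unicodedata

    a = "caf\u00e9"   # 'e' with acute as the precomposed U+00E9
    b = "cafe\u0301"  # 'e' followed by combining acute U+0301

    print(a == b)  # False: plain codepoint equality

    # Canonical equality: normalize both sides to NFC first.
    print(unicodedata.normalize("NFC", a) ==
          unicodedata.normalize("NFC", b))  # True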


-- 
Anne van Kesteren
<http://annevankesteren.nl/>
<http://www.opera.com/>
Received on Friday, 6 February 2009 10:51:19 GMT
