Re: [CSS21][css3-namespace][css3-page][css3-selectors][css3-content] Unicode Normalization

On Mon, February 2, 2009 11:18 pm, Henri Sivonen wrote:
>
>
> I think the right place to do normalization for Web formats is in the
> text editor used to write the code, and the normalization form should
> be NFC.
>

The normalisation form should be whatever is most appropriate for the task
at hand. There are reasons for using NFC, and there are reasons for using NFD.
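As a quick illustration of the difference (a Python sketch using the standard unicodedata module; the strings are my own examples, nothing from CSS):

```python
import unicodedata

s = "caf\u00e9"  # 'café' with the precomposed character U+00E9

nfc = unicodedata.normalize("NFC", s)  # composed form
nfd = unicodedata.normalize("NFD", s)  # decomposed form

# NFC keeps the precomposed é as one code point; NFD splits it into
# 'e' (U+0065) followed by a combining acute accent (U+0301).
print(len(nfc))     # 4 code points
print(len(nfd))     # 5 code points
print(nfc == nfd)   # False, although both render identically
```

The two strings look the same to a reader but compare unequal byte-for-byte, which is exactly why identifier matching in markup and style sheets is sensitive to the form an editor happens to emit.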

Although if normalisation is done at the editing level, then the basic
skills and knowledge required of a web developer need to be more
sophisticated than they presently are.

>
> If one is only concerned with addressing the issue for conforming
> content or interested in making problems detectable by authors, I
> think it makes sense to stipulate as an authoring requirement that both the
> unparsed source text and the parsed identifiers be in NFC and make
> validators check this (but not make non-validator consumers do
> anything about it).

Until UTN 11 v 3 is published I wouldn't normalise text in the Myanmar
script.

In a number of African languages it is useful to work with NFD data,
especially if you also want to comply with certain AAA success criteria in
WCAG 2.0.

Normalisation is critical to web content in a number of languages, and not
just in the CSS or HTML markup but in the content as well. Some content and
some tools benefit from NFC, some from NFD. I believe that normalisation
should be supported, but forcing everything into a single normalisation
form isn't optimal. Excluding normalisation isn't optimal either.

> Validator.nu already does this for HTML5, so if
> someone writes a class name with a broken text editor (i.e. one that
> doesn't normalize keyboard input to NFC), the validator can be used to
> detect the problem.

A text editor that doesn't normalise to NFC isn't broken. An ideal text
editor gives the user the choice of which normalisation form to use.
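For what it's worth, the kind of check a validator would run is tiny. A Python sketch (unicodedata.is_normalized is in the standard library from Python 3.8 on; the function name check_nfc is my own, not anything Validator.nu exposes):

```python
import unicodedata

def check_nfc(identifier: str) -> bool:
    """Return True if the identifier is already in NFC form."""
    return unicodedata.is_normalized("NFC", identifier)

# A class name saved by an editor that emits precomposed characters:
print(check_nfc("caf\u00e9"))    # True  (precomposed U+00E9)

# The same name saved with decomposed accents:
print(check_nfc("cafe\u0301"))   # False (e + combining acute U+0301)
```

A validator flagging the second case tells the author their tooling produced a non-NFC identifier, without requiring every consumer to normalise at parse time.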

Andrew

andrewc@vicnet.net.au

Received on Monday, 2 February 2009 12:55:11 UTC