This document builds on Character Model for the World Wide Web 1.0: Fundamentals [[!CHARMOD]] to provide authors of specifications, software developers, and content developers with a common reference on string identity matching on the World Wide Web and thereby increase interoperability.
This version of the document represents a significant change from the previous edition. Much of the content is changed and the recommendations are significantly altered. This fact is reflected in a change to the name of the document from "Character Model: Normalization".
The goal of the Character Model for the World Wide Web is to facilitate use of the Web by all people, regardless of their language, script, writing system, and cultural conventions, in accordance with the W3C goal of universal access. One basic prerequisite to achieve this goal is to be able to transmit and process the characters used around the world in a well-defined and well-understood way.
This document builds on Character Model for the World Wide Web: Fundamentals [[!CHARMOD]]. Understanding the concepts in that document is important to being able to understand and apply this document successfully.
This part of the Character Model for the World Wide Web covers string matching—the process by which a specification or implementation defines whether two string values are the same or different from one another. It describes the ways in which texts that are semantically equivalent can be encoded differently and the impact this has on matching operations important to formal languages (such as those used in the formats and protocols that make up the Web). Finally, it discusses the problem of substring searching within documents.
The main target audience of this specification is W3C specification developers. This specification and parts of it can be referenced from other W3C specifications and it defines conformance criteria for W3C specifications, as well as other specifications.
Other audiences of this specification include software developers, content developers, and authors of specifications outside the W3C. Software developers and content developers implement and use W3C specifications. This specification defines some conformance criteria for implementations (software) and content that implement and use W3C specifications. It also helps software developers and content developers to understand the character-related provisions in W3C specifications.
The character model described in this specification provides authors of specifications, software developers, and content developers with a common reference for consistent, interoperable text manipulation on the World Wide Web. Working together, these three groups can build a globally accessible Web.
This document defines two basic building blocks for the Web related to this problem. First, it defines rules and processes for String Identity Matching in document formats. These rules are designed for the identifiers and structural markup used in document formats, to ensure consistent processing of each, and are targeted at specification writers. Second, it defines broader guidelines for handling user-visible text (the "Shakespeare" in the example below), such as the natural language text that forms most of the content of the Web. This part is targeted at implementers.
This document is divided into three main sections.
The first section lays out the problems involved in string matching, describes the effects of Unicode and of case folding on these problems, and outlines the various normalization mechanisms that might be used to address them.
The second section provides requirements and recommendations for string identity matching for use in formal languages, such as many of the document formats defined in W3C specifications. It is primarily concerned with making the Web functional and providing document authors with consistent results.
The third section discusses considerations for the handling of content by implementations, such as browsers or text editors on the Web. It mainly addresses how and why to preserve the author's original character sequences and how to search or find content in natural language text.
This section provides some historical background on the topics addressed in this specification.
At the core of the character model is the Universal Character Set (UCS), defined jointly by the Unicode Standard [[!UNICODE]] and ISO/IEC 10646 [[!ISO10646]]. In this document, Unicode is used as a synonym for the Universal Character Set. A successful character model allows Web documents authored in the world's writing systems, scripts, and languages (and on different platforms) to be exchanged, read, and searched by the Web's users around the world.
The first few chapters of the Unicode Standard [[!UNICODE]] provide useful background reading.
For information about the requirements that informed the development of important parts of this specification, see Requirements for String Identity Matching and String Indexing [[CHARREQ]].
This section contains terminology and notation specific to this document.
The Web is built on text-based formats and protocols. In order to describe string matching or searching effectively, it is necessary to establish terminology that allows us to talk about the different kinds of text within a given format or protocol, as the requirements and details vary significantly.
Unicode code points are denoted as U+hhhh, where hhhh is a sequence of at least four and at most six hexadecimal digits. For example, the character € EURO SIGN has the code point U+20AC.
Some characters that are used in the various examples might not appear as intended unless you have the appropriate font. Care has been taken to ensure that the examples nevertheless remain understandable.
A legacy character encoding is a character encoding not based on the Unicode character set.
A grapheme is a sequence of one or more characters that form a single user-perceived "character". Unicode Standard Annex #29: Text Segmentation defines a unit called the grapheme cluster, which is intended to approximately match user-perceived graphemes. A grapheme cluster roughly corresponds to the user's perception of where the character boundaries occur in visually rendered text, rather than to the Unicode code points used to encode the text in a string. A discussion of grapheme clusters is given at the end of Section 2.10 of the Unicode Standard [[!UNICODE]]; a formal definition is given in Unicode Standard Annex #29 [[!UTR29]]. What the Unicode Standard actually defines is default grapheme clustering; some languages require tailoring of this default. For example, a Slovak user might wish to treat the default pair of grapheme clusters "ch" as a single grapheme cluster. Note that the interaction between the language of the string content and the end-user's preferences might be complex.
This document uses the word grapheme as a shorter synonym for a grapheme cluster. References to a grapheme or graphemes refer to user-perceived text units as described above.
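The following JavaScript sketch illustrates the difference between code points and default grapheme clusters using the Intl.Segmenter API available in modern engines; the sample strings and the countGraphemes function are illustrative only:

// Count default grapheme clusters rather than code points.
const segmenter = new Intl.Segmenter(undefined, { granularity: "grapheme" });

function countGraphemes(text) {
  // Each segment produced by the segmenter is one default grapheme cluster.
  return [...segmenter.segment(text)].length;
}

const decomposed = "A\u030A\u0301";       // A + COMBINING RING ABOVE + COMBINING ACUTE ACCENT
console.log([...decomposed].length);      // 3 code points
console.log(countGraphemes(decomposed));  // 1 user-perceived character

const family = "👨\u200D👩\u200D👧";       // emoji sequence joined by ZERO WIDTH JOINER
console.log([...family].length);          // 5 code points
console.log(countGraphemes(family));      // 1 grapheme cluster (with current segmentation data)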
Natural language content refers to the language-bearing content in a document and not to any of the surrounding markup or identifiers that form part of the document structure. You can think of it as the actual "content" of the document or the "message" in a given protocol. Note that the natural language content can include items such as the document title ("Much Ado About Nothing") as well as prose content within the document.
Markup is any text in a document format or
protocol that belongs to the structure or language of the format or protocol. This
definition can include values that are not typically thought of as
"markup", such as the name of a field in an HTTP header, as well as
all of the characters that form the structure of a format or protocol.
For example, <
or >
are part of the
markup in an HTML document.
Markup usually is defined by a specification or specifications and includes both the defined, reserved keywords for the given protocol or format as well as string tokens, identifiers, and enumerated values that are defined by document authors to form the structure of the document (rather than the "content" of the document).
XML [[XML10]] defines specific elements, attributes, and values
that are reserved across all XML documents. Thus, the word encoding
has a defined meaning inside the XML document declaration: it is a
reserved name. XML also allows a user to define elements and
attributes for a given document using a DTD. In a document that uses
a DTD that defines an element called <muffin>
,
"muffin" is a part of the markup.
A resource is a given document, file, or protocol "message" which includes both the natural language content and the markup, such as identifiers, surrounding or containing it. For example, in an HTML document that also has some CSS and a few script tags with embedded JavaScript, the entire HTML document, considered as a file, is the resource.
A vocabulary provides the list of reserved names as well as the set of rules and specifications controlling how user values (such as identifiers) can be assigned in a format or protocol. This can include restrictions on range, order, or type of characters that can appear in different places. For example, HTML defines the names of its elements and attributes, which defines the "vocabulary" of HTML markup. ECMAScript restricts the range of characters that can appear at the start or in the body of an identifier or variable name (while different rules apply to the values of, say, string literals).
<html>
<head>
<title>Shakespeare</title>
</head>
<body>
<img src="shakespeare.jpg"
alt="William Shakespeare" id="shakespeare image">
<p>What&rsquo;s in a name? That which we
call a rose by any other name would smell as sweet.</p>
</body>
</html>
Examples: Text with a gray background is markup. Text in blue is natural language content. Text in magenta indicates user values.
All of the text above (all text in a text file) makes up a
resource. It's possible that a given resource will contain no
natural language content at all (consider an HTML document
consisting of four empty div
elements styled to be
orange rectangles). It's also possible that a resource will contain
no markup and consist solely of natural language content:
for example, a plain text file with a soliloquy from Hamlet
in it. Notice too that the HTML entity &rsquo; appears in the natural language content and belongs to both the natural language content and the markup in this resource.
This specification places conformance criteria on specifications, on software (implementations) and on Web content. To aid the reader, all conformance criteria are preceded by '[X]' where 'X' is one of 'S' for specifications, 'I' for software implementations, and 'C' for Web content. These markers indicate the relevance of the conformance criteria and allow the reader to quickly locate relevant conformance criteria by searching through this document.
Specifications conform to this document if they:
do not violate any conformance criteria preceded by [S] where the imperative is MUST or MUST NOT,
document the reason for any deviation from criteria where the imperative is SHOULD, SHOULD NOT, or RECOMMENDED,
make it a conformance requirement for implementations to conform to this document,
make it a conformance requirement for content to conform to this document.
Software conforms to this document if it does not violate any conformance criteria preceded by [I].
Content conforms to this document if it does not violate any conformance criteria preceded by [C].
NOTE: Requirements placed on specifications might indirectly cause requirements to be placed on implementations or content that claim to conform to those specifications.
Where this specification contains a procedural description, it is to be understood as a way to specify the desired external behavior. Implementations can use other means of achieving the same results, as long as observable behavior is not affected.
The Web is primarily made up of document formats and protocols based on
character data. These formats or protocols can be viewed as a set of
text files ("resourcesremove quotes, add termref link") that include some form of structural markup.
Processing such markup or document data requires string-based operations
such as matching, indexing, searching, sorting, regular expression
matching, and so forth. As a result, the Web is sensitive to the
different ways in which text might be represented in a document—and
there are many ways in which text can vary in its representation or
encoding. Failing to consider the different ways in which the same text can be
represented or encoded can confuse users or cause unexpected or
frustrating results.
Some scripts and writing systems make a distinction between UPPER, lower, and Title case characters. Examples of such scripts include the Latin script used in the majority of this document, as well as scripts such as Greek, Armenian, or Cyrillic. Most scripts, such as the Brahmic scripts of India, the Arabic script, and the scripts used to write Chinese, Japanese, or Korean, do not have a case distinction.
Some document formats or protocols seek to aid interoperability or
provide an aid to content authors by ignoring case variations in the
vocabulary they define or in user-defined values permitted by the
format or protocol. For example, this occurs when matching class names in an HTML document against selectors in its associated style sheet. Consider this HTML fragment:
<style type="text/css">
SPAN.h\e9llo {
color: red;
}
</style>
<span class="héllo">Hello World!</span>
The SPAN in the stylesheet matches the span element in the document, even though one is uppercase and the other is not.
The process of making identical two texts that differ in case but are otherwise "the same" is called case folding. Case folding might, at first, appear simple. However, there are variations that need to be considered when treating the full range of Unicode across diverse languages. For more information, Section 5.18 of the Unicode Standard [[!UNICODE]] discusses case folding in detail.
Case folding in Unicode has a number of side-effects or potential side-effects on the processing of a resource. One is that case folding may not preserve the length of the original text: some mappings increase or decrease the total number of characters needed. In addition, case folding removes information from a string which cannot be recovered later.
Another aspect of case folding is that it can be language sensitive. Unicode defines default case mappings for each encoded character, but these are only defaults and are not appropriate in all cases. Some languages need case folding to be tailored to meet specific linguistic needs. One common example of this is the Turkic languages written in the Latin script.
The Turkish word "Diyarbakır" contains both the dotted and dotless
letters "i". When rendered into upper case, this word appears like
this: "DİYARBAKIR". Notice that the ASCII letter "i" maps to U+0130
(LATIN CAPITAL LETTER I WITH DOT ABOVE
), while the
letter "ı" (U+0131 LATIN SMALL LETTER DOTLESS I
) maps
to the ASCII uppercase "I".
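In JavaScript, the difference between the default case mapping and a Turkish-tailored mapping can be seen with toUpperCase and toLocaleUpperCase; this sketch is illustrative only:

const word = "Diyarbakır";
console.log(word.toUpperCase());            // "DIYARBAKIR" — the default mapping loses the i/ı distinction
console.log(word.toLocaleUpperCase("tr"));  // "DİYARBAKIR" — Turkish tailoring maps i to İ (U+0130)

// The same tailoring matters when lowercasing for comparison:
console.log("DİYARBAKIR".toLocaleLowerCase("tr")); // "diyarbakır"
console.log("DİYARBAKIR".toLowerCase());           // "di̇yarbakir" — the dot is retained as U+0307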
Case-sensitive matching: code points are compared directly with no case folding. Case-sensitive matching is RECOMMENDED as the default for any new protocol or format.
Case-sensitive matching is the easiest to implement and introduces the least potential for confusion, since it generally consists of a comparison of the underlying Unicode code point sequence. Because it is not affected by considerations such as language-specific case mappings, it produces the least surprise for document authors that have included words (such as the Turkish example above) in their markup.
In contrast, the different forms of case-insensitive matching are useful in contexts where case may vary in a way that is not semantically meaningful or in which case distinctions cannot be controlled by the user. This is particularly true when searching a document, but it also applies when defining rules for matching user- or content-generated values, such as identifiers. When defining a vocabulary, one important consideration is whether the values are restricted to the ASCII subset of Unicode or whether they permit characters (such as accented Latin letters or a broad range of Unicode including non-Latin scripts) that potentially have a more complex case folding.
ASCII case-insensitive matching compares a sequence of code points as if all ASCII code points in the range 0x41 to 0x5A (A to Z) were mapped to the corresponding code points in the range 0x61 to 0x7A (a to z). When a vocabulary is itself constrained to ASCII, ASCII case-insensitive matching can be required. However, if the vocabulary is not restricted to ASCII or permits user-defined values that use a broader range of Unicode, ASCII case-insensitive matching must not be required.
Unicode case-insensitive matching compares a sequence of code points as if one of the Unicode-defined, language-independent default case folding forms (see [[!UNICODE]], Section 5.18) had been applied to both input sequences. These forms are described in more detail below. Note that the Unicode case folding mappings are somewhat different from the simple uppercase and lowercase conversions provided by many programming environments.
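The following JavaScript sketch contrasts ASCII case-insensitive matching with an approximation of Unicode case-insensitive matching; JavaScript does not expose the Unicode case folding tables directly, so toLowerCase is used here as an approximation, and the function names are illustrative:

// ASCII case-insensitive: only A–Z and a–z are folded together.
function asciiCaseInsensitiveEquals(a, b) {
  const fold = (s) => s.replace(/[A-Z]/g, (c) => String.fromCharCode(c.charCodeAt(0) + 0x20));
  return fold(a) === fold(b);
}

// Approximation of Unicode case-insensitive matching; toLowerCase differs from
// full (C+F) case folding for some characters, such as U+00DF ß, whose case folding is "ss".
function unicodeCaseInsensitiveEquals(a, b) {
  return a.toLowerCase() === b.toLowerCase();
}

console.log(asciiCaseInsensitiveEquals("SPAN", "span"));     // true
console.log(asciiCaseInsensitiveEquals("héllo", "HÉLLO"));   // false — É is outside the ASCII range
console.log(unicodeCaseInsensitiveEquals("héllo", "HÉLLO")); // true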
Language-sensitive case-insensitive matching is useful in the rare case where a document format or protocol contains information about the language of the markup and where language-sensitive case folding might sensibly be applied. In these cases, tailoring of the Unicode case-fold mappings above to match the expectations of that language SHOULD be specified and applied. These tailored case-fold mappings are defined in the Common Locale Data Repository (CLDR) [[UAX35]] project of the Unicode Consortium.
However, language-sensitive case-insensitive matching in document formats and protocols is NOT RECOMMENDED, because language information can be hard to obtain, verify, or manage, and the resulting operations can produce results that frustrate users.
Other kinds of variation can occur in Unicode text: some graphemes can be represented by several different Unicode code point sequences. Consider the character Ǻ U+01FA LATIN CAPITAL LETTER A WITH RING ABOVE AND ACUTE. Here are some of the different ways that an HTML document could represent this character:
Character | Code Points | Description |
Ǻ | U+01FA | A "precomposed" character |
Ǻ | A + ̊ (U+030A) + ́ (U+0301) | A "base" letter "A" with two combining marks |
Ǻ | Å (U+00C5) + ́ (U+0301) | An accented letter (U+00C5) with a combining mark |
Ǻ | Å (U+212B) + ́ (U+0301) | A compatibility character (U+212B ANGSTROM SIGN) with a combining mark |
Ǻ | A (U+FF21) + ̊ (U+030A) + ́ (U+0301) | A compatibility character (U+FF21 FULLWIDTH LATIN CAPITAL LETTER A) with combining marks |
Each of the above sequences represents the same apparent semantic meaning (the character LATIN CAPITAL LETTER A WITH RING ABOVE AND ACUTE), but each one is encoded slightly differently. More variations are possible but are omitted for brevity.
Because applications need to find the semantic equivalence in texts that use different code point sequences, Unicode defines a means of making two semantically equivalent texts identical: the Unicode Normalization Forms [[!UAX15]].
Document formats and protocols are often susceptible to the effects of these variations because their specifications and implementations on the Web generally do not require Unicode normalization of the text, nor do they apply it in the string matching algorithms used when processing the markup and content later. For this reason, content developers need to ensure that they have provided a consistent representation in order to avoid problems later.
However, it can be difficult for content developers to ensure that a given resource or set of resources uses a consistent textual representation, because the differences are usually not visible when the resource is viewed as text. Tools and implementations therefore need to consider the difficulties experienced by users when visually or logically equivalent strings that "ought to" match (in the user's mind) are treated as distinct values. Providing a means for users to see these differences and/or normalize them as appropriate makes it possible for end users to avoid failures that spring from invisible differences in their source documents. For example, the W3C Validator warns when an HTML document is not fully in Unicode Normalization Form C.
Unicode defines two types of equivalence between characters: canonical equivalence and compatibility equivalence.
Canonical equivalence is a fundamental equivalence between Unicode characters or sequences of Unicode characters that represent the same abstract character. When correctly displayed, these should always have the same visual appearance and behavior. Generally speaking, two canonically equivalent Unicode texts should be considered identical as text. Canonical decomposition removes these primary distinctions between two texts.
Canonical Equivalence | | | |
Combining sequence | Ç (U+00C7) | ↔ | C (U+0043) + ◌̧ (U+0327) |
Ordering of combining marks | q + ̇ (U+0307) + ̣ (U+0323) | ↔ | q + ̣ (U+0323) + ̇ (U+0307) |
Hangul | 가 (U+AC00) | ↔ | ᄀ (U+1100) + ᅡ (U+1161) |
Singleton | Ω (U+2126 OHM SIGN) | ↔ | Ω (U+03A9 GREEK CAPITAL LETTER OMEGA) |
Compatibility equivalence is a weaker equivalence between characters or sequences of characters that represent the same abstract character but may have a different visual appearance or behavior. Generally, a compatibility decomposition removes formatting variations, such as superscript, subscript, rotated, circled, and so forth, but other variations also occur. In many cases, characters with compatibility decompositions represent a distinction of a semantic nature; replacing the use of distinct characters with their compatibility decomposition can therefore cause problems. Texts that are equivalent after compatibility decomposition often were not perceived as being identical beforehand and usually should not be treated as equivalent by a formal language.
The following table illustrates various kinds of compatibility equivalence in Unicode:
Compatibility Equivalence | ||||
Font variants | ℌ | ℍ | ||
Non-breaking | U+00A0 NO-BREAK SPACE | | |
Presentation forms of Arabic (initial, medial, final, isolated) | ﻨ | ﻧ | ﻦ | ﻥ |
Circled | ① | |||
East Asian Width, size, rotated presentation forms | カ | カ | ︷ | { |
Superscripts/subscripts | ⁹ | ₉ | ||
"Squared" characters | ㌀ | |||
Fractions | ¼ | |||
Others | dž |
In the above table, it is important to note that the characters illustrated are actual Unicode codepoints. They were encoded into Unicode for compatibility with various legacy character encodings. They should not be confused with the normal kinds of presentational processing used on their non-compatibility counterparts.
For example, most Arabic-script text uses the characters in the Arabic script block of Unicode (around U+0600). The actual glyphs used in the text are selected using fonts and text processing logic based on the position inside a word (initial, medial, final, or isolated), in a process called "shaping". In the table above, the four presentation forms of the Arabic letter NOON are shown. The characters shown are compatibility characters from the Arabic Presentation Forms-B block (U+FE70–U+FEFF), each of which represents a specific positional shape, and each of the four code points shown has a compatibility decomposition to the "regular" Arabic letter NOON (U+0646).
Similarly, the variations in East Asian width and the rotated bracket (for use in vertical text) are encoded as separate code points.
In the case of characters with compatibility decompositions, such as those shown above, the "K" Unicode Normalization forms convert the text to the "normal" or "expected" Unicode code point. But the existence of these compatibility characters cannot be taken to imply that similar appearance variations produced in the normal course of text layout and presentation are affected by Unicode Normalization. They are not.
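The effect of compatibility normalization can be seen with the normalize method in JavaScript; the following values are illustrative:

console.log("\u210C".normalize("NFKC")); // ℌ BLACK-LETTER CAPITAL H becomes "H"
console.log("\u2460".normalize("NFKC")); // ① CIRCLED DIGIT ONE becomes "1"
console.log("\u3300".normalize("NFKC")); // ㌀ SQUARE APAATO becomes "アパート"
console.log("\uFF76".normalize("NFKC")); // ｶ HALFWIDTH KATAKANA LETTER KA becomes "カ"
console.log("\u2460".normalize("NFC"));  // the canonical forms leave "①" unchanged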
These two types of Unicode-defined equivalence are then grouped by another pair of variations: "decomposition" and "composition". In "decomposition", separable logical parts of a visual character are broken out into a sequence of base characters and combining marks and the resulting code points are put into a fixed, canonical order. In "composition", the decomposition is performed and then any combining marks are recombined, if possible, with their base characters. Note that this does not mean that all of the combining marks have been removed from the resulting normalized text.
The Unicode Normalization Forms are named using letter codes, with 'C' standing for Composition, 'D' for Decomposition, and 'K' for Compatibility decomposition. Having converted a resource to a sequence of Unicode characters and unescaped any escape sequences, we can finally "normalize" the Unicode texts given in the example above. Here are the resulting sequences in each Unicode Normalization form for the U+01FA example given earlier:
Original Codepoints | NFC | NFD | NFKC | NFKD |
Ǻ (U+01FA) | U+01FA | U+0041 U+030A U+0301 | U+01FA | U+0041 U+030A U+0301 |
Ǻ (U+00C5 U+0301) | U+01FA | U+0041 U+030A U+0301 | U+01FA | U+0041 U+030A U+0301 |
Ǻ (U+212B U+0301) | U+01FA | U+0041 U+030A U+0301 | U+01FA | U+0041 U+030A U+0301 |
Ǻ (U+0041 U+030A U+0301) | U+01FA | U+0041 U+030A U+0301 | U+01FA | U+0041 U+030A U+0301 |
Ǻ (U+FF21 U+030A U+0301) | U+FF21 U+030A U+0301 | U+FF21 U+030A U+0301 | U+01FA | U+0041 U+030A U+0301 |
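The table above can be reproduced in JavaScript with the normalize method; this sketch is illustrative:

// The five encodings of Ǻ from the example above.
const variants = [
  "\u01FA",               // precomposed
  "\u00C5\u0301",         // Å + combining acute
  "\u212B\u0301",         // ANGSTROM SIGN + combining acute
  "\u0041\u030A\u0301",   // A + combining ring above + combining acute
  "\uFF21\u030A\u0301",   // FULLWIDTH A + combining marks
];

for (const v of variants) {
  console.log(v.normalize("NFC") === "\u01FA",   // true for all but the fullwidth variant
              v.normalize("NFKC") === "\u01FA"); // true for all five
}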
Unicode Normalization reduces these (and other potential sequences representing the same character) to just three possible variations. However, Unicode Normalization doesn't remove all textual distinctions, and sometimes the application of Unicode Normalization can remove meaning that is distinctive or meaningful in a given context. For example:
U+3002 IDEOGRAPHIC FULL STOP is used as a "period" at the end of sentences in languages such as Chinese or Japanese. However, it is not considered equivalent to the ASCII "period" character U+002E FULL STOP.
8½ (including the character U+00BD VULGAR FRACTION ONE HALF), when normalized using one of the "compatibility" normalization forms, becomes a character sequence that looks more like 81/2.
Given that there are many character sequences that content authors or applications could choose when inputting or exchanging text, and that, when providing text in a normalized form, there are different options for the normalization form to be used, what form is most appropriate for content on the Web?
For use on the Web, it is important not to lose compatibility distinctions, which are often important to the content (see [[UNICODE-XML]] Chapter 5 for a discussion). The NFKD and NFKC normalization forms are therefore excluded. Among the remaining two forms, NFC has the advantage that almost all legacy data (if transcoded trivially, one-to-one, to a Unicode encoding), as well as data created by current software, is already in this form; NFC also has a slight compactness advantage and is a better match to user expectations with respect to the character vs. grapheme issue. This document therefore recommends, when possible, that all content be stored and exchanged in Unicode Normalization Form C (NFC).
Roughly speaking, NFC is defined such that each combining character sequence (a base character followed by one or more combining characters) is replaced, as far as possible, by a canonically equivalent precomposed character. (Note that a number of precomposed characters are excluded from composition and are therefore not produced by NFC.) Text in a Unicode encoding form is said to be in NFC if it doesn't contain any combining sequence that could be so replaced and if any remaining combining sequences are in canonical order.
Document formats or protocols often provide escaping mechanisms to permit the inclusion of characters that are otherwise difficult to input, process, or encode. These escape mechanisms provide an additional, equivalent means of representing characters inside a given resource. They also allow for the encoding of Unicode characters not represented in the character encoding scheme used by the document, or can simply be a convenience for the author.
For example, € (U+20AC EURO SIGN
) can also be encoded in
HTML as the hexadecimal entity €
or as the
decimal entity €
. In a JavaScript or JSON
file, it can appear as \u20ac
while in a CSS stylesheet
it can appear as \20ac
. All of these representations
encode the same literal character value: "€".
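The sketch below, which assumes a browser environment for the HTML case, shows that each of these escapes yields the same character once it has been interpreted:

const literal = "€";
const jsEscape = "\u20ac";                      // JavaScript string escape
const fromJson = JSON.parse('"\\u20ac"');       // escape interpreted by a JSON parser
const fromHtml = new DOMParser()
  .parseFromString("<p>&#x20ac;</p>", "text/html")
  .body.textContent;                            // HTML numeric character reference

console.log(literal === jsEscape);              // true
console.log(literal === fromJson);              // true
console.log(literal === fromHtml);              // true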
Character escapes are normally interpreted before a document is processed and strings within the format or protocol are matched. Returning to an example we used above:
<style type="text/css"> span.h\e9llo { color: red; } </style> <span class="héllo">Hello World!</span>
You would expect that text to display (in red) like the following: Hello World!
In order for this to work, the user-agent (browser) had to match two
strings representing the class name héllo
, even though
the CSS and HTML each used a different escaping mechanism. The above
fragment demonstrates one way that text can vary and still be
considered "the same" according to a specification: the class name h\e9llo
matched the class name h&eacute;llo in the HTML mark-up
(and would also match the literal value héllo
using the
code point U+00E9
).
Unicode provides a number of special-purpose control characters and invisible markers that help document authors control the appearance or behavior of text. In poorly implemented applications, these characters interfere with string matching when they are not semantically part of the text but do form part of the encoded character sequence.
ZWJ and ZWNJ are a special case: these invisible controls sometimes do affect meaning.
Examples of these include:
Characters | Description | Examples |
ZWJ, ZWNJ, ZWSP, CGJ, etc. | zero width characters used to join or separate words or graphemes and which are common in languages that do not use spaces between words or for which the renderer needs assistance in composing characters | |
variation selectors | characters used to select an alternate appearance or glyph (see [[CHARMOD]]). These are used in predefined ideographic variation sequences (IVS) as well as generally for certain scripts (such as Mongolian). They are also used to select between black-and-white and color emoji. | |
RLI, LRI, etc. | invisible bidirectional format controls (isolates, embeddings, and marks) used to control the display ordering of text | |
Applications that do string matching SHOULD ignore Unicode formatting controls such as variation selectors; grapheme or word joiners; or other non-semantic controls.
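A minimal, illustrative JavaScript sketch of this behavior is shown below; the set of characters removed is an example only, and real implementations need to take care because ZWJ and ZWNJ can carry meaning in some languages and in emoji sequences:

function stripIgnorableControls(s) {
  // U+200B–U+200D (ZWSP, ZWNJ, ZWJ), U+2060 WORD JOINER, U+034F CGJ,
  // U+FE00–U+FE0F variation selectors, U+2066–U+2069 bidi isolates.
  return s.replace(/[\u200B-\u200D\u2060\u034F\uFE00-\uFE0F\u2066-\u2069]/g, "");
}

const withSelector = "漢\uFE00字";                            // ideograph followed by a variation selector
console.log(withSelector === "漢字");                         // false
console.log(stripIgnorableControls(withSelector) === "漢字"); // true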
Content on the Web is best encoded in one of the Unicode encoding forms (usually UTF-8), in which case most of the issues described in this section do not arise. However, resources can use different character encoding schemes, including legacy character encodings, to serialize document formats on the Web. Each character encoding scheme uses different byte values and sequences to represent a given subset of the Universal Character Set.
For example, € (U+20AC EURO SIGN
) is encoded as 0x80
in the windows-1252
encoding, but as the byte sequence 0xE2.82.AC
in UTF-8
.
Specifications mainly address these resulting variations by considering each document to be a sequence of Unicode characters after converting from the document's character encoding (be it a legacy character encoding or a Unicode encoding such as UTF-8) and then unescaping any character escapes before proceeding to process the document.
The following paragraphs about normalization transcoders are "at risk". The WG feels that this requirement is difficult for content authors or implementers to verify. Needed action: verify if all of [[Encoding]] spec's transcoders are normalizing.
Even within a single legacy character encoding there can
be variations in implementation. One famous example is the legacy
Japanese encoding Shift_JIS
. Different transcoder
implementations faced choices about how to map specific byte sequences
to Unicode. So the byte sequence 0x81.60
(0x2141
in the JIS X 0208 character set) was mapped by some implementations to
U+301C WAVE DASH
while others chose U+FF5E FULL
WIDTH TILDE
. This means that two reasonable, self-consistent,
transcoders could produce different Unicode character sequences from
the same input. The [[Encoding]] specification exists, in part, to
ensure that Web implementations use interoperable and identical
mappings. However, extant transcoders might be applied to documents
found on the Web.
For content authors and implementations, it is RECOMMENDED that conversions from legacy character encodings use a "normalizing transcoder".
A normalizing transcoder is a transcoder that converts from a legacy character encoding to a Unicode encoding form and ensures that the result is in Unicode Normalization Form C. For most legacy character encodings, it is possible to construct a normalizing transcoder (by using any transcoder followed by a normalizer); it is not possible to do so if the encoding's repertoire contains characters not represented in Unicode.
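In a browser environment, a normalizing transcoder can be approximated by decoding with a TextDecoder (defined by the [[Encoding]] specification) and then normalizing the result to NFC; the following sketch is illustrative:

function decodeToNFC(bytes, encodingLabel) {
  const decoded = new TextDecoder(encodingLabel).decode(bytes);
  return decoded.normalize("NFC");
}

// 0xE9 is "é" in windows-1252; the decoded and normalized result is the single code point U+00E9.
console.log(decodeToNFC(new Uint8Array([0xE9]), "windows-1252")); // "é"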
This chapter defines the implementation and requirements for string matching in markup.
This section defines the algorithm for matching strings. String identity matching MUST be performed as if the following steps were followed:
Expansion of all character escapes and includes.
The expansion of character escapes and includes is dependent on
context, that is, on which markup or programming language is
considered to apply when the string matching operation is
performed. Consider a search for the string 'suçon' in an XML document containing 'su&#xE7;on' but not 'suçon'. If the search is performed in a plain text editor, the context is plain text (no markup or programming language applies); the &#xE7; character escape is not recognized, hence not expanded, and the search fails. If the search is performed in an XML browser, the context is XML; the character escape (defined by XML) is expanded and the search succeeds.
An intermediate case would be an XML editor that purposefully provides a view of an XML document with entity references left unexpanded. In that case, a search over that pseudo-XML view will deliberately not expand entities: in that particular context, entity references are not considered includes and need not be expanded
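This context dependence can be illustrated with the following sketch, which assumes a browser environment and uses the hexadecimal character reference &#xE7; for the ç:

const source = "<p>su&#xE7;on</p>";

// Plain-text context: the character escape is not recognized, so a literal search fails.
console.log(source.includes("suçon"));   // false

// XML context: the parser expands the character reference before matching.
const text = new DOMParser().parseFromString(source, "application/xml").documentElement.textContent;
console.log(text === "suçon");           // true — identical code point sequences after expansion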
In the Web environment, where strings can be represented in multiple character encodings, including some character encodings that allow multiple representations of the same text, it is important to establish a consistent process for evaluating string identity.
One main consideration in string identity matching is whether the comparison is case sensitive or case insensitive.
[S] Case-sensitive matching is RECOMMENDED as the default for new protocols and formats. It is the easiest to implement, does not depend on language-specific mappings, and produces the most predictable results for document authors.
However, cases exist in which case-insensitivity is desirable.
Where case-insensitive matching is desired, there are several
implementation choices that a formal language needs to consider. If
the vocabulary of strings to be compared is limited to the Basic Latin
(ASCII) subset of Unicode, ASCII case-insensitive matching MAY be used, because case folding within the ASCII range is simple and language-independent.
If the vocabulary of strings to be compared is not limited, then ASCII case-insensitive matching MUST NOT be used. Unicode case-insensitive matching MUST be applied, even if the vocabulary does not allow the full range of Unicode.
Unicode case-insensitive matching can take several
forms. Unicode defines the "common" (C) casefoldings for characters
that always have 1:1 mappings of the character to its case folded form
and this covers the majority of characters that have a case folding. A
few characters in Unicode have a 1:many case folding. This 1:many
mapping is called the "full" (F) case fold mapping. For compatibility
with certain types of implementation, Unicode also defines a "simple"
(S) case fold that is always 1:1.
Because the "simple" case-fold mapping removes information that can be important to forming an identity match, the "Common plus Full" (or "Unicode C+F") case fold mapping is RECOMMENDED for Unicode case-insensitive matching.
A vocabulary is considered to be "ASCII-only" if and only if all tokens and identifiers are defined by the specification directly and these identifiers or tokens use only the Basic Latin subset of Unicode. If user-defined identifiers are permitted, the full range of Unicode characters (limited, as appropriate, for security or interchange concerns, see [[UTR36]]) SHOULD be allowed and Unicode case insensitivity used for identity matching.
ASCII case-insensitive matching MUST only be applied to vocabularies that are restricted to ASCII. Unicode case-insensitivity MUST be used for all other vocabularies.
Note that an ASCII-only vocabulary can exist inside a document format or protocol that allows a larger range of Unicode in identifiers or values.
For example, CSS property names and pre-defined keyword values are matched ASCII case-insensitively, while user-defined values, such as class names in HTML, can contain a broader range of Unicode characters and are matched case-sensitively.
These requirements pertain to the authoring and creation of documents and are intended as guidelines for resource authors.
[C] Resources SHOULD be produced, stored, and exchanged in Unicode Normalization Form C (NFC) unless there is a specific reason for deviating, and any such deviation should be kept as localized as possible.
In order to be processed correctly a resource must use a consistent sequence of code points to represent text. While content can be in any normalization form or may use a de-normalized (but valid) Unicode character sequence, inconsistency of representation will cause implementations to treat the different sequence as "different". The best way to ensure consistent selection, access, extraction, processing, or display is to always use NFC.
[I] Implementations MUST NOT normalize any resource during processing, storage, or exchange except with explicit permission from the user.
[I] Implementations which transcode text from a legacy character encoding to a Unicode encoding form SHOULD use a normalizing transcoder that produces Unicode Normalization Form C (NFC).
[C] Authors SHOULD NOT include combining marks without a preceding base character in a resource.
There can be exceptions to this, for example, when making a list of characters (such as in a Unicode demo). Following this practice avoids problems with unintentional display or with naive implementations that combine the combining mark with adjacent markup or other natural language content. For example, if you were to use U+0301 as the start of a "class" attribute value in HTML, the class name might not display properly in your editor.
[C] Identifiers SHOULD use consistent case (upper, lower, mixed case) to facilitate matching, even if case-insensitive matching is supported by the format or implementation.
These requirements pertain to specifications for document formats or programming/scripting languages and their implementations.
[S] Specifications of text-based formats and protocols MAY specify that all or part of the textual content of that format or protocol is normalized using Unicode Normalization Form C (NFC).
Specifications are generally discouraged from requiring formats or protocols to store or exchange data in a normalized form unless there are specific, clear reasons why the additional requirement is necessary. As many document formats on the Web do not require normalization, content authors might occasionally rely on denormalized character sequences and a normalization step could negatively affect such content.
Requiring NFC requires additional care on the part of the specification developer, as content on the Web generally is not in a known normalization state. Boundary and error conditions for denormalized content need to be carefully considered and well specified in these cases.
[S][I] Specifications and implementations that define string matching as part of the definition of a format, protocol, or formal language (which might include operations such as parsing, matching, tokenizing, etc.) MUST define the criteria and matching forms used. These MUST be one of the matching forms described in this document: case-sensitive matching, ASCII case-insensitive matching, or Unicode case-insensitive matching.
[S] Specifications SHOULD NOT specify case-insensitive comparison of strings.
[S] Specifications that specify case-insensitive comparison for non-ASCII vocabularies SHOULD specify Unicode case-folding C+F.
In some limited cases, locale- or language-specific tailoring might also be appropriate. However, such cases are generally linked to natural language processing operations. Because they produce potentially different results from the generic case folding rules, these should be avoided in formal languages, where predictability is at a premium.
[S] Specifications MAY specify ASCII case-insensitive comparison for portions of a format or protocol that are restricted to an ASCII-only vocabulary.
This requirement applies to formal languages whose keywords are all ASCII and which do not allow user-defined names or identifiers. An example of this is HTML, which defines the use of ASCII case-insensitive comparison for element and attribute names defined by the HTML specification.
[S][I] Specifications and implementations MUST NOT specify ASCII-only case-insensitive matching for values or constructs that permit non-ASCII characters.
The following requirements pertain to any specification that specifies explicitly that normalization is not to be applied automatically to content (which SHOULD include all new specifications):
[S] Specifications that do not normalize MUST document or provide a health-warning if canonically equivalent but disjoint Unicode character sequences represent a security issue.
[S][I] Specifications and implementations MUST NOT assume that content is in any particular normalization form.
The normalization form or lack of normalization for any given content has to be considered intentional in these cases.
[S][I] For vocabularies and values that are not restricted to Basic Latin (ASCII), case-insensitive matching MUST specify either Unicode C+F or locale-sensitive string comparison.
[I] Implementations MUST NOT alter the normalization form of content being exchanged, read, parsed, or processed except when required to do so as a side-effect of transcoding the content to a Unicode character encoding, as content might depend on the de-normalized representation.
[S] Specifications MUST specify that string matching takes the form of "code point-by-code point" comparison of the Unicode character sequence, or, if a specific Unicode character encoding is specified, code unit-by-code unit comparison of the sequences.
[S][I] Specifications that define a regular expression syntax MUST provide at least Basic Unicode Level 1 support per [[!UTS18]] and SHOULD provide Extended or Tailored (Levels 2 and 3) support.
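For example, ECMAScript regular expressions provide Unicode property escapes and code point semantics via the u flag, which covers much of the Level 1 functionality described in [[!UTS18]]; the sketch below is illustrative:

console.log("Ǻngström 漢字".match(/\p{Letter}+/gu)); // ["Ǻngström", "漢字"]

// Without the u flag, the pattern operates on UTF-16 code units rather than code points.
console.log(/^.$/.test("😀"));   // false — the emoji is two code units
console.log(/^.$/u.test("😀"));  // true — one code point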
For specifications of text-based formats and protocols that define Unicode Normalization as a requirement, the following requirements apply:
[S] Specifications of text-based formats and protocols that, as part of their syntax definition, require that the text be in normalized form MUST define string matching in terms of normalized string comparison and MUST define the normalized form to be NFC.
[S] [I] A normalizing text-processing component which receives suspect text MUST NOT perform any normalization-sensitive operations unless it has first either confirmed through inspection that the text is in normalized form or it has re-normalized the text itself. Private agreements MAY, however, be created within private systems which are not subject to these rules, but any externally observable results MUST be the same as if the rules had been obeyed.
[I] A normalizing text-processing component which modifies text and performs normalization-sensitive operations MUST behave as if normalization took place after each modification, so that any subsequent normalization-sensitive operations always behave as if they were dealing with normalized text.
[S] Specifications of text-based languages and protocols SHOULD define precisely the construct boundaries necessary to obtain a complete definition of full normalization. These definitions SHOULD include at least the boundaries between markup and character data, as well as entity boundaries (if the language has any include mechanism); SHOULD include any other boundary that may create denormalization when instances of the language are processed; but SHOULD NOT include character escapes designed to express arbitrary characters.
[I] Authoring tool implementations for a formal language that does not mandate full-normalization SHOULD either prevent users from creating content with composing characters at the beginning of constructs that may be significant, such as at the beginning of an entity that will be included, immediately after a construct that causes inclusion or immediately after markup, or SHOULD warn users when they do so.
[S] Where operations can produce denormalized output from normalized text input, specifications of API components (functions/methods) that implement these operations MUST define whether normalization is the responsibility of the caller or the callee. Specifications MAY state that performing normalization is optional for some API components; in this case the default SHOULD be that normalization is performed, and an explicit option SHOULD be used to switch normalization off. Specifications SHOULD NOT make the implementation of normalization optional.
[S] Specifications that define a mechanism (for example an API or a defining language) for producing textual data object SHOULD require that the final output of this mechanism be normalized.
Many Web implementations and applications have a different sort of string matching requirement from the one described above: the need for users to search documents for particular words or phrases of text. This section addresses the various considerations that an implementer might need to consider when implementing natural language text processing on the Web other than that mandated by a formal language or document format.
There are several different kinds of string searching.
When you are using a search engine, you are generally using a form of full text search. Full text search generally breaks natural language text into word segments and may apply complex processing to get at the semantic "root" values of words. For example, if the user searches for "run", you might want to find words like "running", "ran", or "runs" in addition to the actual search term "run". This process, naturally, is sensitive to language, context, and many other aspects of textual variation. It is also beyond the scope of this document.
Another form of string searching, which we'll concern ourselves with here, is sub-string matching or "find" operations. This is the direct searching of the body or "corpus" of a document with the user's input. Find operations can have different options or implementation details, such as the addition or removal of case sensitivity, or whether the feature supports different aspects of a regular expression language or "wildcards".
This section was identified as a new area needing document as part of the overall rearchitecting of the document. The text here is incomplete and needs further development. Contributions from the community are invited.
Searching content (one example is using the "find" command in your browser) generates different user expectations and thus has different requirements from the need for absolute identity matching needed by document formats and protocols. Searching text has different contextual needs and often provides different features.
One description of Unicode string searching can be found in Section 8 (Searching and Matching) of [[UTS10]].
One of the primary considerations for string searching is that, quite often, the user's input is not identical to the way that the text is encoded in the text being searched. Users generally expect matching to be more "promiscuous", particularly when they don't add additional effort to their input. For example, they expect a term entered in lowercase to match uppercase equivalents. Conversely, when the user expends more effort on the input—by using the shift key to produce uppercase or by entering a letter with diacritics instead of just the base letter—they expect their search results to match (only) their more-specific input.
This effect might vary depending on context as well. For example, a person using a physical keyboard may have direct access to accented letters, while a virtual or on-screen keyboard may require extra effort to access and select the same letters.
Consider a document containing these strings: "re-resume", "RE-RESUME", "re-résumé", and "RE-RÉSUMÉ".
In the table below, the user's input (on the left) might be considered a match for the above items as follows:
User Input | Matched Strings |
---|---|
e (lowercase 'e') | "re-resume", "RE-RESUME", "re-résumé", and "RE-RÉSUMÉ" |
E (uppercase 'E') | "RE-RESUME" and "RE-RÉSUMÉ" |
é (lowercase 'e' with acute accent) | "re-résumé" and "RE-RÉSUMÉ" |
É | "RE-RÉSUMÉ" |
In addition to variations of case or the use of accents, Unicode also has an array of canonical equivalents or compatibility characters (as described in the sections above) that might impact string searching.
For example, consider the letter "K". Characters with a compatibility
mapping to U+004B LATIN CAPITAL LETTER K
include:
Other differences include Unicode Normalization forms (or lack thereof). There are also ignorable characters (such as the variation selectors), whitespace differences, bidirectional controls, and other code points that can interfere with a match.
Users might also expect certain kinds of equivalence to be applied to matching. For example, a Japanese user might expect that hiragana, katakana, and half-width compatibility katakana equivalents all match each other (regardless of which is used to perform the selection or encoded in the text).
When searching text, the concept of "grapheme boundaries" and
"user-perceived characters" can be important. See Section 3 of Character
Model for the World Wide Web: Fundamentals [[!CHARMOD]] for a
description. For example, if the user has entered a capital "A" into a
search box, should the software find the character À (U+00C0 LATIN CAPITAL LETTER A WITH GRAVE)? What about the character "A" followed by U+0300 COMBINING GRAVE ACCENT? What about writing systems, such as Devanagari, which use combining marks to suppress or express certain vowels?
The following changes have been made since the Working Draft of 2014-07-15:
The W3C Internationalization Working Group and Interest Group, as well as others, provided many comments and suggestions. The Working Group would like to thank: Mati Allouche, John Klensin, and all of the CharMod contributors over the many years of this document's development.
The previous version of this document was edited by: