Copyright © 2013 W3C® (MIT, ERCIM, Keio, Beihang), All Rights Reserved. W3C liability, trademark and document use rules apply.
CSS is a language for describing the rendering of structured documents (such as HTML and XML) on screen, on paper, in speech, etc. This module describes, in general terms, the basic structure and syntax of CSS stylesheets. It defines, in detail, the syntax and parsing of CSS - how to turn a stream of bytes into a meaningful stylesheet.
This is a public copy of the editors' draft. It is provided for discussion only and may change at any moment. Its publication here does not imply endorsement of its contents by W3C. Don't cite this document other than as work in progress.
The (archived) public mailing list www-style@w3.org (see instructions) is preferred for discussion of this specification. When sending e-mail, please put the text “css3-syntax” in the subject, preferably like this: “[css3-syntax] …summary of comment…”
This document was produced by the CSS Working Group (part of the Style Activity).
This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.
The following features are at risk: …
This section is not normative.
This module defines the abstract syntax and parsing of CSS stylesheets
and other things which use CSS syntax (such as the HTML style
attribute).
It defines algorithms for converting a stream of codepoints (in other words, text) into a stream of CSS tokens, and then further into CSS objects such as stylesheets, rules, and declarations.
This module defines the syntax and parsing of CSS stylesheets. It supersedes the lexical scanner and grammar defined in CSS 2.1.
This section is not normative.
A CSS document is a series of qualified rules, which are usually style rules that apply CSS properties to elements, and at-rules, which define special processing rules or values for the CSS document.
A qualified rule starts with a prelude then has a {}-wrapped block containing a sequence of declarations. The meaning of the prelude varies based on the context that the rule appears in - for style rules, it's a selector which specifies what elements the declarations will apply to. Each declaration has a name, followed by a colon and the declaration value, and finished with a semicolon.
A typical rule might look something like this:
p > a {
color: blue;
text-decoration: underline;
}
In the above rule, "p > a" is the selector, which, if the
source document is HTML, selects any <a> elements that
are children of a <p> element.
"color: blue;" is a declaration specifying that, for the
elements that match the selector, their ‘color’ property should have the value ‘blue’. Similarly, their ‘text-decoration’ property should have the value
‘underline’.
At-rules are all different, but they have a basic structure in common. They start with an "@" character followed by their name. Some at-rules are simple statements, with their name followed by more CSS values to specify their behavior, and finally ended by a semicolon. Others are blocks; they can have CSS values following their name, but they end with a {}-wrapped block, similar to a rule. Even the contents of these blocks are specific to the given at-rule: sometimes they contain a sequence of declarations, like a rule; other times, they may contain additional blocks, or at-rules, or other structures altogether.
Here are several examples of at-rules that illustrate the varied syntax they may contain.
@import "my-styles.css";
The ‘@import’ at-rule is a simple statement. After its name,
it takes a single string or ‘url()’ function
to indicate the stylesheet that it should import.
@page :left {
margin-left: 4cm;
margin-right: 3cm;
}
The ‘@page’ at-rule consists of an optional page selector
(the ":left" pseudoclass), followed by a block of properties that apply
to the page when printed. In this way, it's very similar to a normal
style rule, except that its properties don't apply to any
"element", but rather the page itself.
@media print {
body { font-size: 10pt }
}
The ‘@media’ at-rule begins with a media type and a list of
optional media queries. Its block contains entire rules, which are only
applied when the ‘@media’s conditions are
fulfilled.
Property names and at-rule names are always idents, which have to start with a letter or a hyphen followed by a letter, and then can contain letters, numbers, hyphens, or underscores. You can include any character at all, even ones that CSS uses in its syntax, by escaping it with a backslash (\) or by using a hexadecimal escape.
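As a rough illustration of the ident shape described above, here is a sketch in Python; note that it deliberately omits backslash escapes and non-ASCII characters, which real idents also permit:

```python
import re

# Rough sketch of the ident shape described in the prose above:
# an optional hyphen, a letter, then letters, digits, hyphens, or
# underscores. Real CSS idents also permit non-ASCII characters and
# \-escapes (e.g. "\26" decodes to "&"); those are omitted here.
IDENT_RE = re.compile(r'^-?[A-Za-z][A-Za-z0-9_-]*$')

def looks_like_ident(name: str) -> bool:
    return IDENT_RE.match(name) is not None
```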
The syntax of selectors is defined in the Selectors spec. Similarly, the syntax of the wide variety of CSS values is defined in the Values & Units spec. The special syntaxes of individual at-rules can be found in the specs that define them.
This section is not normative.
When errors occur in CSS, the parser attempts to recover gracefully, throwing away only the minimum amount of content before returning to parsing as normal. This is because errors aren't always mistakes - new syntax looks like an error to an old parser, and it's useful to be able to add new syntax to the language without worrying about stylesheets that include it being completely broken in older UAs.
The precise error-recovery behavior is detailed in the parser itself, but it's simple enough that a short description is fairly accurate:
User agents must use the parsing rules described in this specification to generate the CSSOM trees from text/css resources. Together, these rules define what is referred to as the CSS parser.
This specification defines the parsing rules for CSS documents, whether they are syntactically correct or not. Certain points in the parsing algorithm are said to be parse errors. The error handling for parse errors is well-defined: user agents must either act as described below when encountering such problems, or must abort processing at the first error that they encounter for which they do not wish to apply the rules described below.
Conformance checkers must report at least one parse error condition to the user if one or more parse error conditions exist in the document and must not report parse error conditions if none exist in the document. Conformance checkers may report more than one parse error condition if more than one parse error condition exists in the document. Conformance checkers are not required to recover from parse errors, but if they do, they must recover in the same way as user agents.
The input to the CSS parsing process consists of a stream of Unicode code points, which is passed through a tokenization stage followed by a tree construction stage. The output is a CSSStyleSheet object.
Implementations that do not support scripting do not have to actually create a CSSOM CSSStyleSheet object, but the CSSOM tree in such cases is still used as the model for the rest of the specification.
The stream of Unicode code points that comprises the input to the tokenization stage will be initially seen by the user agent as a stream of bytes (typically coming over the network or from the local file system). The bytes encode the actual characters according to a particular character encoding, which the user agent must use to decode the bytes into characters.
To decode the stream of bytes into a stream of characters, UAs must follow these steps.
The algorithms to get an encoding and decode are defined in the Encoding Standard.
First, determine the fallback encoding:
If the stream begins with the byte sequence 40 63 68 61 72 73 65 74 20 22 (not 22)* 22 3B (bytes given in hexadecimal), then get an encoding for the sequence of (not 22)* bytes, decoded per windows-1252.
Note: Anything ASCII-compatible will do, so using
windows-1252 is fine.
Note: The byte sequence above, when decoded as ASCII, is
the string "@charset "…";", where the "…" is the
sequence of bytes corresponding to the encoding's name.
If the return value was utf-16 or utf-16be,
use utf-8 as the fallback encoding; if it was anything else
except failure, use the return value as the fallback encoding.
This mimics HTML <meta> behavior.
Otherwise, get an encoding for the value of the charset attribute on the <link> element or <?xml-stylesheet?> processing instruction that caused the style sheet to be included, if any. If that does not return failure, use the return value as the fallback encoding.
Otherwise, use utf-8 as the fallback encoding.
Then, decode the byte stream using the fallback encoding.
Note: the decode algorithm lets the byte order mark (BOM) take precedence, hence the usage of the term "fallback" above.
Anne says that steps 3/4 should be an input to this algorithm from the specs that define importing stylesheet, to make the algorithm as a whole cleaner. Perhaps abstract it into the concept of an "environment charset" or something?
Should we only take the charset from the referring document if it's same-origin?
The input stream consists of the characters pushed into it as the input byte stream is decoded.
Before sending the input stream to the tokenizer, implementations must make the following character substitutions:
Implementations must act as if they used the following state machine to tokenize CSS. The state machine must start in the data state. Most states consume a single character, which may have various side-effects, and either switches the state machine to a new state to reconsume the same character, or switches it to a new state to consume the next character, or stays in the same state to consume the next character. Some states have more complicated behavior and can consume several characters before switching to another state.
The output of the tokenization step is a series of zero or more of the following tokens: ident, function, at-keyword, hash, string, bad-string, url, bad-url, delim, number, percentage, dimension, unicode-range, include-match, dash-match, prefix-match, suffix-match, substring-match, column, whitespace, cdo, cdc, colon, semicolon, comma, [, ], (, ), {, and }.
Ident, function, at-keyword, hash, string, and url tokens have a value composed of zero or more characters. Additionally, hash tokens have a type flag set to either "id" or "unrestricted". The type flag defaults to "unrestricted" if not otherwise set. Delim tokens have a value composed of a single character. Number, percentage, and dimension tokens have a representation composed of one or more characters, and a numeric value. Number and dimension tokens additionally have a type flag set to either "integer" or "number". The type flag defaults to "integer" if not otherwise set. Dimension tokens additionally have a unit composed of one or more characters. Unicode-range tokens have a range of characters.
The type flag of hash tokens is used in the Selectors syntax [SELECT]. Only hash tokens with the "id" type are valid ID selectors.
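The token fields described above might be modeled as follows; the class and field names are illustrative, not part of the specification:

```python
from dataclasses import dataclass

@dataclass
class HashToken:
    value: str = ""
    type_flag: str = "unrestricted"   # defaults to "unrestricted"

@dataclass
class NumberToken:
    representation: str = ""
    value: float = 0.0
    type_flag: str = "integer"        # "integer" or "number"

@dataclass
class DimensionToken(NumberToken):
    unit: str = ""                    # one or more characters

@dataclass
class UnicodeRangeToken:
    start: int = 0                    # range of characters, as codepoints
    end: int = 0
```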
The tokenizer state machine consists of the states defined in the following subsections.
This section is non-normative.
This section presents an informative view of the tokenizer, in the form of railroad diagrams. Railroad diagrams are more compact than a state-machine, but often easier to read than a regular expression.
These diagrams are informative and incomplete; they describe the grammar of "correct" tokens, but do not describe error-handling at all. They are provided solely to make it easier to get an intuitive grasp of the syntax of each token.
Diagrams with names in all uppercase represent tokens. The rest are productions referred to by other diagrams.
This section defines several terms used during the tokenization phase.
Consume the next input character.
Otherwise, emit a delim token with its value set to the current input character. Remain in this state.
Otherwise, emit a delim token with its value set to the current input character. Remain in this state.
Otherwise, emit a delim token with its value set to the current input character. Remain in this state.
If the input stream would start a number, reconsume the current input character, then consume a numeric token and return it.
Otherwise, emit a delim token with its value set to the current input character. Remain in this state.
If the input stream would start a number, reconsume the current input character, then consume a numeric token and return it.
Otherwise, if the input stream starts with an identifier, switch to the ident state. Reconsume the current input character.
Otherwise, if the next 2 input characters are U+002D HYPHEN-MINUS U+003E GREATER-THAN SIGN (->), consume them, emit a CDC token, and remain in this state.
Otherwise, emit a delim token with its value set to the current input character. Remain in this state.
If the input stream would start a number, reconsume the current input character, then consume a numeric token and return it.
Otherwise, emit a delim token with its value set to the current input character. Remain in this state.
Otherwise, emit a delim token with its value set to U+002F SOLIDUS (/). Remain in this state.
Otherwise, emit a delim token with its value set to U+003C LESS-THAN SIGN (<). Remain in this state.
Otherwise, emit a delim token with its value set to the current input character. Remain in this state.
Otherwise, this is a parse error. Emit a delim token with its value set to the current input character. Remain in this state.
Otherwise, emit a delim token with its value set to the current input character. Remain in this state.
Otherwise, switch to the ident state. Reconsume the current input character.
Otherwise, if the next input character is U+007C VERTICAL LINE (|), consume it and emit a column token. Remain in this state.
Otherwise, emit a delim token with its value set to the current input character. Remain in this state.
Otherwise, emit a delim token with its value set to the current input character. Remain in this state.
When this state is first entered, create a string token with its value initially set to the empty string.
Consume the next input character.
Otherwise, if the next input character is a newline, consume it. Remain in this state.
Otherwise, this is a parse error. Emit a bad-string token, then switch to the data state.
When this state is first entered, create a string token with its value initially set to the empty string.
Consume the next input character.
Otherwise, if the next input character is a newline, consume it. Remain in this state.
Otherwise, this is a parse error. Emit a bad-string token, then switch to the data state.
Consume the next input character.
Otherwise, do nothing and remain in this state.
Create a hash token. If the next three input characters would start an identifier, set the hash token's type flag to "id". Otherwise, set its type flag to "unrestricted".
Consume a sequence of name characters. Set the hash token's value to the returned sequence of characters.
Emit the hash token. Switch to the data state.
If this state emits a hash token whose value is the empty string, it's a spec or implementation error. The data validation performed in the data state should have guaranteed a non-empty value.
Consume a sequence of name characters. Create an at-keyword token and set its value to the returned sequence of characters. Emit the at-keyword token. Switch to the data state.
If this state emits an at-keyword token whose value is the empty string, it's a spec or implementation error. The data validation performed in the data state should have guaranteed a non-empty value.
Consume a sequence of name characters. Create an ident token and set its value to the returned sequence of characters.
If the next input character is not U+0028 LEFT PARENTHESIS ((), emit the ident token and switch to the data state.
Consume the LEFT PARENTHESIS. If the ident token's value is an ASCII case-insensitive match for "url", switch to the url state.
Otherwise, convert the ident token to a function token, preserving its value. Emit the function token. Switch to the data state.
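The branch just described, where an ident followed by an open parenthesis becomes either a switch to the url state or a function token, can be sketched as follows (the return values are this sketch's labels, and str.lower() approximates the ASCII case-insensitive match):

```python
def classify_after_ident(value: str, next_char: str) -> str:
    """Sketch of the ident / function / url decision described above."""
    if next_char != "(":
        return "ident"               # emit the ident token as-is
    if value.lower() == "url":       # ASCII case-insensitive match
        return "url-state"           # switch to the url state
    return "function"                # convert to a function token
```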
Consume the next input character.
Consume the next input character.
Otherwise, if the next input character is a newline, consume it and remain in this state.
Otherwise, consume an escaped character. Append the returned character to the url token's value. Remain in this state.
Consume the next input character.
Otherwise, if the next input character is a newline, consume it and remain in this state.
Otherwise, consume an escaped character. Append the returned character to the url token's value. Remain in this state.
Consume the next input character.
Otherwise, consume an escaped character. Append the returned character to the url token's value. Remain in this state.
Consume the next input character.
Consume the next input character.
Otherwise, consume an escaped character. Remain in this state.
Create a new unicode-range token with an empty range.
Consume as many hex digits as possible, but no more than 6. If fewer than 6 hex digits were consumed, consume as many U+003F QUESTION MARK (?) characters as possible, but no more than enough to make the total of hex digits and U+003F QUESTION MARK (?) characters equal to 6.
If any U+003F QUESTION MARK (?) characters were consumed, first interpret the consumed characters as a hexadecimal number, with the U+003F QUESTION MARK (?) characters replaced by U+0030 DIGIT ZERO (0) characters. This is the start of the range. Then interpret the consumed characters as a hexadecimal number again, with the U+003F QUESTION MARK (?) characters replaced by U+0046 LATIN CAPITAL LETTER F (F) characters. This is the end of the range. Set the unicode-range token's range, then emit it. Switch to the data state.
Otherwise, interpret the digits as a hexadecimal number. This is the start of the range.
Consume the next input character.
Otherwise, set the unicode-range token's range and emit it. Switch to the data state. Reconsume the current input character.
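The question-mark substitution above amounts to interpreting the consumed text twice, once with each ? as 0 and once with each ? as F; a minimal sketch:

```python
def interpret_hex_questions(text: str) -> tuple[int, int]:
    """Sketch: text is the consumed run of hex digits and '?' marks."""
    start = int(text.replace("?", "0"), 16)  # ? -> 0 gives the range start
    end = int(text.replace("?", "F"), 16)    # ? -> F gives the range end
    return start, end
```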
This section defines a number of algorithmic subroutines used by the tokenizer.
This algorithm assumes that the U+005C REVERSE SOLIDUS (\) has already been consumed and that the next input character has already been verified to not be a newline or EOF. It will return a character.
Consume the next input character.
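A sketch of the escape decoding commonly implemented for this step: up to six hex digits followed by an optional single whitespace, with any other character returning itself. The replacement of null, surrogate, and out-of-range values with U+FFFD follows later drafts of this algorithm, so treat that branch as an assumption of this sketch:

```python
def consume_escaped(s: str) -> tuple[str, int]:
    """s begins just after the backslash. Returns (character, chars consumed)."""
    HEX = "0123456789abcdefABCDEF"
    if s and s[0] in HEX:
        i = 0
        while i < len(s) and i < 6 and s[i] in HEX:
            i += 1
        digits = s[:i]
        if i < len(s) and s[i] in " \t\n":   # one trailing whitespace is eaten
            i += 1
        cp = int(digits, 16)
        if cp == 0 or 0xD800 <= cp <= 0xDFFF or cp > 0x10FFFF:
            return "\uFFFD", i               # assumption: replacement character
        return chr(cp), i
    # Any other character is returned as-is.
    return s[0], 1
```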
This algorithm can be called explicitly with two characters, or can be called with the input stream itself. In the latter case, the two characters in question are the current input character and the next input character, in that order. This algorithm does not consume any characters.
If the first character is not U+005C REVERSE SOLIDUS (\), return false.
Otherwise, if the second character is a newline or EOF character, return false.
Otherwise, return true.
This algorithm can be called explicitly with three characters, or can be called with the input stream itself. In the latter case, the three characters in question are the current input character and the next two input characters, in that order. This algorithm does not consume any characters.
Look at the first character:
This algorithm consumes a sequence of name characters and escape characters, and returns the sequence after decoding escapes.
Initialize result to an empty sequence of characters.
Repeat until directed to stop:
Return result.
This algorithm can be called explicitly with three characters, or can be called with the input stream itself. In the latter case, the three characters in question are the current input character and the next two input characters, in that order. This algorithm does not consume any characters.
Look at the first character:
Otherwise, if the second character is a U+002E FULL STOP (.) and the third character is a digit, return true.
Otherwise, return false.
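The remaining cases of this check follow the same pattern; here is a sketch covering all three characters (the digit and full-stop branches are inferred from the tokenizer's number grammar):

```python
DIGITS = "0123456789"

def starts_number(c1: str, c2: str, c3: str) -> bool:
    """Sketch of the 'would start a number' three-character check."""
    if c1 in "+-":
        # a sign, then either a digit or a full stop followed by a digit
        return c2 in DIGITS or (c2 == "." and c3 in DIGITS)
    if c1 == ".":
        return c2 in DIGITS
    return c1 in DIGITS
```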
This algorithm is only invoked when the next three input characters would start a number. It returns one token, which may be a number, a dimension, or a percentage.
Create a number token with its representation initially set to the empty string and its type flag initially set to "integer".
If the next input character is U+002B PLUS SIGN (+) or U+002D HYPHEN-MINUS (-), consume it and append it to the number token's representation.
If the next input character is now U+002E FULL STOP (.), consume it, append it to the number token's representation, and set the number token's type flag to "number".
At this point, if the next input character is not a digit, that indicates an error either in the spec or the implementation.
While the next input character is a digit, consume it and append it to the number token's representation.
If the number token's type flag is "integer", and the next 2 input characters are now U+002E FULL STOP (.) followed by a digit, then:
If the next input character is now U+0045 LATIN CAPITAL LETTER E (E) or U+0065 LATIN SMALL LETTER E (e), and the second next input character is a digit, then:
Set the number token's value from its representation.
At this point the number token is fully constructed.
If the next 3 input characters would start an identifier, then:
Otherwise, if the next input character is U+0025 PERCENT SIGN (%), consume it. Convert the number token into a percentage token, preserving its value and representation. Return the percentage token.
Otherwise, return the number token.
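The representation built by the steps above matches a simple regular expression; a compact sketch (the function name and return shape are this sketch's, not the specification's):

```python
import re

# Mirrors the consume-a-number steps: optional sign, digits,
# optional fraction, optional exponent with optional sign.
NUMBER_RE = re.compile(r'[+-]?\d*\.?\d+(?:[eE][+-]?\d+)?')

def consume_number(s: str) -> tuple[str, str]:
    """Returns (representation, type_flag) for the number at the start of s."""
    rep = NUMBER_RE.match(s).group(0)
    flag = "number" if ("." in rep or "e" in rep or "E" in rep) else "integer"
    return rep, flag
```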
Divide the number token's representation into seven components, in order from left to right:
At least one of the integer part and fractional part will be nonempty. If the number token's type flag is "integer", the fractional part and exponent will be empty.
Set the number token's value to s × (i + f·10^(−d)) × 10^(t·e). (Recall that 10^0 = 1.) This calculation is to be performed as if all variables were mathematical real numbers (i.e. with infinite precision and range). When the result is mathematically an integer, it must be represented exactly, within at least the range representable by a 32-bit two's-complement signed integer. Otherwise, the result must be represented with at least the range and precision of an IEEE 754 single-precision floating-point number.
Defer to Values and Units on numeric accuracy?
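As a worked check of the formula above, using exact rational arithmetic (the argument values in the tests are the hypothetical seven-component parses of "-12.5e2" and ".25e-1"):

```python
from fractions import Fraction

def css_number_value(s: int, i: int, f: int, d: int, t: int, e: int) -> Fraction:
    """value = s * (i + f*10^(-d)) * 10^(t*e), computed exactly."""
    return s * (Fraction(i) + Fraction(f, 10 ** d)) * Fraction(10) ** (t * e)
```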
This section describes how to set a unicode-range token's range so that the range it describes is within the supported range of unicode characters.
It assumes that the start of the range has been defined, the end of the range might be defined, and both are non-negative integers.
If the start of the range is greater than the maximum allowed codepoint, the unicode-range token's range is empty.
If the end of the range is defined, and it is less than the start of the range, the unicode-range token's range is empty.
If the end of the range is not defined, the unicode-range token's range is the single character whose codepoint is the start of the range.
Otherwise, if the end of the range is greater than the maximum allowed codepoint, change it to the maximum allowed codepoint. The unicode-range token's range is all characters between the character whose codepoint is the start of the range and the character whose codepoint is the end of the range.
This section is non-normative.
Note that the point of this spec is to match reality; changes from CSS2.1's tokenizer are nearly always because the tokenizer specified something that doesn't match actual browser behavior, or left something unspecified. If some detail doesn't match browsers, please let me know as it's almost certainly unintentional.
The input to the parsing stage is a stream or list of tokens from the tokenization stage. The output depends on how the parser is invoked, as defined by the entry points listed later in this section. The parser output can consist of at-rules, qualified rules, and/or declarations.
The parser's output is constructed according to the fundamental syntax of CSS, without regard for the validity of any specific item. Implementations may check the validity of items as they are returned by the various parser algorithms and treat the algorithm as returning nothing if the item was invalid according to the implementation's own grammar knowledge, or may construct a full tree as specified and "clean up" afterwards by removing any invalid items.
The items that can appear in the tree are a mixture of basic tokens and new objects:
This specification places no limits on what an at-rule's value may contain. Individual at-rules must define whether they accept a value, and if so, how to parse it (preferably using one of the parser algorithms or entry points defined in this specification).
Most qualified rules will be style rules, where the prelude is a selector.
Should we go ahead and generalize the important flag to be a list of bang values? Suggested by Zack Weinberg.
The non-preserved tokens listed above are always consumed into higher-level objects, either functions or simple blocks, and so never appear in any parser output themselves.
This section is non-normative.
This section presents an informative view of the parser, in the form of railroad diagrams. Railroad diagrams are more compact than a state-machine, but often easier to read than a regular expression.
These diagrams are informative and incomplete; they describe the grammar of "correct" stylesheets, but do not describe error-handling at all. They are provided solely to make it easier to get an intuitive grasp of the syntax.
The algorithms defined in this specification can be invoked in multiple ways to convert a stream of text into various CSS concepts.
All of the algorithms defined in this section begin in the parser. It is assumed that the input preprocessing and tokenization steps have already been completed, resulting in a stream of tokens.
Other specs can define additional entry points for their own purposes.
The following notes should probably be translated into normative text in the relevant specs, hooking this spec's terms:
CSSStyleSheet#insertRule method, and similar
functions which might exist, which parse text into a single rule.
style
attribute, which parses text into the contents of a single style rule.
Are there any other things somewhere where some tech (that isn't straight CSS itself) needs to parse some text into CSS?
All of the algorithms defined in this spec may be called with either a list of tokens or a list of component values. Either way produces an identical result.
To parse a stylesheet from a stream of tokens:
To parse a rule from a stream of tokens:
Otherwise, if the current input token is an at-keyword token, consume an at-rule.
Otherwise, consume a qualified rule. If nothing was returned, return a syntax error.
To parse a list of declarations:
To parse a list of component values:
To parse a comma-separated list of component values:
The following algorithms comprise the parser. They are called by the parser entry points above.
These algorithms may be called with a list of either tokens or component values. (The difference being that some tokens are replaced by functions and simple blocks in a list of component values.) Similar to how the input stream returned EOF characters to represent when it was empty during the tokenization stage, the lists in this stage must return an EOF token when the next token is requested but they are empty.
An algorithm may be invoked with a specific list, in which case it consumes only that list (and when that list is exhausted, it begins returning EOF tokens). Otherwise, it is implicitly invoked with the same list as the invoking algorithm.
Create an initially empty list of rules.
Repeatedly consume the next input token:
Otherwise, reconsume the current input token. Consume a qualified rule. If anything is returned, append it to the list of rules.
Create a new at-rule with its name set to the value of the current input token, its prelude initially set to an empty list, and its value initially set to nothing.
Repeatedly consume the next input token:
Create a new qualified rule with its prelude initially set to an empty list, and its value initially set to nothing.
Repeatedly consume the next input token:
Create an initially empty list of declarations.
Repeatedly consume the next input token:
Create a new declaration with its name set to the value of the current input token.
Repeatedly consume whitespace tokens until a non-whitespace token is reached. If this token is anything but a colon token, this is a parse error. Return nothing.
Otherwise, repeatedly consume a component value from the next input token until an EOF token is reached, appending all of the returned values up to that point to the declaration's value.
If the last two non-whitespace tokens in the declaration's value are a delim token with the value "!" followed by an ident token with a value that is an ASCII case-insensitive match for "important", remove them from the declaration's value and set the declaration's important flag to true.
Return the declaration.
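The !important check above might look like this over a simplified declaration value modeled as a list of (type, text) pairs; the tuple representation is this sketch's, not the specification's:

```python
def extract_important(value):
    """Returns (new_value, important_flag) per the check described above."""
    nonws = [i for i, (kind, _) in enumerate(value) if kind != "whitespace"]
    if (len(nonws) >= 2
            and value[nonws[-2]] == ("delim", "!")
            and value[nonws[-1]][0] == "ident"
            and value[nonws[-1]][1].lower() == "important"):
        # Drop the "!" and everything after it from the value.
        return value[:nonws[-2]], True
    return value, False
```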
This section describes how to consume a component value.
If the current input token is a {, [, or ( token, consume a simple block and return it.
Otherwise, if the current input token is a function token, consume a function and return it.
Otherwise, return the current input token.
This section describes how to consume a simple block.
The ending token is the mirror variant of the current input token. (E.g. if it was called with [, the ending token is ].)
Create a simple block with its associated token set to the current input token.
Repeatedly consume the next input token and process it as follows:
This section describes how to consume a function.
Create a function with a name equal to the value of the current input token, and with a value which is initially an empty list.
Repeatedly consume the next input token and process it as follows:
This section is non-normative.
Note that the point of this spec is to match reality; changes from CSS2.1's Core Grammar are nearly always because the Core Grammar specified something that doesn't match actual browser behavior, or left something unspecified. If some detail doesn't match browsers, please let me know as it's almost certainly unintentional.
Several things in CSS, such as the ‘:nth-child()’ pseudoclass, need to indicate indexes in
a list. The An+B microsyntax is useful for
this, allowing an author to easily indicate single elements or all
elements at regularly-spaced intervals in a list.
The An+B notation defines an integer step (A) and offset (B), and represents the An+Bth elements in a list, for every positive integer or zero value of n, with the first element in the list having index 1 (not 0).
For values of A and B greater than 0, this effectively divides the list into groups of A elements (the last group taking the remainder), and selects the Bth element of each group.
The An+B notation also accepts the
‘even’ and ‘odd’
keywords, which have the same meaning as ‘2n’
and ‘2n+1’, respectively.
Examples:
2n+0  /* represents all of the even elements in the list */
even  /* same */
4n+1  /* represents the 1st, 5th, 9th, 13th, etc. elements in the list */
The values of A and B can be negative, but only the positive results of An+B, for n ≥ 0, are used.
Example:
-n+6 /* represents the first 6 elements of the list */
If both A and B are 0, the pseudo-class represents no element in the list.
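All of the cases above (positive, negative, and zero values of A and B) reduce to checking whether (index − B) is a non-negative multiple of A; a sketch with 1-based indexes:

```python
def anb_matches(index: int, a: int, b: int) -> bool:
    """True if index == a*n + b for some integer n >= 0 (index is 1-based)."""
    if a == 0:
        return index == b           # only the single Bth element (if any)
    n, remainder = divmod(index - b, a)
    return remainder == 0 and n >= 0
```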
This section is non-normative.
When A is 0, the An part may be
omitted (unless the B part is already
omitted). When An is not included and B is non-negative, the ‘+’ sign before B (when
allowed) may also be omitted. In this case the syntax simplifies to just
B.
Examples:
0n+5  /* represents the 5th element in the list */
5     /* same */
When A is 1 or -1, the 1 may
be omitted from the rule.
Examples:
The following notations are therefore equivalent:
1n+0  /* represents all elements in the list */
n+0   /* same */
n     /* same */
If B is 0, then every Ath element is picked. In such a case, the +B (or -B) part may be omitted unless the A part is already omitted.
Examples:
2n+0  /* represents every even element in the list */
2n    /* same */
Whitespace is permitted on either side of the ‘+’ or ‘-’ that separates the
An and B parts when both are
present.
Valid Examples with white space:
3n + 1
+3n - 2
-n+ 6
+6
Invalid Examples with white space:
3 n
+ 2n
+ 2
The <an+b> type
The An+B notation was originally defined using a slightly different tokenizer than the rest of CSS, resulting in a somewhat odd definition when expressed in terms of CSS tokens. This section describes how to recognize the An+B notation in terms of CSS tokens (thus defining the <an+b> type for CSS grammar purposes), and how to interpret the CSS tokens to obtain values for A and B.
The <an+b> type is defined as:
<an+b> = odd | even | <integer> |
  <n-dimension> | '+'? n | -n |
  <ndashdigit-dimension> | '+'? <ndashdigit-ident> | <dashndashdigit-ident> |
  <n-dimension> <signed-integer> | '+'? n <signed-integer> | -n <signed-integer> |
  <n-dimension> ['+' | '-'] <signless-integer> |
  '+'? n ['+' | '-'] <signless-integer> |
  -n ['+' | '-'] <signless-integer>
where:
<n-dimension> is a DIMENSION token with its type flag set to "integer", and a unit that is an ASCII case-insensitive match for "n"
<ndashdigit-dimension> is a DIMENSION token with its type flag set to "integer", and a unit that is an ASCII case-insensitive match for "n-*", where "*" is a series of one or more digits
<ndashdigit-ident> is an IDENT token whose representation is an ASCII case-insensitive match for "n-*", where "*" is a series of one or more digits
<dashndashdigit-ident> is an IDENT token whose representation is an ASCII case-insensitive match for "-n-*", where "*" is a series of one or more digits
<integer> is a NUMBER token with its type flag set to "integer"
<signed-integer> is a NUMBER token with its type flag set to "integer", and whose representation starts with "+" or "-"
<signless-integer> is a NUMBER token with its type flag set to "integer", and whose representation starts with a digit
The clauses of the production are interpreted as follows:
‘odd’
A is 2, B is 1.
‘even’
A is 2, B is 0.
<integer>
A is 0, B is the integer's value.
<n-dimension>
'+'? n
-n
A is the dimension's value, 1, or -1, respectively; B is 0.
<ndashdigit-dimension>
'+'? <ndashdigit-ident>
A is the dimension's value or 1, respectively; B is the dimension's unit or the ident's representation, respectively, with the first character removed and the remainder interpreted as a base-10 number. B is negative.
<dashndashdigit-ident>
A is -1; B is the ident's representation with the first two characters removed and the remainder interpreted as a base-10 number. B is negative.
<n-dimension> <signed-integer>
'+'? n <signed-integer>
-n <signed-integer>
A is the dimension's value, 1, or -1, respectively; B is the integer's value.
<n-dimension> ['+' | '-'] <signless-integer>
'+'? n ['+' | '-'] <signless-integer>
-n ['+' | '-'] <signless-integer>
A is the dimension's value, 1, or -1, respectively; B is the integer's value. If a ‘-’ was provided between the two, B is instead the negation of the integer's value.
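The interpretation rules above can be sketched in Python. This is a simplified, non-normative sketch: it parses the serialized text form rather than operating on CSS tokens, so it does not model tokenizer subtleties (such as "n-1" arriving as a single dimension token); the function name `parse_anb` is invented here:

```python
import re

def parse_anb(text):
    """Parse the serialized An+B microsyntax into an (A, B) pair."""
    s = text.strip().lower()
    if s == "odd":
        return (2, 1)
    if s == "even":
        return (2, 0)
    # <A>n <sign> <B>, where A may be a signed integer or a bare sign,
    # and the whole "<sign> <B>" part is optional.
    m = re.fullmatch(r"(?:([+-]?\d+)|([+-])?)n\s*(?:([+-])\s*(\d+))?", s)
    if m:
        a_digits, a_sign, b_sign, b_digits = m.groups()
        if a_digits is not None:
            a = int(a_digits)
        else:
            a = -1 if a_sign == "-" else 1  # bare "n" or "-n"
        b = 0
        if b_digits is not None:
            b = int(b_digits)
            if b_sign == "-":
                b = -b
        return (a, b)
    # A lone <integer>: A is 0, B is the integer's value.
    if re.fullmatch(r"[+-]?\d+", s):
        return (0, int(s))
    raise ValueError("invalid <an+b>: %r" % text)

print(parse_anb("2n+1"))   # (2, 1)
print(parse_anb("-n+6"))   # (-1, 6)
print(parse_anb("5"))      # (0, 5)
```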
This specification does not define how to serialize CSS in general, leaving that task to the CSSOM and individual feature specifications. However, there is one important facet that must be specified here regarding comments, to ensure accurate "round-tripping" of data from text to CSS objects and back.
The tokenizer described in this specification does not produce tokens for comments, or otherwise preserve them in any way. Implementations may preserve the contents of comments and their location in the token stream. If they do, this preserved information must have no effect on the parsing step, but must be serialized in its position as "/*" followed by its contents followed by "*/".
If the implementation does not preserve comments, it must insert the text "/**/" between the serialization of adjacent tokens when the two tokens form one of the following pairs:
The preceding pairs of tokens can only be adjacent due to comments in the original text, so the above rule reinserts the minimum number of comments into the serialized text to ensure an accurate round-trip. (Roughly. The delim token rules are slightly too powerful, for simplicity.)
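As a toy illustration of this rule (not an implementation of the full pair table, and the function name `serialize` is invented here): a number token followed directly by an ident token would, if naively concatenated, re-tokenize as a single dimension token, so an empty comment must be inserted between them.

```python
def serialize(tokens):
    """Join serialized tokens, inserting an empty comment between pairs
    that would otherwise merge when re-tokenized. This toy version
    checks only one such pair: a number followed by an ident."""
    out = []
    for i, tok in enumerate(tokens):
        if i > 0 and tokens[i - 1][0].isdigit() and tok[0].isalpha():
            out.append("/**/")  # keep "12" and "px" from merging into "12px"
        out.append(tok)
    return "".join(out)

print(serialize(["12", "px"]))      # "12/**/px" -- round-trips as two tokens
print(serialize(["12", "+", "3"]))  # "12+3" -- no comment needed
```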
Define this.
Conformance requirements are expressed with a combination of descriptive assertions and RFC 2119 terminology. The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in the normative parts of this document are to be interpreted as described in RFC 2119. However, for readability, these words do not appear in all uppercase letters in this specification.
All of the text of this specification is normative except sections explicitly marked as non-normative, examples, and notes. [RFC2119]
Examples in this specification are introduced with the words “for
example” or are set apart from the normative text with
class="example", like this:
This is an example of an informative example.
Informative notes begin with the word “Note” and are set apart from
the normative text with class="note", like this:
Note, this is an informative note.
Conformance to CSS Syntax Module Level 3 is defined for three conformance classes:
A style sheet is conformant to CSS Syntax Module Level 3 if it is syntactically valid according to this module.
A renderer is conformant to CSS Syntax Module Level 3 if it parses a stylesheet according to this module.
An authoring tool is conformant to CSS Syntax Module Level 3 if it writes style sheets that are syntactically valid according to this module.
Thanks for feedback and contributions from David Baron, 呂康豪 (Kang-Hao Lu), and Simon Sapin.
<dashndashdigit-ident>, 6.2.
<integer>, 6.2.
<ndashdigit-dimension>, 6.2.
<ndashdigit-ident>, 6.2.
<n-dimension>, 6.2.
<signed-integer>, 6.2.
<signless-integer>, 6.2.
| Property | Values | Initial | Applies to | Inh. | Percentages | Media |
|---|---|---|---|---|---|---|