W3C

CSS Syntax Module Level 3

Editor's Draft

This version:
http://dev.w3.org/csswg/css3-syntax/
Editor's draft:
http://dev.w3.org/csswg/css3-syntax/
Previous version:
http://www.w3.org/TR/2003/WD-css3-syntax-20030813/
Issue Tracking:
W3C Bugzilla
Feedback:
www-style@w3.org with subject line “[css-syntax] … message topic …” (archives)
Editors:
(Google, Inc.),

Abstract

CSS is a language for describing the rendering of structured documents (such as HTML and XML) on screen, on paper, in speech, etc. This module describes, in general terms, the basic structure and syntax of CSS stylesheets. It defines, in detail, the syntax and parsing of CSS - how to turn a stream of bytes into a meaningful stylesheet.

Status of this document

This is a public copy of the editors' draft. It is provided for discussion only and may change at any moment. Its publication here does not imply endorsement of its contents by W3C. Don't cite this document other than as work in progress.

The (archived) public mailing list www-style@w3.org (see instructions) is preferred for discussion of this specification. When sending e-mail, please put the text “css3-syntax” in the subject, preferably like this: “[css3-syntax] …summary of comment…”

This document was produced by the CSS Working Group (part of the Style Activity).

This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.

The following features are at risk: …

Table of contents

  • 1. Introduction
  • 2. Description of CSS's Syntax
  • 3. Tokenizing and Parsing CSS
  • 4. Tokenization
  • 5. Parsing
  • 6. The An+B microsyntax
  • 7. Serialization
  • 8. Conformance
  • Acknowledgments
  • References
  • Index
  • Property index

    1. Introduction

    This section is not normative.

    This module defines the abstract syntax and parsing of CSS stylesheets and other things which use CSS syntax (such as the HTML style attribute).

    It defines algorithms for converting a stream of codepoints (in other words, text) into a stream of CSS tokens, and then further into CSS objects such as stylesheets, rules, and declarations.

    1.1. Module interactions

    This module defines the syntax and parsing of CSS stylesheets. It supersedes the lexical scanner and grammar defined in CSS 2.1.

    2. Description of CSS's Syntax

    This section is not normative.

    A CSS document is a series of qualified rules, which are usually style rules that apply CSS properties to elements, and at-rules, which define special processing rules or values for the CSS document.

    A qualified rule starts with a prelude then has a {}-wrapped block containing a sequence of declarations. The meaning of the prelude varies based on the context that the rule appears in - for style rules, it's a selector which specifies what elements the declarations will apply to. Each declaration has a name, followed by a colon and the declaration value, and finished with a semicolon.

    A typical rule might look something like this:

    p > a {
    	color: blue;
    	text-decoration: underline;
    }

    In the above rule, "p > a" is the selector, which, if the source document is HTML, selects any <a> elements that are children of a <p> element.

    "color: blue;" is a declaration specifying that, for the elements that match the selector, their ‘color’ property should have the value ‘blue’. Similarly, their ‘text-decoration’ property should have the value ‘underline’.

    At-rules are all different, but they have a basic structure in common. They start with an "@" character followed by their name. Some at-rules are simple statements, with their name followed by more CSS values to specify their behavior, and finally ended by a semicolon. Others are blocks; they can have CSS values following their name, but they end with a {}-wrapped block, similar to a rule. Even the contents of these blocks are specific to the given at-rule: sometimes they contain a sequence of declarations, like a rule; other times, they may contain additional blocks, or at-rules, or other structures altogether.

    Here are several examples of at-rules that illustrate the varied syntax they may contain.

    @import "my-styles.css";

    The ‘@import’ at-rule is a simple statement. After its name, it takes a single string or ‘url()’ function to indicate the stylesheet that it should import.

    @page :left {
    	margin-left: 4cm;
    	margin-right: 3cm;
    }

    The ‘@page’ at-rule consists of an optional page selector (the ":left" pseudo-class), followed by a block of properties that apply to the page when printed. In this way, it's very similar to a normal style rule, except that its properties don't apply to any "element", but rather the page itself.

    @media print {
    	body { font-size: 10pt }
    }

    The ‘@media’ at-rule begins with a media type and a list of optional media queries. Its block contains entire rules, which are only applied when the ‘@media’s conditions are fulfilled.

    Property names and at-rule names are always idents, which have to start with a letter or a hyphen followed by a letter, and then can contain letters, numbers, hyphens, or underscores. You can include any character at all, even ones that CSS uses in its syntax, by escaping it with a backslash (\) or by using a hexadecimal escape.

    The syntax of selectors is defined in the Selectors spec. Similarly, the syntax of the wide variety of CSS values is defined in the Values & Units spec. The special syntaxes of individual at-rules can be found in the specs that define them.

    2.1. Error Handling

    This section is not normative.

    When errors occur in CSS, the parser attempts to recover gracefully, throwing away only the minimum amount of content before returning to parsing as normal. This is because errors aren't always mistakes - new syntax looks like an error to an old parser, and it's useful to be able to add new syntax to the language without worrying about stylesheets that include it being completely broken in older UAs.

    The precise error-recovery behavior is detailed in the parser itself, but it's simple enough that a short description is fairly accurate:

    3. Tokenizing and Parsing CSS

    User agents must use the parsing rules described in this specification to generate the CSSOM trees from text/css resources. Together, these rules define what is referred to as the CSS parser.

    This specification defines the parsing rules for CSS documents, whether they are syntactically correct or not. Certain points in the parsing algorithm are said to be parse errors. The error handling for parse errors is well-defined: user agents must either act as described below when encountering such problems, or must abort processing at the first error that they encounter for which they do not wish to apply the rules described below.

    Conformance checkers must report at least one parse error condition to the user if one or more parse error conditions exist in the document and must not report parse error conditions if none exist in the document. Conformance checkers may report more than one parse error condition if more than one parse error condition exists in the document. Conformance checkers are not required to recover from parse errors, but if they do, they must recover in the same way as user agents.

    3.1. Overview of the Parsing Model

    The input to the CSS parsing process consists of a stream of Unicode code points, which is passed through a tokenization stage followed by a tree construction stage. The output is a CSSStyleSheet object.

    Implementations that do not support scripting do not have to actually create a CSSOM CSSStyleSheet object, but the CSSOM tree in such cases is still used as the model for the rest of the specification.

    3.2. The input byte stream

    The stream of Unicode code points that comprises the input to the tokenization stage will be initially seen by the user agent as a stream of bytes (typically coming over the network or from the local file system). The bytes encode the actual characters according to a particular character encoding, which the user agent must use to decode the bytes into characters.

    To decode the stream of bytes into a stream of characters, UAs must follow these steps.

    The algorithms to get an encoding and decode are defined in the Encoding Standard.

    First, determine the fallback encoding:

    1. If HTTP or equivalent protocol defines an encoding (e.g. via the charset parameter of the Content-Type header), get an encoding for the specified value. If that does not return failure, use the return value as the fallback encoding.
    2. Otherwise, check the byte stream. If the first several bytes match the hex sequence
      40 63 68 61 72 73 65 74 20 22 (not 22)* 22 3B
      then get an encoding for the sequence of (not 22)* bytes, decoded per windows-1252.

      Note: Anything ASCII-compatible will do, so using windows-1252 is fine.

      Note: The byte sequence above, when decoded as ASCII, is the string "@charset "…";", where the "…" is the sequence of bytes corresponding to the encoding's name.

      If the return value was utf-16 or utf-16be, use utf-8 as the fallback encoding; if it was anything else except failure, use the return value as the fallback encoding.

      This mimics HTML <meta> behavior.

    3. Otherwise, get an encoding for the value of the charset attribute on the <link> element or <?xml-stylesheet?> processing instruction that caused the style sheet to be included, if any. If that does not return failure, use the return value as the fallback encoding.
    4. Otherwise, if the referring style sheet or document has an encoding, use that as the fallback encoding.
    5. Otherwise, use utf-8 as the fallback encoding.

    Then, decode the byte stream using the fallback encoding.

    Note: the decode algorithm lets the byte order mark (BOM) take precedence, hence the usage of the term "fallback" above.

    Anne says that steps 3/4 should be an input to this algorithm from the specs that define importing stylesheet, to make the algorithm as a whole cleaner. Perhaps abstract it into the concept of an "environment charset" or something?

    Should we only take the charset from the referring document if it's same-origin?
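    The fallback-encoding steps above can be sketched in Python as follows. This is illustrative only: the real algorithm defers to the Encoding Standard's "get an encoding" and "decode" hooks, and the function and parameter names here (`detect_charset_rule`, `protocol_charset`, `environment_charset`) are invented for the sketch.

```python
def detect_charset_rule(data: bytes):
    """Look for the byte pattern 40 63 68 61 72 73 65 74 20 22 (not 22)* 22 3B,
    i.e. @charset "...";, at the start of the stream, and return the label."""
    prefix = b'@charset "'
    if not data.startswith(prefix):
        return None
    end = data.find(b'"', len(prefix))
    if end == -1 or data[end:end + 2] != b'";':
        return None
    # Any ASCII-compatible decoding works here; windows-1252 per the note above.
    return data[len(prefix):end].decode("windows-1252")


def fallback_encoding(data: bytes, protocol_charset=None, environment_charset=None):
    """Steps 1-5 above. protocol_charset models the HTTP charset parameter,
    environment_charset the referring element's or document's encoding."""
    if protocol_charset:                      # step 1
        return protocol_charset
    label = detect_charset_rule(data)         # step 2
    if label is not None:
        label = label.strip().lower()
        if label in ("utf-16", "utf-16be"):   # mimic HTML <meta> behavior
            return "utf-8"
        return label
    if environment_charset:                   # steps 3-4
        return environment_charset
    return "utf-8"                            # step 5
```

    Note that validating the label against the Encoding Standard's registry (the "get an encoding" step) is omitted here for brevity.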

    3.2.1. Preprocessing the input stream

    The input stream consists of the characters pushed into it as the input byte stream is decoded.

    Before sending the input stream to the tokenizer, implementations must make the following character substitutions:

    4. Tokenization

    Implementations must act as if they used the following state machine to tokenize CSS. The state machine must start in the data state. Most states consume a single character, which may have various side-effects, and either switches the state machine to a new state to reconsume the same character, or switches it to a new state to consume the next character, or stays in the same state to consume the next character. Some states have more complicated behavior and can consume several characters before switching to another state.

    The output of the tokenization step is a series of zero or more of the following tokens: ident, function, at-keyword, hash, string, bad-string, url, bad-url, delim, number, percentage, dimension, unicode-range, include-match, dash-match, prefix-match, suffix-match, substring-match, column, whitespace, cdo, cdc, colon, semicolon, comma, [, ], (, ), {, and }.

    Ident, function, at-keyword, hash, string, and url tokens have a value composed of zero or more characters. Additionally, hash tokens have a type flag set to either "id" or "unrestricted". The type flag defaults to "unrestricted" if not otherwise set. Delim tokens have a value composed of a single character. Number, percentage, and dimension tokens have a representation composed of one or more characters, and a numeric value. Number and dimension tokens additionally have a type flag set to either "integer" or "number". The type flag defaults to "integer" if not otherwise set. Dimension tokens additionally have a unit composed of one or more characters. Unicode-range tokens have a range of characters.

    The type flag of hash tokens is used in the Selectors syntax [SELECT]. Only hash tokens with the "id" type are valid ID selectors.
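    As a rough illustration, the token fields described above might be modeled like this. This representation is hypothetical, not part of the specification; the class and field names are invented.

```python
from dataclasses import dataclass

@dataclass
class HashToken:
    value: str = ""
    # The type flag defaults to "unrestricted" if not otherwise set.
    type: str = "unrestricted"   # "id" or "unrestricted"

@dataclass
class NumberToken:
    representation: str = ""     # one or more characters, as written in the source
    numeric_value: float = 0
    # The type flag defaults to "integer" if not otherwise set.
    type: str = "integer"        # "integer" or "number"

@dataclass
class DimensionToken(NumberToken):
    unit: str = ""               # one or more characters

@dataclass
class UnicodeRangeToken:
    start: int = 0               # the token's range of characters,
    end: int = 0                 # as inclusive codepoint bounds
```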

    The tokenizer state machine consists of the states defined in the following subsections.

    4.1. Token Railroad Diagrams

    This section is non-normative.

    This section presents an informative view of the tokenizer, in the form of railroad diagrams. Railroad diagrams are more compact than a state-machine, but often easier to read than a regular expression.

    These diagrams are informative and incomplete; they describe the grammar of "correct" tokens, but do not describe error-handling at all. They are provided solely to make it easier to get an intuitive grasp of the syntax of each token.

    Diagrams with names in all uppercase represent tokens. The rest are productions referred to by other diagrams.

    [The railroad diagrams are not reproduced in this text rendering. Diagrams are given for the productions comment, newline, whitespace character, escape, and url-unquoted, and for the tokens WHITESPACE, IDENT, FUNCTION, AT-KEYWORD, HASH, STRING, URL, NUMBER, DIMENSION, PERCENTAGE, UNICODE-RANGE, INCLUDE-MATCH (~=), DASH-MATCH (|=), PREFIX-MATCH (^=), SUFFIX-MATCH ($=), SUBSTRING-MATCH (*=), COLUMN (||), CDO (<!--), and CDC (-->).]

    4.2. Definitions

    This section defines several terms used during the tokenization phase.

    next input character
    The first character in the input stream that has not yet been consumed.
    current input character
    The last character to have been consumed.
    reconsume the current input character
    Push the current input character back onto the front of the input stream, so that the next time you are instructed to consume the next input character, it will instead reconsume the current input character.
    EOF character
    A conceptual character representing the end of the input stream. Whenever the input stream is empty, the next input character is always an EOF character.
    digit
    A character between U+0030 DIGIT ZERO (0) and U+0039 DIGIT NINE (9).
    hex digit
    A digit, or a character between U+0041 LATIN CAPITAL LETTER A (A) and U+0046 LATIN CAPITAL LETTER F (F), or a character between U+0061 LATIN SMALL LETTER A (a) and U+0066 LATIN SMALL LETTER F (f).
    uppercase letter
    A character between U+0041 LATIN CAPITAL LETTER A (A) and U+005A LATIN CAPITAL LETTER Z (Z).
    lowercase letter
    A character between U+0061 LATIN SMALL LETTER A (a) and U+007A LATIN SMALL LETTER Z (z).
    letter
    An uppercase letter or a lowercase letter.
    non-ASCII character
    A character with a codepoint equal to or greater than U+0080 <control>.
    name-start character
    A letter, a non-ASCII character, or U+005F LOW LINE (_).
    name character
    A name-start character, a digit, or U+002D HYPHEN-MINUS (-).
    non-printable character
    A character between U+0000 NULL and U+0008 BACKSPACE or a character between U+000E SHIFT OUT and U+001F INFORMATION SEPARATOR ONE or a character between U+007F DELETE and U+009F APPLICATION PROGRAM COMMAND.
    newline
    U+000A LINE FEED. Note that U+000D CARRIAGE RETURN and U+000C FORM FEED are not included in this definition, as they are removed from the stream during preprocessing.
    whitespace
    A newline, U+0009 CHARACTER TABULATION, or U+0020 SPACE.
    maximum allowed codepoint
    The greatest codepoint defined by Unicode. This is currently U+10FFFF.
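    The character classes above translate directly into predicates. A sketch with hypothetical helper names:

```python
def is_digit(c):
    return "0" <= c <= "9"

def is_hex_digit(c):
    return is_digit(c) or "A" <= c <= "F" or "a" <= c <= "f"

def is_letter(c):
    return "A" <= c <= "Z" or "a" <= c <= "z"

def is_name_start(c):
    # A letter, a non-ASCII character (U+0080 or greater), or U+005F LOW LINE.
    return is_letter(c) or ord(c) >= 0x80 or c == "_"

def is_name_char(c):
    return is_name_start(c) or is_digit(c) or c == "-"

def is_non_printable(c):
    cp = ord(c)
    return cp <= 0x08 or 0x0E <= cp <= 0x1F or 0x7F <= cp <= 0x9F

def is_whitespace(c):
    # U+000D and U+000C never reach the tokenizer; preprocessing removes them.
    return c in "\n\t "

MAXIMUM_ALLOWED_CODEPOINT = 0x10FFFF
```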

    4.3. Tokenizer State Machine

    4.3.1. Data state

    Consume the next input character.

    whitespace
    Consume as much whitespace as possible. Emit a whitespace token. Remain in this state.
    U+0022 QUOTATION MARK (")
    Switch to the double-quote-string state.
    U+0023 NUMBER SIGN (#)
    If the next three input characters would start an identifier or would start a number, switch to the hash state.

    Otherwise, emit a delim token with its value set to the current input character. Remain in this state.

    U+0024 DOLLAR SIGN ($)
    If the next input character is U+003D EQUALS SIGN (=), consume it and emit a suffix-match token. Remain in this state.

    Otherwise, emit a delim token with its value set to the current input character. Remain in this state.

    U+0027 APOSTROPHE (')
    Switch to the single-quote-string state.
    U+0028 LEFT PARENTHESIS (()
    Emit a ( token. Remain in this state.
    U+0029 RIGHT PARENTHESIS ())
    Emit a ) token. Remain in this state.
    U+002A ASTERISK (*)
    If the next input character is U+003D EQUALS SIGN (=), consume it and emit a substring-match token. Remain in this state.

    Otherwise, emit a delim token with its value set to the current input character. Remain in this state.

    U+002B PLUS SIGN (+)

    If the input stream would start a number, reconsume the current input character, then consume a numeric token and return it.

    Otherwise, emit a delim token with its value set to the current input character. Remain in this state.

    U+002C COMMA (,)
    Emit a comma token. Remain in this state.
    U+002D HYPHEN-MINUS (-)

    If the input stream would start a number, reconsume the current input character, then consume a numeric token and return it.

    Otherwise, if the input stream starts with an identifier, switch to the ident state. Reconsume the current input character.

    Otherwise, if the next 2 input characters are U+002D HYPHEN-MINUS U+003E GREATER-THAN SIGN (->), consume them, emit a CDC token, and remain in this state.

    Otherwise, emit a delim token with its value set to the current input character. Remain in this state.

    U+002E FULL STOP (.)

    If the input stream would start a number, reconsume the current input character, then consume a numeric token and return it.

    Otherwise, emit a delim token with its value set to the current input character. Remain in this state.

    U+002F SOLIDUS (/)
    If the next input character is U+002A ASTERISK (*), consume it and switch to the comment state.

    Otherwise, emit a delim token with its value set to U+002F SOLIDUS (/). Remain in this state.

    U+003A COLON (:)
    Emit a colon token. Remain in this state.
    U+003B SEMICOLON (;)
    Emit a semicolon token. Remain in this state.
    U+003C LESS-THAN SIGN (<)
    If the next 3 input characters are U+0021 EXCLAMATION MARK U+002D HYPHEN-MINUS U+002D HYPHEN-MINUS (!--), consume them and emit a cdo token. Remain in this state.

    Otherwise, emit a delim token with its value set to U+003C LESS-THAN SIGN (<). Remain in this state.

    U+0040 COMMERCIAL AT (@)
    If the next 3 input characters would start an identifier, switch to the at-keyword state.

    Otherwise, emit a delim token with its value set to the current input character. Remain in this state.

    U+005B LEFT SQUARE BRACKET ([)
    Emit a [ token. Remain in this state.
    U+005C REVERSE SOLIDUS (\)
    If the input stream starts with a valid escape, switch to the ident state. Reconsume the current input character.

    Otherwise, this is a parse error. Emit a delim token with its value set to the current input character. Remain in this state.

    U+005D RIGHT SQUARE BRACKET (])
    Emit a ] token. Remain in this state.
    U+005E CIRCUMFLEX ACCENT (^)
    If the next input character is U+003D EQUALS SIGN (=), consume it and emit a prefix-match token. Remain in this state.

    Otherwise, emit a delim token with its value set to the current input character. Remain in this state.

    U+007B LEFT CURLY BRACKET ({)
    Emit a { token. Remain in this state.
    U+007D RIGHT CURLY BRACKET (})
    Emit a } token. Remain in this state.
    digit
    Reconsume the current input character, then consume a numeric token and return it. Remain in this state.
    U+0055 LATIN CAPITAL LETTER U (U)
    U+0075 LATIN SMALL LETTER U (u)
    If the next 2 input characters are U+002B PLUS SIGN (+) followed by a hex digit or U+003F QUESTION MARK (?), consume the next input character. Note: don't consume both of them. Switch to the unicode-range state.

    Otherwise, switch to the ident state. Reconsume the current input character.

    name-start character
    Switch to the ident state. Reconsume the current input character.
    U+007C VERTICAL LINE (|)
    If the next input character is U+003D EQUALS SIGN (=), consume it and emit a dash-match token. Remain in this state.

    Otherwise, if the next input character is U+007C VERTICAL LINE (|), consume it and emit a column token. Remain in this state.

    Otherwise, emit a delim token with its value set to the current input character. Remain in this state.

    U+007E TILDE (~)
    If the next input character is U+003D EQUALS SIGN (=), consume it and emit an include-match token. Remain in this state.

    Otherwise, emit a delim token with its value set to the current input character. Remain in this state.

    EOF
    End this algorithm.
    anything else
    Emit a delim token with its value set to the current input character. Remain in this state.
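    Several of the cases above share one pattern: a punctuation character either combines with the following character into a two-character match token, or falls back to a delim token. A condensed sketch (hypothetical helper name):

```python
MATCH_TOKENS = {
    "~": "include-match",
    "|": "dash-match",
    "^": "prefix-match",
    "$": "suffix-match",
    "*": "substring-match",
}

def punctuation_token(current, next_char):
    """Return (token_type, value) for the data-state cases above.
    The caller consumes next_char only when it combined into a token."""
    if next_char == "=" and current in MATCH_TOKENS:
        return (MATCH_TOKENS[current], None)
    if current == "|" and next_char == "|":
        return ("column", None)
    # Otherwise a delim token with its value set to the current character.
    return ("delim", current)
```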

    4.3.2. Double-quote-string state

    When this state is first entered, create a string token with its value initially set to the empty string.

    Consume the next input character.

    U+0022 QUOTATION MARK (")
    EOF
    Emit the string token. Switch to the data state.
    newline
    This is a parse error. Emit a bad-string token. Switch to the data state. Reconsume the current input character.
    U+005C REVERSE SOLIDUS (\)
    If the current input stream starts with a valid escape, consume an escaped character and append the return value to the string token's value. Remain in this state.

    Otherwise, if the next input character is a newline, consume it. Remain in this state.

    Otherwise, this is a parse error. Emit a bad-string token, then switch to the data state.

    anything else
    Append the current input character to the string token's value. Remain in this state.

    4.3.3. Single-quote-string state

    When this state is first entered, create a string token with its value initially set to the empty string.

    Consume the next input character.

    U+0027 APOSTROPHE (')
    EOF
    Emit the string token. Switch to the data state.
    newline
    This is a parse error. Emit a bad-string token. Switch to the data state. Reconsume the current input character.
    U+005C REVERSE SOLIDUS (\)
    If the current input stream starts with a valid escape, consume an escaped character and append the return value to the string token's value. Remain in this state.

    Otherwise, if the next input character is a newline, consume it. Remain in this state.

    Otherwise, this is a parse error. Emit a bad-string token, then switch to the data state.

    anything else
    Append the current input character to the string token's value. Remain in this state.
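    The double- and single-quote string states are identical apart from the ending quote character, so they condense into one sketch. This is a hypothetical helper, handling only simple (non-hex) escapes; full escape handling is the algorithm in §4.4.1.

```python
def consume_string(stream, quote):
    """stream is a list of characters; the opening quote is already consumed.
    Returns (token_type, value); a bad-string leaves the newline unconsumed."""
    value = ""
    while True:
        if not stream:               # EOF: emit the string token
            return ("string", value)
        c = stream.pop(0)
        if c == quote:
            return ("string", value)
        if c == "\n":                # parse error; reconsume the newline
            stream.insert(0, c)
            return ("bad-string", value)
        if c == "\\":
            if not stream:           # "\" before EOF: parse error
                return ("bad-string", value)
            if stream[0] == "\n":    # escaped newline: consumed, not appended
                stream.pop(0)
                continue
            value += stream.pop(0)   # simple escape (hex escapes omitted)
            continue
        value += c
```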

    4.3.4. Comment state

    Consume the next input character.

    U+002A ASTERISK (*)
    If the next input character is U+002F SOLIDUS (/), consume it, and switch to the data state.

    Otherwise, do nothing and remain in this state.

    EOF
    This is a parse error. Switch to the data state. Reconsume the current input character.
    anything else
    Do nothing and remain in this state.

    4.3.5. Hash state

    Create a hash token. If the next three input characters would start an identifier, set the hash token's type flag to "id". Otherwise, set its type flag to "unrestricted".

    Consume a sequence of name characters. Set the hash token's value to the returned sequence of characters.

    Emit the hash token. Switch to the data state.

    If this state emits a hash token whose value is the empty string, it's a spec or implementation error. The data validation performed in the data state should have guaranteed a non-empty value.

    4.3.6. At-keyword state

    Consume a sequence of name characters. Create an at-keyword token and set its value to the returned sequence of characters. Emit the at-keyword token. Switch to the data state.

    If this state emits an at-keyword token whose value is the empty string, it's a spec or implementation error. The data validation performed in the data state should have guaranteed a non-empty value.

    4.3.7. Ident state

    Consume a sequence of name characters. Create an ident token and set its value to the returned sequence of characters.

    If the next input character is not U+0028 LEFT PARENTHESIS ((), emit the ident token and switch to the data state.

    Consume the LEFT PARENTHESIS. If the ident token's value is an ASCII case-insensitive match for "url", switch to the url state.

    Otherwise, convert the ident token to a function token, preserving its value. Emit the function token. Switch to the data state.
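    The branching at the end of the ident state, sketched with a hypothetical helper ("url-state" here stands for handing off to the URL state):

```python
def finish_ident(stream, value):
    """Called after a name has been consumed into value (per the ident state).
    stream is a list of the remaining input characters."""
    if not stream or stream[0] != "(":
        return ("ident", value)
    stream.pop(0)                        # consume the LEFT PARENTHESIS
    if value.lower() == "url":           # ASCII case-insensitive match
        return ("url-state", value)      # continue in the url state
    return ("function", value)
```

    (str.lower also folds non-ASCII characters, which is a harmless over-approximation of the ASCII case-insensitive match here, since only "url" is compared.)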

    4.3.13. URL state

    Consume the next input character.

    EOF
    This is a parse error. Emit a bad-url token. Switch to the data state.
    U+0022 QUOTATION MARK (")
    Create a url token with its value initially set to the empty string. Switch to the url-double-quote state.
    U+0027 APOSTROPHE (')
    Create a url token with its value initially set to the empty string. Switch to the url-single-quote state.
    U+0029 RIGHT PARENTHESIS ())
    Emit a url token with its value set to the empty string. Switch to the data state.
    whitespace
    Remain in this state.
    anything else
    Create a url token with its value initially set to the empty string. Switch to the url-unquoted state. Reconsume the current input character.

    4.3.14. URL-double-quote state

    Consume the next input character.

    EOF
    Emit the url token. Switch to the data state.
    U+0022 QUOTATION MARK (")
    Switch to the url-end state.
    newline
    This is a parse error. Switch to the bad-url state.
    U+005C REVERSE SOLIDUS (\)
    If the next input character is EOF, this is a parse error. Emit a bad-url token. Switch to the data state. Reconsume the current input character.

    Otherwise, if the next input character is a newline, consume it and remain in this state.

    Otherwise, consume an escaped character. Append the returned character to the url token's value. Remain in this state.

    anything else
    Append the current input character to the url token's value. Remain in this state.

    4.3.15. URL-single-quote state

    Consume the next input character.

    EOF
    Emit the url token. Switch to the data state.
    U+0027 APOSTROPHE (')
    Switch to the url-end state.
    newline
    This is a parse error. Switch to the bad-url state.
    U+005C REVERSE SOLIDUS (\)
    If the next input character is EOF, this is a parse error. Emit a bad-url token. Switch to the data state. Reconsume the current input character.

    Otherwise, if the next input character is a newline, consume it and remain in this state.

    Otherwise, consume an escaped character. Append the returned character to the url token's value. Remain in this state.

    anything else
    Append the current input character to the url token's value. Remain in this state.

    4.3.16. URL-unquoted state

    Consume the next input character.

    U+0029 RIGHT PARENTHESIS ())
    EOF
    Emit the url token. Switch to the data state.
    whitespace
    Switch to the url-end state.
    U+0022 QUOTATION MARK (")
    U+0027 APOSTROPHE (')
    U+0028 LEFT PARENTHESIS (()
    non-printable character
    This is a parse error. Switch to the bad-url state.
    U+005C REVERSE SOLIDUS
    If the next input character is a newline or EOF, this is a parse error. Switch to the bad-url state.

    Otherwise, consume an escaped character. Append the returned character to the url token's value. Remain in this state.

    anything else
    Append the current input character to the url token's value. Remain in this state.
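    A sketch of this state as a hypothetical helper. Only simple (non-hex) escapes are handled; the non-printable-character class is the one defined in §4.2.

```python
def is_non_printable(c):
    cp = ord(c)
    return cp <= 0x08 or 0x0E <= cp <= 0x1F or 0x7F <= cp <= 0x9F

def consume_url_unquoted(stream, value=""):
    """stream is a list of characters. Returns (next_state_or_token, value)."""
    while True:
        if not stream:                     # EOF: emit the url token
            return ("url", value)
        c = stream.pop(0)
        if c == ")":
            return ("url", value)
        if c in "\n\t ":                   # whitespace: go to the url-end state
            return ("url-end", value)
        if c in "\"'(" or is_non_printable(c):
            return ("bad-url", value)      # parse error
        if c == "\\":
            if not stream or stream[0] == "\n":
                return ("bad-url", value)  # parse error
            value += stream.pop(0)         # simple escape (hex escapes omitted)
            continue
        value += c
```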

    4.3.17. URL-end state

    Consume the next input character.

    U+0029 RIGHT PARENTHESIS ())
    EOF
    Emit the url token. Switch to the data state.
    whitespace
    Remain in this state.
    anything else
    This is a parse error. Switch to the bad-url state. Reconsume the current input character.

    4.3.18. Bad-URL state

    Consume the next input character.

    EOF
    This is a parse error. Emit a bad-url token. Switch to the data state.
    U+0029 RIGHT PARENTHESIS ())
    Emit a bad-url token. Switch to the data state.
    U+005C REVERSE SOLIDUS
    If the next input character is a newline or EOF, do nothing and remain in this state.

    Otherwise, consume an escaped character. Remain in this state.

    anything else
    Do nothing. Remain in this state.

    4.3.19. Unicode-range state

    Create a new unicode-range token with an empty range.

    Consume as many hex digits as possible, but no more than 6. If fewer than 6 hex digits were consumed, consume as many U+003F QUESTION MARK (?) characters as possible, but no more than enough to make the total of hex digits and U+003F QUESTION MARK (?) characters equal to 6.

    If any U+003F QUESTION MARK (?) characters were consumed, first interpret the consumed characters as a hexadecimal number, with the U+003F QUESTION MARK (?) characters replaced by U+0030 DIGIT ZERO (0) characters. This is the start of the range. Then interpret the consumed characters as a hexadecimal number again, with the U+003F QUESTION MARK (?) characters replaced by U+0046 LATIN CAPITAL LETTER F (F) characters. This is the end of the range. Set the unicode-range token's range, then emit it. Switch to the data state.

    Otherwise, interpret the digits as a hexadecimal number. This is the start of the range.

    Consume the next input character.

    U+002D HYPHEN-MINUS (-)
    If the next input character is a hex digit, consume as many hex digits as possible, but no more than 6. Interpret the digits as a hexadecimal number. This is the end of the range. Set the unicode-range token's range, then emit it. Switch to the data state.

    Otherwise, set the unicode-range token's range and emit it. Switch to the data state. Reconsume the current input character.

    anything else
    Set the unicode-range token's range and emit it. Switch to the data state. Reconsume the current input character.
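    The range computation above reduces to a small function. This is a hypothetical helper, not spec text; it assumes the caller has already consumed and validated the character runs as the state machine describes.

```python
def unicode_range(text, end_text=None):
    """text is the consumed run of up to six hex digits and "?" marks;
    end_text is the optional run of hex digits following a "-"."""
    if "?" in text:
        # "?" is a wildcard: 0 for the start of the range, F for the end.
        return (int(text.replace("?", "0"), 16),
                int(text.replace("?", "F"), 16))
    start = int(text, 16)
    end = int(end_text, 16) if end_text is not None else start
    return (start, end)
```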

    4.4. Tokenizer Algorithms

    This section defines a number of algorithmic subroutines used by the tokenizer.

    4.4.1. Consume an escaped character

    This algorithm assumes that the U+005C REVERSE SOLIDUS (\) has already been consumed and that the next input character has already been verified to not be a newline or EOF. It will return a character.

    Consume the next input character.

    hex digit
    Consume as many hex digits as possible, but no more than 5. Note that this means 1-6 hex digits have been consumed in total. If the next input character is whitespace, consume it as well. Interpret the hex digits as a hexadecimal number. If this number is zero, or is greater than the maximum allowed codepoint, return U+FFFD REPLACEMENT CHARACTER (�). Otherwise, return the character with that codepoint.
    anything else
    Return the current input character.
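A Python sketch of this subroutine, under the same assumptions (the backslash already consumed, next character known not to be a newline or EOF); the whitespace set and input representation are simplifications:

```python
MAX_CODEPOINT = 0x10FFFF
HEX = "0123456789abcdefABCDEF"

def consume_escape(chars: list[str]) -> str:
    """Consume an escaped character from chars (popped from the front)
    and return the decoded character."""
    c = chars.pop(0)
    if c in HEX:
        digits = c
        # Consume up to 5 more hex digits (1-6 in total).
        while chars and chars[0] in HEX and len(digits) < 6:
            digits += chars.pop(0)
        # A single whitespace character after the digits is consumed too.
        if chars and chars[0] in " \t\n":
            chars.pop(0)
        cp = int(digits, 16)
        if cp == 0 or cp > MAX_CODEPOINT:
            return "\uFFFD"
        return chr(cp)
    return c
```

For example, the input "26 x" decodes to "&" (U+0026), leaving "x" unconsumed.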

    4.4.2. Check if two characters are a valid escape

    This algorithm can be called explicitly with two characters, or can be called with the input stream itself. In the latter case, the two characters in question are the current input character and the next input character, in that order. This algorithm does not consume any characters.

    If the first character is not U+005C REVERSE SOLIDUS (\), return false.

    Otherwise, if the second character is a newline or EOF character, return false.

    Otherwise, return true.
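The check reduces to a one-line predicate; a sketch, modeling EOF as the empty string:

```python
def valid_escape(c1: str, c2: str) -> bool:
    """True iff c1 is a backslash and c2 is neither a newline nor EOF."""
    return c1 == "\\" and c2 not in ("\n", "")
```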

    4.4.3. Check if three characters would start an identifier

    This algorithm can be called explicitly with three characters, or can be called with the input stream itself. In the latter case, the three characters in question are the current input character and the next two input characters, in that order. This algorithm does not consume any characters.

    Look at the first character:

    U+002D HYPHEN-MINUS
    If the second character is a name-start character or the second and third characters are a valid escape, return true. Otherwise, return false.
    name-start character
    Return true.
    U+005C REVERSE SOLIDUS (\)
    If the first and second characters are a valid escape, return true. Otherwise, return false.
    anything else
    Return false.
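The case analysis above can be sketched as follows, with EOF modeled as the empty string and a simplified name-start test (letters, underscore, or non-ASCII):

```python
def is_name_start(c: str) -> bool:
    # Name-start characters: letters, '_', and non-ASCII characters.
    return bool(c) and (c.isalpha() or c == "_" or ord(c) > 0x7F)

def valid_escape(c1: str, c2: str) -> bool:
    # The two-character valid-escape check from the previous section.
    return c1 == "\\" and c2 not in ("\n", "")

def would_start_identifier(c1: str, c2: str, c3: str) -> bool:
    """Three-character lookahead: would c1 c2 c3 begin an identifier?"""
    if c1 == "-":
        return is_name_start(c2) or valid_escape(c2, c3)
    if is_name_start(c1):
        return True
    if c1 == "\\":
        return valid_escape(c1, c2)
    return False
```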

    4.4.4. Consume a sequence of name characters

    This algorithm consumes a sequence of name characters and escape characters, and returns the sequence after decoding escapes.

    Initialize result to an empty sequence of characters.

    Repeat until directed to stop:

    1. Look at the next input character.
      name character
      Consume the character and append it to result.
      U+005C REVERSE SOLIDUS (\)
      If the input stream starts with a valid escape, consume and discard the REVERSE SOLIDUS. Then consume an escaped character, and append the returned character to result. Otherwise, stop repeating these steps.
      anything else
      Stop repeating these steps.

    Return result.
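A self-contained sketch of this loop, reusing a minimal decoder for §4.4.1's escaped characters and a simplified name-character test (letters, digits, '_', '-', or non-ASCII):

```python
HEX = "0123456789abcdefABCDEF"
MAX_CODEPOINT = 0x10FFFF

def consume_escape(chars: list[str]) -> str:
    # Minimal decoder for an escaped character (the '\\' already popped).
    c = chars.pop(0)
    if c in HEX:
        digits = c
        while chars and chars[0] in HEX and len(digits) < 6:
            digits += chars.pop(0)
        if chars and chars[0] in " \t\n":
            chars.pop(0)
        cp = int(digits, 16)
        return "\uFFFD" if cp == 0 or cp > MAX_CODEPOINT else chr(cp)
    return c

def is_name_char(c: str) -> bool:
    return bool(c) and (c.isalnum() or c in "_-" or ord(c) > 0x7F)

def consume_name(chars: list[str]) -> str:
    """Consume a sequence of name characters, decoding escapes."""
    result = []
    while chars:
        if is_name_char(chars[0]):
            result.append(chars.pop(0))
        elif chars[0] == "\\" and len(chars) > 1 and chars[1] != "\n":
            chars.pop(0)                      # discard the REVERSE SOLIDUS
            result.append(consume_escape(chars))
        else:
            break
    return "".join(result)
```

For example, "foo-bar(" yields "foo-bar", and "\26 b:" yields "&b".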

    4.4.5. Check if three characters would start a number

    This algorithm can be called explicitly with three characters, or can be called with the input stream itself. In the latter case, the three characters in question are the current input character and the next two input characters, in that order. This algorithm does not consume any characters.

    Look at the first character:

    U+002B PLUS SIGN (+)
    U+002D HYPHEN-MINUS (-)
    If the second character is a digit, return true.

    Otherwise, if the second character is a U+002E FULL STOP (.) and the third character is a digit, return true.

    Otherwise, return false.

    U+002E FULL STOP (.)
    If the second character is a digit, return true. Otherwise, return false.
    digit
    Return true.
    anything else
    Return false.
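The case analysis above in sketch form; note that Python's str.isdigit() also accepts non-ASCII digits, where a strict tokenizer would test only 0-9, and EOF is modeled as the empty string:

```python
def would_start_number(c1: str, c2: str, c3: str) -> bool:
    """Three-character lookahead: would c1 c2 c3 begin a number?"""
    if c1 in ("+", "-"):
        return c2.isdigit() or (c2 == "." and c3.isdigit())
    if c1 == ".":
        return c2.isdigit()
    return c1.isdigit()
```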

    4.4.6. Consume a numeric token

    This algorithm is only invoked when the next three input characters would start a number. It returns one token, which may be a number, a dimension, or a percentage.

    Create a number token with its representation initially set to the empty string and its type flag initially set to "integer".

    If the next input character is U+002B PLUS SIGN (+) or U+002D HYPHEN-MINUS (-), consume it and append it to the number token's representation.

    If the next input character is now U+002E FULL STOP (.), consume it, append it to the number token's representation, and set the number token's type flag to "number".

    At this point, if the next input character is not a digit, that indicates an error either in the spec or the implementation.

    While the next input character is a digit, consume it and append it to the number token's representation.

    If the number token's type flag is "integer", and the next 2 input characters are now U+002E FULL STOP (.) followed by a digit, then:

    1. Set the number token's type flag to "number".
    2. Consume the FULL STOP and append it to the number token's representation.
    3. While the next input character is a digit, consume it and append it to the number token's representation.

    If the next input character is now U+0045 LATIN CAPITAL LETTER E (E) or U+0065 LATIN SMALL LETTER E (e), and either the second next input character is a digit, or the second next input character is U+002B PLUS SIGN (+) or U+002D HYPHEN-MINUS (-) and the third next input character is a digit, then:

    1. Set the number token's type flag to "number".
    2. Consume the LETTER E and, if present, the following sign, and append them to the number token's representation.
    3. While the next input character is a digit, consume it and append it to the number token's representation.

    Set the number token's value from its representation.

    At this point the number token is fully constructed.

    If the next 3 input characters would start an identifier, then:

    1. Convert the number token into a dimension token, preserving its value, representation, and type flag.
    2. Consume a sequence of name characters. Set the dimension token's unit to the returned sequence of characters. Return the dimension token.

    Otherwise, if the next input character is U+0025 PERCENT SIGN (%), consume it. Convert the number token into a percentage token, preserving its value and representation. Return the percentage token.

    Otherwise, return the number token.
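The representation-building steps above can be condensed into a regular-expression sketch. The classification of the trailing unit is simplified: it accepts a plain ASCII identifier rather than running the full "would start an identifier" check, and escapes and hyphen-led units are ignored.

```python
import re

# sign? (digits [. digits]? | . digits) (e sign? digits)?
NUM_RE = re.compile(r"[+-]?(?:\d+(?:\.\d+)?|\.\d+)(?:[eE][+-]?\d+)?")

def consume_numeric(src: str) -> tuple[str, str, str]:
    """Return (kind, representation, unit) where kind is 'number',
    'percentage', or 'dimension'; unit is '' except for dimensions."""
    rep = NUM_RE.match(src).group(0)
    rest = src[len(rep):]
    if rest.startswith("%"):
        return ("percentage", rep, "")
    unit = re.match(r"[A-Za-z_][A-Za-z0-9_-]*", rest)
    if unit:
        return ("dimension", rep, unit.group(0))
    return ("number", rep, "")
```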

    4.4.7. Set a number token's value

    Divide the number token's representation into seven components, in order from left to right:

    1. A sign: a single U+002B PLUS SIGN (+) or U+002D HYPHEN-MINUS (-), or the empty string. Let s be the number −1 if sign is U+002D HYPHEN-MINUS; otherwise let s be the number 1.
    2. An integer part: zero or more digits. If there is at least one digit, let i be the number formed by interpreting integer part as a base-10 integer; otherwise let i be 0.
    3. A decimal point: a single U+002E FULL STOP (.), or the empty string.
    4. A fractional part: zero or more digits. If there is at least one digit, let f be the number formed by interpreting fractional part as a base-10 integer, and let d be the number of digits in fractional part; otherwise let f and d be 0.
    5. An exponent indicator: a single U+0045 LATIN CAPITAL LETTER E (E) or U+0065 LATIN SMALL LETTER E (e), or the empty string.
    6. An exponent sign: a single U+002B PLUS SIGN (+) or U+002D HYPHEN-MINUS (-), or the empty string. Let t be the number −1 if exponent sign is U+002D HYPHEN-MINUS; otherwise let t be the number 1.
    7. An exponent: zero or more digits. If there is at least one digit, let e be the number formed by interpreting exponent as a base-10 integer; otherwise let e be 0.

    At least one of the integer part and fractional part will be nonempty. If the number token's type flag is "integer", the fractional part and exponent will be empty.

    Set the number token's value to s × (i + f × 10^(−d)) × 10^(t·e). (Recall that 10^0 = 1.) This calculation is to be performed as if all variables were mathematical real numbers (i.e. with infinite precision and range). When the result is mathematically an integer, it must be represented exactly, within at least the range representable by a 32-bit twos-complement signed integer. Otherwise, the result must be represented with at least the range and precision of an IEEE 754 single-precision floating-point number.

    Defer to Values and Units on numeric accuracy?
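As a worked example, the representation "-3.25e2" splits into sign "-" (s = −1), integer part 3 (i = 3), fractional part 25 (f = 25, d = 2), and exponent 2 (t = 1, e = 2), giving −1 × (3 + 25 × 10^(−2)) × 10^2 = −325. A sketch of the whole division-and-combination step (using ordinary floats rather than the precision guarantees the spec requires):

```python
import re

def number_value(rep: str) -> float:
    """Split a number token's representation into the seven components
    and combine them as s * (i + f * 10**-d) * 10**(t*e)."""
    m = re.fullmatch(
        r"(?P<sign>[+-]?)(?P<int>\d*)(?:\.(?P<frac>\d*))?"
        r"(?:[eE](?P<esign>[+-]?)(?P<exp>\d+))?", rep)
    s = -1 if m["sign"] == "-" else 1
    i = int(m["int"] or "0")
    frac = m["frac"] or ""
    f, d = int(frac or "0"), len(frac)
    t = -1 if m["esign"] == "-" else 1
    e = int(m["exp"] or "0")
    return s * (i + f * 10.0 ** -d) * 10.0 ** (t * e)
```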

    4.4.8. Set a unicode-range token's range

    This section describes how to set a unicode-range token's range so that the range it describes is within the supported range of unicode characters.

    It assumes that the start of the range has been defined, the end of the range might be defined, and both are non-negative integers.

    If the start of the range is greater than the maximum allowed codepoint, the unicode-range token's range is empty.

    If the end of the range is defined, and it is less than the start of the range, the unicode-range token's range is empty.

    If the end of the range is not defined, the unicode-range token's range is the single character whose codepoint is the start of the range.

    Otherwise, if the end of the range is greater than the maximum allowed codepoint, change it to the maximum allowed codepoint. The unicode-range token's range is all characters between the character whose codepoint is the start of the range and the character whose codepoint is the end of the range.
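These four cases can be sketched as a small clamping function returning an inclusive pair, with None standing for both an undefined end and an empty range:

```python
MAX_CODEPOINT = 0x10FFFF

def set_range(start, end=None):
    """Clamp a unicode-range to the supported codepoint space."""
    if start > MAX_CODEPOINT:
        return None                      # start beyond the last codepoint
    if end is None:
        return (start, start)            # single-character range
    if end < start:
        return None                      # inverted range is empty
    return (start, min(end, MAX_CODEPOINT))
```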

    4.5. Changes from CSS 2.1 Tokenizer

    This section is non-normative.

    Note that the point of this spec is to match reality; changes from CSS2.1's tokenizer are nearly always because the tokenizer specified something that doesn't match actual browser behavior, or left something unspecified. If some detail doesn't match browsers, please let me know as it's almost certainly unintentional.

    1. The prefix-match, suffix-match, and substring-match tokens have been imported from Selectors 3.
    2. The BAD-URI token (now bad-url) is "self-contained". In other words, once the tokenizer realizes it's in a bad-url rather than a url token, it just seeks forward to look for the closing ), ignoring everything else. This behavior is simpler than treating it like a FUNCTION token and paying attention to opened blocks and such. Only WebKit exhibits this behavior, but it doesn't appear that we've gotten any compat bugs from it.
    3. The comma token has been added.
    4. The number, percentage, and dimension tokens have been changed to include the preceding +/- sign as part of their value (rather than as a separate DELIM token that needs to be manually handled every time the token is mentioned in other specs). The only consequence of this is that comments can no longer be inserted between the sign and the number.
    5. Scientific notation is supported for numbers/percentages/dimensions to match SVG, per WG resolution.
    6. The column token has been added, to keep Selectors parsing in single-token lookahead.

    5. Parsing

    The input to the parsing stage is a stream or list of tokens from the tokenization stage. The output depends on how the parser is invoked, as defined by the entry points listed later in this section. The parser output can consist of at-rules, qualified rules, and/or declarations.

    The parser's output is constructed according to the fundamental syntax of CSS, without regards for the validity of any specific item. Implementations may check the validity of items as they are returned by the various parser algorithms and treat the algorithm as returning nothing if the item was invalid according to the implementation's own grammar knowledge, or may construct a full tree as specified and "clean up" afterwards by removing any invalid items.

    The items that can appear in the tree are a mixture of basic tokens and new objects:

    at-rule
    An at-rule has a name, a prelude consisting of a list of component values, and an optional value consisting of a simple {} block.

    This specification places no limits on what an at-rule's value may contain. Individual at-rules must define whether they accept a value, and if so, how to parse it (preferably using one of the parser algorithms or entry points defined in this specification).

    qualified rule
    A qualified rule has a prelude consisting of a list of component values, and a value consisting of a list of at-rules or declarations.

    Most qualified rules will be style rules, where the prelude is a selector.

    declaration
    A declaration has a name, a value consisting of a list of component values, and an important flag which is initially unset.

    Should we go ahead and generalize the important flag to be a list of bang values? Suggested by Zack Weinberg.

    component value
    A component value is one of the preserved tokens, a function, or a simple block.
    preserved tokens
    Any token produced by the tokenizer except for function tokens, { tokens, ( tokens, and [ tokens.

    The non-preserved tokens listed above are always consumed into higher-level objects, either functions or simple blocks, and so never appear in any parser output themselves.

    function
    A function has a name and a value consisting of a list of component values.
    simple block
    A simple block has an associated token (either a [, (, or { token) and a value consisting of a list of component values.

    5.1. Parser Railroad Diagrams

    This section is non-normative.

    This section presents an informative view of the parser, in the form of railroad diagrams. Railroad diagrams are more compact than a state machine, but often easier to read than a regular expression.

    These diagrams are informative and incomplete; they describe the grammar of "correct" stylesheets, but do not describe error-handling at all. They are provided solely to make it easier to get an intuitive grasp of the syntax.

    [The railroad diagrams are not reproduced in this plain-text rendering. Diagrams are given for: Stylesheet, At-rule, Qualified rule, Rule list, Declaration/at-rule list, Declaration, !important, ws*, Component value, {} block, () block, [] block, and Function block.]

    5.2. Definitions

    current input token
    The token or component value currently being operated on, from the list of tokens produced by the tokenizer.
    next input token
    The token or component value following the current input token in the list of tokens produced by the tokenizer. If there isn't a token following the current input token, the next input token is an EOF token.
    EOF token
    A conceptual token representing the end of the list of tokens. Whenever the list of tokens is empty, the next input token is always an EOF token.
    reconsume the current input token
    Push the current input token back onto the list of tokens produced by the tokenizer, so that the next time a mode instructs you to consume the next input token, it will instead reconsume the current input token.
    ASCII case-insensitive
    When two strings are to be matched ASCII case-insensitively, temporarily convert both of them to ASCII lower-case form by adding 32 (0x20) to the value of each codepoint between U+0041 LATIN CAPITAL LETTER A (A) and U+005A LATIN CAPITAL LETTER Z (Z), inclusive, and then compare them on a codepoint-by-codepoint basis.
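Note that this folding is deliberately narrower than a full Unicode lowercasing: only U+0041 through U+005A are affected. A sketch of the comparison:

```python
def ascii_lower(s: str) -> str:
    """Fold only A-Z; other codepoints pass through, unlike str.lower()."""
    return "".join(chr(ord(c) + 0x20) if "A" <= c <= "Z" else c for c in s)

def ascii_ci_equal(a: str, b: str) -> bool:
    """Compare two strings ASCII case-insensitively."""
    return ascii_lower(a) == ascii_lower(b)
```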

    5.3. Parser Entry Points

    The algorithms defined in this specification can be invoked in multiple ways to convert a stream of text into various CSS concepts.

    All of the algorithms defined in this section begin in the parser. It is assumed that the input preprocessing and tokenization steps have already been completed, resulting in a stream of tokens.

    Other specs can define additional entry points for their own purposes.

    The following notes should probably be translated into normative text in the relevant specs, hooking this spec's terms:

    Are there any other things somewhere where some tech (that isn't straight CSS itself) needs to parse some text into CSS?

    All of the algorithms defined in this spec may be called with either a list of tokens or of component values. Either way produces an identical result.

    5.3.1. Parse a stylesheet

    To parse a stylesheet from a stream of tokens:

    1. Create a new stylesheet.
    2. Consume a list of rules from the stream of tokens, with the top-level flag set.
    3. Assign the returned value to the stylesheet's value.
    4. Return the stylesheet.

    5.3.2. Parse a rule

    To parse a rule from a stream of tokens:

    1. Consume whitespace tokens from the token stream until a non-whitespace token is encountered.
    2. If the current input token is a CDO token, CDC token, or EOF token, return a syntax error.

      Otherwise, if the current input token is an at-keyword token, consume an at-rule.

      Otherwise, consume a qualified rule. If nothing was returned, return a syntax error.

    3. Consume whitespace tokens from the token stream until a non-whitespace token is encountered.
    4. If the current input token is an EOF token, return the rule obtained in step 2. Otherwise, return a syntax error.

    5.3.3. Parse a list of declarations

    To parse a list of declarations:

    1. Consume a list of declarations and return it.

    5.3.4. Parse a component value

    To parse a component value:

    1. Discard whitespace tokens from the token stream until a non-whitespace token is reached. If the token stream is exhausted without finding a non-whitespace token, return a syntax error.
    2. Consume a component value. If nothing is returned, return a syntax error.
    3. Discard whitespace tokens from the token stream until a non-whitespace token is reached. If the token stream is exhausted without finding a non-whitespace token, return the value found in the previous step. Otherwise, return a syntax error.

    5.3.5. Parse a list of component values

    To parse a list of component values:

    1. Repeatedly consume a component value until an EOF token is returned, appending the returned values into a list. Return the list.

    5.3.6. Parse a comma-separated list of component values

    To parse a comma-separated list of component values:

    1. Initialize val to an empty list of lists of component values, and temp to an empty list of component values.
    2. Repeatedly consume a component value, appending the returned values to temp, until either a comma token or EOF token is returned.
    3. If a comma token is encountered, do not append it to temp. Instead, append temp to val. Create a new temp, and return to step 2.
    4. If an EOF token is encountered, append temp to val, and return val.
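The steps above can be sketched over a flat token list, in which "," stands for the comma token and every other string is an already-consumed component value:

```python
def parse_comma_separated(tokens):
    """Split a list of component values on comma tokens, per the
    val/temp bookkeeping described in the algorithm."""
    val, temp = [], []
    for tok in tokens:
        if tok == ",":
            val.append(temp)   # a comma ends the current group...
            temp = []          # ...and starts a fresh one
        else:
            temp.append(tok)
    val.append(temp)           # the EOF case appends the final group
    return val
```

Note that an empty input yields a list containing one empty group, matching step 4.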

    5.4. Parser Algorithms

    The following algorithms comprise the parser. They are called by the parser entry points above.

    These algorithms may be called with a list of either tokens or of component values. (The difference being that some tokens are replaced by functions and simple blocks in a list of component values.) Similar to how the input stream returned EOF characters to represent when it was empty during the tokenization stage, the lists in this stage must return an EOF token when the next token is requested but they are empty.

    An algorithm may be invoked with a specific list, in which case it consumes only that list (and when that list is exhausted, it begins returning EOF tokens). Otherwise, it is implicitly invoked with the same list as the invoking algorithm.

    5.4.1. Consume a list of rules

    Create an initially empty list of rules.

    Repeatedly consume the next input token:

    whitespace token
    Do nothing.
    EOF token
    Return the list of rules.
    cdo token
    cdc token
    If the top-level flag is set, do nothing.

    Otherwise, reconsume the current input token. Consume a qualified rule. If anything is returned, append it to the list of rules.

    at-keyword token
    Reconsume the current input token. Consume an at-rule. If anything is returned, append it to the list of rules.
    anything else
    Reconsume the current input token. Consume a qualified rule. If anything is returned, append it to the list of rules.

    5.4.2. Consume an at-rule

    Create a new at-rule with its name set to the value of the current input token, its prelude initially set to an empty list, and its value initially set to nothing.

    Repeatedly consume the next input token:

    semicolon token
    EOF token
    Return the at-rule.
    { token
    Consume a simple block and assign it to the at-rule's value. Return the at-rule.
    simple block with an associated token of {
    Assign the block to the at-rule's value. Return the at-rule.
    anything else
    Consume a component value. Append the returned value to the at-rule's prelude.

    5.4.3. Consume a qualified rule

    Create a new qualified rule with its prelude initially set to an empty list, and its value initially set to nothing.

    Repeatedly consume the next input token:

    EOF token
    This is a parse error. Return nothing.
    { token
    Consume a simple block. Consume a list of declarations from the block's value. If anything was returned, assign it to the qualified rule's value. Return the qualified rule.
    simple block with an associated token of {
    Consume a list of declarations from the block's value. If anything was returned, assign it to the qualified rule's value. Return the qualified rule.
    anything else
    Consume a component value. Append the returned value to the qualified rule's prelude.

    5.4.4. Consume a list of declarations

    Create an initially empty list of declarations.

    Repeatedly consume the next input token:

    whitespace token
    semicolon token
    Do nothing.
    EOF token
    Return the list of declarations.
    at-keyword token
    Consume an at-rule. Append the returned rule to the list of declarations.
    ident token
    Initialize a temporary list initially filled with the current input token. Repeatedly consume a component value from the next input token until a semicolon token or EOF token is returned, appending all of the returned values up to that point to the temporary list. Consume a declaration from the temporary list. If anything was returned, append it to the list of declarations.
    anything else
    This is a parse error. Repeatedly consume a component value from the next input token until it is a semicolon token or EOF token.

    5.4.5. Consume a declaration

    Create a new declaration with its name set to the value of the current input token.

    Repeatedly consume whitespace tokens until a non-whitespace token is reached. If this token is anything but a colon token, this is a parse error. Return nothing.

    Otherwise, repeatedly consume a component value from the next input token until an EOF token is reached, appending all of the returned values up to that point to the declaration's value.

    If the last two non-whitespace tokens in the declaration's value are a delim token with the value "!" followed by an ident token with a value that is an ASCII case-insensitive match for "important", remove them from the declaration's value and set the declaration's important flag to true.

    Return the declaration.
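The !important trimming step can be sketched as follows, with "!" standing for the delim token, names for ident tokens, and " " for whitespace tokens. Two simplifications: str.lower() performs a full Unicode lowercasing rather than the ASCII case-insensitive match the spec requires, and whitespace trailing the "!" is dropped along with the two tokens.

```python
def split_important(value):
    """Return (trimmed value, important flag) for a declaration's value."""
    toks = list(value)
    nonws = [i for i, t in enumerate(toks) if t.strip()]
    if len(nonws) >= 2:
        bang, ident = nonws[-2], nonws[-1]
        if toks[bang] == "!" and toks[ident].lower() == "important":
            del toks[bang:]    # drop '!', 'important', and trailing ws
            return toks, True
    return toks, False
```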

    5.4.6. Consume a component value

    This section describes how to consume a component value.

    If the current input token is a {, [, or ( token, consume a simple block and return it.

    Otherwise, if the current input token is a function token, consume a function and return it.

    Otherwise, return the current input token.

    5.4.7. Consume a simple block

    This section describes how to consume a simple block.

    The ending token is the mirror variant of the current input token. (E.g. if it was called with [, the ending token is ].)

    Create a simple block with its associated token set to the current input token.

    Repeatedly consume the next input token and process it as follows:

    EOF token
    ending token
    Return the block.
    anything else
    Consume a component value and append it to the value of the block.
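A sketch of this algorithm over a flat token list whose first entry is the opening token; recursion into nested blocks stands in for "consume a component value", and running out of tokens models the EOF case:

```python
MIRROR = {"{": "}", "[": "]", "(": ")"}

def consume_simple_block(tokens):
    """Consume a simple block, returning its associated token and value."""
    opener = tokens.pop(0)
    ending = MIRROR[opener]          # the mirror variant of the opener
    block = {"token": opener, "value": []}
    while tokens:                    # an exhausted list is the EOF case
        if tokens[0] == ending:
            tokens.pop(0)
            return block
        if tokens[0] in MIRROR:
            block["value"].append(consume_simple_block(tokens))
        else:
            block["value"].append(tokens.pop(0))
    return block
```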

    5.4.8. Consume a function

    This section describes how to consume a function.

    Create a function with a name equal to the value of the current input token, and with a value which is initially an empty list.

    Repeatedly consume the next input token and process it as follows:

    EOF token
    ) token
    Return the function.
    anything else
    Consume a component value and append the returned value to the function's value.

    5.5. Changes from CSS 2.1 Core Grammar

    This section is non-normative.

    Note that the point of this spec is to match reality; changes from CSS2.1's Core Grammar are nearly always because the Core Grammar specified something that doesn't match actual browser behavior, or left something unspecified. If some detail doesn't match browsers, please let me know as it's almost certainly unintentional.

    1. The handling of some miscellaneous "special" tokens (like an unmatched } token) showing up in various places in the grammar has been specified, matching some reasonable behavior shown by at least one browser. Previously, stylesheets with those tokens in those places just didn't match the stylesheet grammar at all, so their handling was totally undefined. Specifically:

    6. The An+B microsyntax

    Several things in CSS, such as the ‘:nth-child()’ pseudoclass, need to indicate indexes in a list. The An+B microsyntax is useful for this, allowing an author to easily indicate single elements or all elements at regularly-spaced intervals in a list.

    The An+B notation defines an integer step (A) and offset (B), and represents the An+Bth elements in a list, for every positive integer or zero value of n, with the first element in the list having index 1 (not 0).

    For values of A and B greater than 0, this effectively divides the list into groups of A elements (the last group taking the remainder), and selects the Bth element of each group.

    The An+B notation also accepts the ‘even’ and ‘odd’ keywords, which have the same meaning as ‘2n’ and ‘2n+1’, respectively.

    Examples:

    2n+0   /* represents all of the even elements in the list */
    even   /* same */
    4n+1   /* represents the 1st, 5th, 9th, 13th, etc. elements in the list */

    The values of A and B can be negative, but only the positive results of An+B, for n ≥ 0, are used.

    Example:

    -n+6   /* represents the first 6 elements of the list */

    If both A and B are 0, the pseudo-class represents no element in the list.
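The semantics above reduce to a simple membership test: a 1-based index matches when it equals An+B for some integer n ≥ 0. A sketch:

```python
def an_plus_b_matches(index, a, b):
    """True iff the 1-based index equals a*n + b for some integer n >= 0."""
    if a == 0:
        return index == b
    n, rem = divmod(index - b, a)
    return rem == 0 and n >= 0
```

For example, A = −1, B = 6 matches exactly the first six indexes, and A = B = 0 matches nothing.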

    6.1. Informal Syntax Description

    This section is non-normative.

    When A is 0, the An part may be omitted (unless the B part is already omitted). When An is not included and B is non-negative, the ‘+’ sign before B (when allowed) may also be omitted. In this case the syntax simplifies to just B.

    Examples:

    0n+5   /* represents the 5th element in the list */
    5      /* same */

    When A is 1 or -1, the 1 may be omitted from the rule.

    Examples:

    The following notations are therefore equivalent:

    1n+0   /* represents all elements in the list */
    n+0    /* same */
    n      /* same */

    If B is 0, then every Ath element is picked. In such a case, the +B (or -B) part may be omitted unless the A part is already omitted.

    Examples:

    2n+0   /* represents every even element in the list */
    2n     /* same */

    Whitespace is permitted on either side of the ‘+’ or ‘-’ that separates the An and B parts when both are present.

    Valid Examples with white space:

    3n + 1
    +3n - 2
    -n+ 6
    +6

    Invalid Examples with white space:

    3 n
    + 2n
    + 2

    6.2. The <an+b> type

    The An+B notation was originally defined using a slightly different tokenizer than the rest of CSS, resulting in a somewhat odd definition when expressed in terms of CSS tokens. This section describes how to recognize the An+B notation in terms of CSS tokens (thus defining the <an+b> type for CSS grammar purposes), and how to interpret the CSS tokens to obtain values for A and B.

    The <an+b> type is defined as:

    <an+b> =
      odd | even |
      <integer> |
    
      <n-dimension> |
      '+'? n |
      -n |
    
      <ndashdigit-dimension> |
      '+'? <ndashdigit-ident> |
      <dashndashdigit-ident> |
    
      <n-dimension> <signed-integer> |
      '+'? n <signed-integer> |
      -n <signed-integer> |
    
      <n-dimension> ['+' | '-'] <signless-integer> |
      '+'? n ['+' | '-'] <signless-integer> |
      -n ['+' | '-'] <signless-integer>

    where:

    The clauses of the production are interpreted as follows:

    odd
    A is 2, B is 1.
    even
    A is 2, B is 0.
    <integer>
    A is 0, B is the integer.
    <n-dimension>
    '+'? n
    -n
    A is the dimension's value, 1, or -1, respectively. B is 0.
    <ndashdigit-dimension>
    '+'? <ndashdigit-ident>
    <dashndashdigit-ident>
    A is the dimension's value, 1, or -1, respectively. B is the dimension's unit or ident's representation, as appropriate, with the first two characters removed and the remainder interpreted as a base-10 number.
    <n-dimension> <signed-integer>
    '+'? n <signed-integer>
    -n <signed-integer>
    A is the dimension's value, 1, or -1, respectively. B is the integer.
    <n-dimension> ['+' | '-'] <signless-integer>
    '+'? n ['+' | '-'] <signless-integer>
    -n ['+' | '-'] <signless-integer>
    A is the dimension's value, 1, or -1, respectively. B is the integer. If a ‘-’ was provided between the two, B is instead the negation of the integer.

    7. Serialization

    This specification does not define how to serialize CSS in general, leaving that task to the CSSOM and individual feature specifications. However, there is one important facet that must be specified here regarding comments, to ensure accurate "round-tripping" of data from text to CSS objects and back.

    The tokenizer described in this specification does not produce tokens for comments, or otherwise preserve them in any way. Implementations may preserve the contents of comments and their location in the token stream. If they do, this preserved information must have no effect on the parsing step, but must be serialized in its position as "/*" followed by its contents followed by "*/".

    If the implementation does not preserve comments, it must insert the text "/**/" between the serialization of adjacent tokens when the two tokens are of the following pairs:

    The preceding pairs of tokens can only be adjacent due to comments in the original text, so the above rule reinserts the minimum number of comments into the serialized text to ensure an accurate round-trip. (Roughly. The delim token rules are slightly too powerful, for simplicity.)

    7.1. Serializing <an+b>

    Define this.

    8. Conformance

    8.1. Document conventions

    Conformance requirements are expressed with a combination of descriptive assertions and RFC 2119 terminology. The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in the normative parts of this document are to be interpreted as described in RFC 2119. However, for readability, these words do not appear in all uppercase letters in this specification.

    All of the text of this specification is normative except sections explicitly marked as non-normative, examples, and notes. [RFC2119]

    Examples in this specification are introduced with the words “for example” or are set apart from the normative text with class="example", like this:

    This is an example of an informative example.

    Informative notes begin with the word “Note” and are set apart from the normative text with class="note", like this:

    Note, this is an informative note.

    8.2. Conformance classes

    Conformance to CSS Syntax Module Level 3 is defined for three conformance classes:

    style sheet
    A CSS style sheet.
    renderer
    A UA that interprets the semantics of a style sheet and renders documents that use them.
    authoring tool
    A UA that writes a style sheet.

    A style sheet is conformant to CSS Syntax Module Level 3 if it is syntactically valid according to this module.

    A renderer is conformant to CSS Syntax Module Level 3 if it parses a stylesheet according to this module.

    An authoring tool is conformant to CSS Syntax Module Level 3 if it writes style sheets that are syntactically valid according to this module.

    Acknowledgments

    Thanks for feedback and contributions from David Baron, 呂康豪 (Kang-Hao Lu), and Simon Sapin.

    References

    Normative references

    [RFC2119]
    S. Bradner. Key words for use in RFCs to Indicate Requirement Levels. Internet RFC 2119. URL: http://www.ietf.org/rfc/rfc2119.txt

    Other references

    [SELECT]
    Tantek Çelik; et al. Selectors Level 3. 29 September 2011. W3C Recommendation. URL: http://www.w3.org/TR/2011/REC-css3-selectors-20110929/

    Index

    Property index
