Re: XHTML Invalidity / WML2 / New XHTML 1.1 Attribute

Dan,

Twenty years ago on www-html you taught me how to extend
HTML according to the current state of the art. I thought
it would be fun to review the situation twenty years on.

Email quotes below are from the original post here:

https://lists.w3.org/Archives/Public/www-html/2000Aug/0052.html

> I think all of this XML/RDF/schema complexity is needed
> to keep the whole marketplace of technologies stable --
> to facilitate innovation without putting stable
> technologies at risk.

The argument was that XML offers a net advantage to authors
of HTML. W3C HTML 5.2 still has an XHTML concrete syntax:

https://www.w3.org/TR/2017/REC-html52-20171214/introduction.html#html-vs-xhtml

And WHATWG HTML has what is called an XML syntax:

    The XML syntax for HTML was formerly referred to as
    "XHTML", but this specification does not use that term
    (among other reasons, because no such term is used for
    the HTML syntaxes of MathML and SVG).

https://html.spec.whatwg.org/multipage/xhtml.html#the-xhtml-syntax

According to the Memorandum of Understanding Between W3C
and WHATWG, the two divergent specifications will merge,
which is likely to mean the W3C-supported deprecation of
the term "XHTML":

https://www.w3.org/2019/04/WHATWG-W3C-MOU.html

That XML offers a net advantage to authors of HTML was not
borne out by usage. Statistics from Common Crawl show that
in 2020 fewer than 0.04% of pages are served as XHTML,
i.e. fewer than 1 in 2500.

https://commoncrawl.github.io/cc-crawl-statistics/plots/mimetypes

The Apache Tika table shows that 20% of text/html pages
contain XHTML magic strings, though whether those pages
are well-formed XML was not checked.

https://tika.apache.org/1.24.1/detection.html

Despite the unpopularity of XHTML, the RDF and schema
components of the "XML/RDF/schema complexity" are still
used. Popular ways to extend HTML with metadata include
W3C's RDFa Lite and WHATWG's Microdata. Only the former is
based on RDF; both use schemata. JSON-LD is also sometimes
embedded into HTML.

> If/when folks get comfortable with schemas and
> namespaces, we can drop the DTD gobbledygook

This was nearly true: <!doctype html> had to be preserved
in HTML5 to avoid activating quirks mode. Both RDFa Lite
and Microdata offer namespacing, but not using xmlns.

In our 2000 discussion there was much focus on proper
namespacing of syntactic extensions, and on grounding them
in the web. Deployment of semantics, mediated via RDF, took
a back seat in the discussion, and deploying behaviour was
barely mentioned at all.

> I'm asking for [...] some faith that XML Schemas are
> going to become mature for validation purposes.

This didn't happen.

The greatest advances in the field were RELAX and RELAX
NG. RDF 1.1 for example contains an informative RELAX NG
schema, but no XML Schema. James Clark's nxml-mode was
particularly impressive.

Due to the relative unpopularity of XHTML, this method of
syntactically extending HTML has arguably become moot.

In 2020's HTML there exist data-* attributes and custom
elements, whose names must contain a hyphen, as methods of
syntactic extension. These methods are neither namespaced
nor grounded.
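
For illustration, here is a minimal sketch using both
methods at once; the element name greeting-card and the
attribute data-recipient are invented for this example,
registered nowhere:

<greeting-card
  data-recipient="Dan">Hello!</greeting-card>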

> Consider the beginner, somewhat like yourself, who asks,
> "can I have just one extra attribute over here?"  The
> HTML 4.0 answer was: no, not unless you're willing to
> join The HTML Gods and fly to all the WG meetings, or
> convince somebody else to do it for you.  Or unless you
> can implement a new browser and somehow get it deployed
> to a non-trivial user base.

In 2000 I suggested an @comment attribute.

In 2020, for the purposes of this email, let's consider an
extension which should be closer to the heart of most
programmers: annotating <code> with a language in the form
of a media type.

The code language could be used for:

* Automatic syntax highlighting, either by server-side
  delivered JS code or by browser extensions.

* Interactive editing, e.g. to provide lightweight IDE
  features when the code is contenteditable.

* Other interaction, such as spawning a compiler or
  interpreter widget in the page.

* Hinting to fragment downloaders what filename extension
  should be used. For example, imagine an extension to
  download a Python <pre><code> section, and being prompted
  to save as a .py file automatically.

WHATWG HTML mentions having already considered how to mark
up a code element's language, and suggests using a class
value pattern:

    There is no formal way to indicate the language of
    computer code being marked up. Authors who wish to mark
    code elements with the language used, e.g. so that
    syntax highlighting scripts can use the right rules,
    can use the class attribute, e.g. by adding a class
    prefixed with "language-" to the element.

https://html.spec.whatwg.org/multipage/text-level-semantics.html#the-code-element
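
Following this suggestion, a Python fragment would be
marked up as, e.g.:

<pre><code class="language-python">...</code></pre>

Note that the class value is a language name rather than a
media type, so our text/python idea does not fit the
pattern directly.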

As with data-* and custom elements on the syntactic side,
this metadata solution is neither namespaced nor grounded
in the web.

What do we have to do to make it a universal solution?

Since the pattern appears in the WHATWG specification, one
could argue that the language-* class namespace is reserved
by the specification and that collisions are no longer
possible; but the specification itself indicates that the
pattern is not formal.

In the wild, client-side JavaScript syntax highlighting
libraries seldom adhere to the WHATWG HTML class value
pattern:

<pre id="codePre" class="sh_js"> (SHJS)
<pre class="brush: js"> (SyntaxHighlighter)
<pre><code data-language="javascript"> (Rainbow)
<pre><code class="hljs javascript"> (highlight.js)
<pre><code class="lang-csharp"> (Prism.js)

None of these examples uses the WHATWG convention, though
Prism.js does also accept the language-* prefix.

The WHATWG suggests using this class value pattern, but
the same specification also provides Microdata, which could
potentially serve as a universal solution. We could also
standardise code types using RDFa Lite. We shall consider
Microdata and RDFa in that order. (A hidden significance to
the order will become apparent.)

This is Microdata using a code itemtype URL:

<pre
  itemscope
  itemtype="https://example.org/code">
<code
  itemprop="type"
  value="text/python">...</code>
</pre>

And this is Microdata using a code/type itemprop URL:

<pre
  itemscope>
<code
  itemprop="https://example.org/code/type"
  value="text/python">...</code>
</pre>

On converting Microdata to RDFa Lite, Manu Sporny (the
editor of RDFa Lite, CCed) writes:

    Over 99% of all Microdata markup in the wild can be
    expressed in RDFa Lite just as easily. This is a
    provable fact – replace all Microdata attributes with
    the equivalent RDFa Lite attributes

http://manu.sporny.org/2012/mythical-differences/

Unfortunately our tiny example falls into the estimated 1%,
because there is no analogue of Microdata's @value in RDFa
Lite. This is why we considered the Microdata example
first. It is possible that @value had not yet been added to
Microdata when Sporny wrote in 2012.

Instead, we could add a container to our RDFa Lite:

<div
  vocab="https://example.org/schema"
  typeof="Code">
<p>Consider this <code
  property="type">text/python</code>:</p>
<pre>
<code
  property="value">...</code>
</pre>
</div>

But this has forced us to use markup we would like to avoid.

Another alternative is to use interpretation properties.

https://www.w3.org/DesignIssues/InterpretationProperties.html

In which case the RDFa Lite becomes:

<div
  vocab="https://example.org/schema"
  typeof="Code">
<pre>
<code
  property="text/python">...</code>
</pre>
</div>

This compares well to the Microdata, at the cost of the
unbounded set of properties that must be defined in the
schema.

The situation is worse if we expand upon the idea of code
types hinting at a filename extension for download purposes
by adding the whole author-defined filename.

Imagine we want to indicate that our example Python code,
which is valid Python3, should be downloaded with the
filename "ellipsis.py" by default.

In Microdata this would mean using two itemprops and two
values on the same <code> element, which is impossible.

In the <pre> case, we could hoist one property/value pair
up to the <pre>:

<pre
  itemscope
  itemtype="https://example.org/code"
  itemprop="filename"
  value="ellipsis.py">
<code
  itemprop="type"
  value="text/python">...</code>
</pre>

Similar markup, not shown here, would serve for the
itemprop URL example. But imagine if we were to use <code>
in a <p> instead:

<p>In Python one can use <span
  itemscope
  itemtype="https://example.org/code"
  itemprop="filename"
  value="ellipsis.py"><code
  itemprop="type"
  value="text/python">...</code></span>
as a noop.</p>

Should we be using a <span>, or should it be <tt> for
greater analogy with <pre>? Why is <tt> considered
presentational but <pre> not so?

What if we add more properties? For example, the Unix
permissions of the downloaded file, or its mtime. Then we
have to add more layers to the <span><code> onion. The
verbosity becomes unbearable.

The situation is similar for RDFa Lite, except that it
incurs extra verbosity: all of these properties must be
expressed as readable text in the prose.

All of this contrasts sharply with the idiomatic and yet
concise solutions that the JS syntax highlighters have
chosen.

Another alternative is to invent markup. In the 2000
discussion, it was observed that Robin Cover had been
adding non-standard syntax directly to his HTML "for
years":

> the popular HTML implementations deal with these extra
> attributes just fine. But anybody who depends on them is
> taking a bit of a risk... W3C doesn't actually Recommend
> that you write such documents. Not yet...

The original Robin Cover approach would resemble:

<p>In Python one can use <code type="text/python"
filename="ellipsis.py">...</code> as a noop.</p>

The state of the art has improved here: data-* attributes
allow authors to avoid collisions with attributes not yet
added to HTML, and also make their values accessible via
the DOM.

<p>In Python one can use <code data-type="text/python"
data-filename="ellipsis.py">...</code> as a noop.</p>
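
Those values then surface through the standard dataset API;
a minimal sketch:

<script>
  // data-type and data-filename (the hypothetical
  // attributes in the example above) appear in the DOM as
  // dataset.type and dataset.filename respectively.
  const code = document.querySelector("code");
  console.log(code.dataset.type);     // "text/python"
  console.log(code.dataset.filename); // "ellipsis.py"
</script>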

But, like the class value pattern, and unlike RDFa and
Microdata, these markup extensions are neither namespaced
nor grounded in the web.

> the HTML modularization spec shows you how to add your
> own module and mix it in with the standard modules. I
> don't care for that approach, because it's limited in all
> the ways that linking two C modules are limited: one big
> unmanaged centralized namespace

XHTML Modularisation was extremely impressive work by
Murray Altheim. The structure, and the execution of the
idea, are works of art.

Despite this, XHTML 1.1, the main application of
Modularisation, did not go on to achieve significant
popularity, even in the wider domain of HTML with XML
syntax.

> The other way is XML Schemas. I demonstrated how this
> works: just write a little schema, stick it in the web,
> point to it from your document, and off you go.

Both potential solutions discussed in 2000, XHTML
Modularisation and XML Schema, depend on XHTML. Only the
latter was adequately namespaced and grounded. It also had
the advantage over Microdata and RDFa Lite of allowing
multiple extension attributes on one element whilst
remaining relatively concise.
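
For comparison, a sketch of how the XML Schema approach
would have put both extension attributes, fully namespaced,
on the one element; the example.org namespace is
hypothetical, and the markup is usable only in the XML
syntax:

<p>In Python one can use <code
  xmlns:ex="https://example.org/code#"
  ex:type="text/python"
  ex:filename="ellipsis.py">...</code> as a noop.</p>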

In short, Microdata and RDFa Lite are a syntactic step
backwards for extensibility, but were designed to fit the
wider context of lenient HTML parsing.

The step backwards has practical consequences. The W3C
Accessible Platform Architectures WG are working on a
vocabulary to enrich the accessibility of HTML. They
considered a range of approaches to add this vocabulary to
HTML, including Microdata and RDFa Lite:

https://github.com/w3c/personalization-semantics/wiki/Comparison-of-ways-to-use-vocabulary-in-content

But instead they chose to use data-* attributes.

https://www.w3.org/TR/personalization-semantics-1.0/#technology-comparison-summary

In this case even a W3C working group does not feel
comfortable using RDFa Lite. Their comparison document
lists many shortcomings, including:

    * Weak multi platform support. Not well bound to the
      DOM and can not support simple implementations such
      as CSS selectors or media queries.

    * Authors have a lot more attributes to fill in. This
      also adds to the [likelihood] of errors.

Universally namespaced attributes could easily be added to
HTML by analogy with Java package identifiers. They would
incur the significant limitations of DNS, but the impact is
limited since most namespaces already use the DNS.

Attributes could start with ns- to indicate that they are
namespaced, and then be followed with a reverse domain name
replacing dot with hyphen, and hyphen with two hyphens
(which works because dots can be neither contiguous nor
adjacent to hyphens). This would mean, for example, that
the APA WG's attributes:

data-aui-field
data-aui-moreinfo
data-aui-symbol
data-aui-importance

would become:

ns-org-w3-aui-field
ns-org-w3-aui-moreinfo
ns-org-w3-aui-symbol
ns-org-w3-aui-importance

The browser implementation of ns-* would involve simply
treating them as though they were data-* attributes.
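
A sketch of the proposed encoding as a script; the helper
toNsAttribute is of course hypothetical:

<script>
  // Encode a reverse domain name into the proposed ns-*
  // form: double existing hyphens first, then turn dots
  // into single hyphens.
  function toNsAttribute(reverseDomain, name) {
    const encoded = reverseDomain
      .replace(/-/g, "--")
      .replace(/\./g, "-");
    return "ns-" + encoded + "-" + name;
  }
  console.log(toNsAttribute("org.w3.aui", "field"));
  // -> "ns-org-w3-aui-field"
</script>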

> So I think XHTML, XML, namespaces, and schemas are a good
> mix... they make the easy things easy and the hard things
> possible.

In retrospect this may have been an underestimation of the
demand for lenient HTML parsing. Perhaps browsers should
have presented the option to render broken XHTML as HTML,
similar to manually allowing a broken HTTPS certificate, to
make XHTML more popular.

> > 2) How do you suggest that we employ RDF into XHTML 1.0
> > and still have it validate as a document?

There have been many solutions proposed since I asked this
twenty years ago. RDFa Lite and Microdata can be converted
to RDF. For direct embedding, JSON-LD in <script> is a
popular approach in 2020; one that I documented and perhaps
even devised in 2002:

http://infomesh.net/2002/rdfinhtml/#objectOrScript
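
Applied to our running example, the JSON-LD embedding might
look like this, the example.org vocabulary again being
hypothetical:

<script type="application/ld+json">
{
  "@context": {"@vocab": "https://example.org/schema#"},
  "@type": "Code",
  "type": "text/python",
  "filename": "ellipsis.py"
}
</script>

Though note that relating this data back to a particular
<code> element in the prose would require an extra
identifier; the attribute-based approaches get that
association for free.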

Some tools support all three:

* "JSON-LD, RDFa and Microdata are all supported."
  https://seoscout.com/tools/schema-generator

* "A PHP library to read Microdata, RDFa Lite & JSON-LD
  structured data in HTML pages."
  https://github.com/brick/structured-data

HyperRDF and its successor, GRDDL, represent another
memorable, if computationally expensive, approach to the
problem:

https://www.w3.org/2000/07/hs78/
https://www.w3.org/TR/grddl/

In 2009 Toby Inkster proposed embedding Turtle in
<script>, analogously to the later JSON-LD:

https://lists.w3.org/Archives/Public/semantic-web/2009Aug/0057.html
https://www.w3.org/wiki/N3inHTML
https://metacpan.org/pod/HTML::Embedded::Turtle
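
In that style, our example might be expressed as the
following Turtle, once more with a hypothetical vocabulary:

<script type="text/turtle">
@prefix ex: <https://example.org/schema#> .

[] a ex:Code ;
   ex:type "text/python" ;
   ex:filename "ellipsis.py" .
</script>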

> > What is the point of having RDF/Schemas to add one
> > attribute?

Subsequently, data-* allowed adding attributes without a
schema, but without adequate namespacing. The ns-* prefix
would be a simple solution.

In practice, data-* attributes are used far more widely
than anything involving XML validation, and therefore ns-*
attributes would be positioned to attain similar success.

> <schema xmlns='http://www.w3.org/1999/XMLSchema'
>   targetNamespace='http://www.w3.org/2000/08/comment#'>
> <attribute name="comment"/>
> </schema>

$ curl -Is http://www.w3.org/2000/08/comment | head -n 1
HTTP/1.1 404 Not Found

The Web Archive indicates that no document was ever
published there:

https://web.archive.org/web/20000918160646/www.w3.org/2000/08/comment

But there is an alternative version:

https://www.w3.org/XML/2000/04schema-hacking/comment

The W3C's persistence policy in general is under threat
from responses to the attacks described in RFC 7258.

https://tools.ietf.org/html/rfc7258

The XHTML namespace redirects to HTTPS, and therefore
depends on HSTS for its integrity.

$ curl -Is http://www.w3.org/1999/xhtml | grep -i Location
location: https://www.w3.org/1999/xhtml/

Architecturally, the redirect is unnecessary.

    The HTTP protocol can and by default is upgraded to use
    TLS without having to use a different URI prefix. The
    https: prefix could even in fact be phased out

https://www.w3.org/DesignIssues/Security-NotTheS.html

HSTS also enables fingerprinting attacks. This is mitigated
by centralisation, in the form of the HSTS preload list
maintained by the browser vendors:

https://hstspreload.org/

This is not unlike the old HOSTS.TXT mechanism:

https://en.wikipedia.org/wiki/Hosts_(file)

The Domain Name System (DNS) is not a suitable mechanism
for persistent identifiers for many publishers. Solutions
proposed since 2000 include RFC 4151 (tag URIs) and IPFS.

https://tools.ietf.org/html/rfc4151
https://en.wikipedia.org/wiki/InterPlanetary_File_System

Another popular solution compatible with DNS is to use a
centralised provider with an institutional commitment to
persistence, such as PURL or W3ID.

https://purl.org/
https://w3id.org/

The former was once maintained by OCLC, but maintainership
passed to the Internet Archive, after a long period of
unavailability, in 2016.

https://blog.archive.org/2016/09/27/persistent-url-service-purl-org-now-run-by-the-internet-archive/

The latter is the work of the W3C Permanent Identifier CG.

https://www.w3.org/community/perma-id/

In the case above, security takes precedence over
persistence. But security on the modern web is of paramount
and still increasing importance. This has significant
ramifications
for extensions to HTML. It should not be possible to extend
HTML in such a way that compromises the security or privacy
of the browser user.

Though only a single paragraph of this reply has been
devoted to security, it is probably the single most
important issue for HTML extensibility in 2020.

> RDF is really a whole other story, and I'm not going to
> go into it in depth just now.

(We then spent many years working on RDF together!)

> If you're saying that some languages can't be mixed with
> others, that'll be true in the general case. But for the
> languages W3C is developing, and a large part of the rest
> of the marketplace of markup languages, I don't think it
> is.

The RDF solution to this, expressing data as graphs of URIs
which can then be merged, was satisfying. Unfortunately
blank nodes made graph isomorphism difficult, and no
reliable algorithm was published until URGNA2012 in 2012.

http://aidanhogan.com/docs/rdf-canonicalisation.pdf

This meant that nobody could properly test their RDF tools
for the first decade of RDF. Some libraries in that period
used my heuristic canonicalisation algorithm from 2004.

http://www.w3.org/2001/sw/DataAccess/proto-tests/tools/rdfdiff.py

In all of the main technologies mentioned herein - XML,
RDF, XHTML, XHTML Modularisation, XML Schema, RDFa, and
Microdata - unchecked complexity and legacy clutter have
arguably proven significant obstacles to their adoption.

> you'll see that I'm able to validate all sorts of
> combinations: HTML with SMIL, HTML with SVG, HTML with a
> new comment thingy, HTML with MathML, etc. And all the
> other combinations are just a matter of time to work out
> the details of the example, not new technology.

All of these examples were of one parent language, XHTML,
embedding mutually exclusive self contained child
languages, SMIL, SVG, util:comment, and MathML. The
difficulty emerges when languages interact with one
another, not only syntactically but behaviourally.

This is a problem even with the modern solutions to
metadata enriched HTML. The RDFa Lite and Microdata
specifications cover very similar ground. Why are there two
competing standards? Manu Sporny explains:

    The reason both exist is a very long story involving
    politics, egos, and a fair amount of [dysfunctionality]
    between various standards groups -- all of which
    doesn't have any impact on the actual functionality of
    either language. The bottom line is that we now have
    two languages that do almost exactly the same thing.

http://manu.sporny.org/2012/mythical-differences/

This prompted somebody to ask on Stack Overflow which of
the two languages they should use. One answer indicated
that it was possible to use both languages side by
side. That advice was then questioned:

    I'm not certain if unor's suggestion to use both
    Microdata and RDFa is a good idea. If you use Google's
    Structured Data Testing Tool (or other similar tools)
    on his example it shows duplicate data which seems to
    imply that the Google bot would pick up two people
    named John Doe on the webpage instead of one which was
    the original intention.

https://stackoverflow.com/a/32632121

> My experience leads me to believe that parts of XML are
> solid [architectural] infrastructure for the long term:
> tags and attributes, and namespaces. But other parts of
> it are there to manage the transition from the existing
> software base: DTDs, entities, processing instructions,
> and I don't recommend investing [in] them unless you are
> constrained by existing software somehow.

This XML subset was codified by the W3C MicroXML CG in 2012:

https://dvcs.w3.org/hg/microxml/raw-file/tip/spec/microxml.html

One of the editors of the specification, John Cowan, noted
yesterday that it was inspired by Monastic SGML by W. Eliot
Kimber and, later, Simon St. Laurent.

> Note that on the same day we Recommended XML, we released
> a Note that paints the way forward for an extensible,
> self-describing web of languages:
>
>     http://www.w3.org/TR/1998/REC-xml-19980210
>
>     http://www.w3.org/TR/1998/NOTE-webarch-extlang-19980210

On this, John Sowa perceptively observed:

    The "layer cake" diagrams [...] show how the Semantic
    Web evolved from the proposal in 2000 to a widely used
    slide in 2001 and the final report in 2005. In 2000,
    the yellow arrow for the "unifying language for
    classical logic" dominates the diagram. In 2001, the
    box labeled "Logic" has shrunk. In 2005, the box for
    the unifying logic is smaller than the logics it's
    supposed to unify.

http://jfsowa.com/ikl/

The 2000 proposal had suggested SWeLL:

    The proposed project is to utilize and demonstrate the
    great power of adding, on top of the RDF model
    (modified as necessary) the power of KR systems. We
    refer to this augmented language as the Semantic Web
    Logic Language, or SWeLL.

https://www.w3.org/2000/01/sw/DevelopmentProposal

Though first-order and even higher-order theories were
considered in the early days of RDF, ontologies for the
"extensible, self-describing web of languages" were
eventually standardised using Description Logics (DLs),
which are less expressive but decidable.

https://en.wikipedia.org/wiki/Description_logic
https://www.w3.org/TR/owl2-overview/

The algorithmic complexity of DLs can also be tuned.

http://www.cs.man.ac.uk/~ezolin/dl/

Remnants of the older approach, though more oriented
towards the impurity of Prolog than the purity of FOL or
HOL, survived through CWM to the outstanding EYE reasoner
by Jos de Roo.

https://github.com/josd/eye
https://josd.github.io/Papers/EYE.pdf

> Review and endorsement is a great thing, but W3C can and
> should do only so much of it. I expect peer reviewed
> journals of all sizes and shapes to organize around
> good/bad schemas and schema techniques and practices.

This was good advice, but very few resources of this
nature have materialised since. The Semantic Web Agreement
Group was one such attempt. Another was the W3C ESW
wiki. The major findings of both are arguably not well
documented.

> Geocities.com will register your schema too, if all you
> want to do is put it in the web ;-)

GeoCities was shut down in 2009, though pages could still
be accessed as late as 2014.

https://en.wikipedia.org/wiki/Yahoo!_GeoCities#Closure

> I'm not a fan of regulation, myself.

Dan and I discussed schema design techniques for many
years, and the issues involved were often subtle but
profound.

Though I no longer support centralised regulation for
schemata, I do feel that as they are designed to be part of
computational systems they should be subject to the same
standards of professionalism that we expect in general.

Peer review, as mentioned above, could help, but techniques
such as fuzzing and static analysis are playing increasing
roles elsewhere. As early as 2001 I wrote a tool that did
some very simple RDF schema linting:

http://infomesh.net/2001/05/rdflint/

And far more complex techniques could be brought to bear on
the problem, if they haven't already.

> > this all supports the text/xhtml MIME type suggestion.
>
> No, I don't see how it does.

The argument here, although very poorly expressed, was that
XHTML could not be served as XML because of the XHTML 1.0
user agent conformance criteria.

https://www.w3.org/TR/xhtml1/#uaconf

And indeed application/xhtml+xml was subsequently
registered by the HTML WG as RFC 3236:

https://www.ietf.org/rfc/rfc3236.txt

It had first been mentioned on www-html just five months
after our discussion, in December 2000:

https://lists.w3.org/Archives/Public/www-html/2000Dec/0167.html

Unfortunately there is no record of any preceding public
discussion.

> > At the moment, I think that the util:comment is the
> > best way forward.[ ]I challenge people to think up a
> > better idea, and prove it works.

Twenty years on, we still don't really have a better
solution in terms of the pure technology.

I'm not sure that I would have been thrilled in 2000 even
to write ns-org-w3-2000-comment or somesuch instead of
util:comment or just comment, let alone the amount of
boilerplate required by RDFa Lite and Microdata.

> given that modularization and schemas are still not 100%
> cooked, I think your logo "accurately reflects the STATUS
> associated with the W3C products."

This was the charitable response to my posting a parody
"Valid XHTML" logo with the "HT" crossed out. Unfortunately
the URL did not persist, and was not saved by the Web
Archive.

Though XHTML Modularisation and XML Schema are by now about
as cooked as they will ever be, parody is protected in the
law of the W3C's jurisdiction.

    Parodies are protected under the fair use doctrine of
    United States copyright law, but the defense is more
    successful if the usage of an existing copyrighted work
    is transformative in nature, such as being a critique
    or commentary upon it.

https://en.wikipedia.org/wiki/Parody#United_States

And so in the interests of historical re-creation I
present the following base64-encoded PNG:

iVBORw0KGgoAAAANSUhEUgAAAFgAAAAfCAMAAABUFvrSAAAABGdBTUEAALGPC/xh
BQAAAAFzUkdCAK7OHOkAAABgUExURUdwTN2pVPC3XE5KPZkBAQQCAQNbnf7+/v7L
ZczMzE5OTrOfZ7SzsyVyq+wfIkIuGf/sxmlfUJyPcKHC27iSSiIXD3GkyNrm775I
KLumcu14QnRbLplzO1GPvMrArpMkHXVrIlQAAAABdFJOUwBA5thmAAACZ0lEQVRI
x+WW65KjIBCFByI3N8qqoA5gzfu/5XY36BiTVDKZ/NtTCSKWH+2hG/0wb9f5g2TU
Y/3Vz+uz7/4TMOd8f/J7MGOcmtAGDkf4MRfaltM4tHjxJXASUTNh9Vy1bbCCe2+h
VzEB49oKoTlcPIKHZlTYDkSRskEt0O07o87FCiucFwwmmFsABoi2nYPjQggO9Nvg
STYIphYOWTBXDTcYazIYEQkjtwnRIJtw1IoUvb0NVo2cKFRsF7lME01yFr4/eTGV
xYsQG4KF8BVyZ5/B0Yt4DzxifIAf1kmoW5/gtOuLFUx4tDOJFIoV1cwRB0bwe2CD
AU5SLsWW7Io3+3QDg8FmnTDaL880B0NCwPWMURPYO+cIXFVbVqALaG4Ofsqm+H1W
RDAY7iUcBIimWJiC5TgJjOIA/qqITOAFHn2RjTQK/yMtnar3YI5ZyiGPA9ufcEY1
wimPGabyJ3KRbHIqLEo2gxwnjHppJHrRv1TSc5VVCkRKoBIb8xfNGJR5BbxxC7iR
TXEjV0nOk3tg/gS3gMeyctiq72rZ7RVQti7fjN144PErbgGvuZadIPB0CY52BUOC
nNb+xmNH7roJrdUxqGGhJ1hMJndmq7wC8x6qJR6fn23ceLG7jZS62KK9Ay5ehyOd
OIKxxNYyI4XMC4WbLrfNgTw1WIEN7kHQMb7uvFhL+lHEq9xxPx63dhob2keVqb09
qwtwAuBJsHTl8Sr24zcIFqzH2tUOe/xmloHTPwVzh9K4zWiW3DGTw8p9+1s6FO77
X/+h5PL7vytefEu/8sFyfqw/z6vfwPUTOt3T9ZW6tsT9ByLiXV2qZ2hoAAAAAElF
TkSuQmCC

> --
> Dan Connolly, W3C http://www.w3.org/People/Connolly/

--sbp
