- From: Tab Atkins Jr. <jackalmage@gmail.com>
- Date: Tue, 29 Mar 2011 08:53:16 -0700
- To: Sam Ruby <rubys@intertwingly.net>
- Cc: HTML WG <public-html@w3.org>
On Tue, Mar 29, 2011 at 7:59 AM, Sam Ruby <rubys@intertwingly.net> wrote:
> === Arguments not considered:
>
> Following are either direct quotes or paraphrases of arguments which
> were put forward which were not considered.
>
> Running examples from the OpenGraph Protocol site through the
> facebook linter shows that removing the prefix declaration has no
> effect but changing it prevents any properties from being recognised.
> Code inspection of some of the other tools indicates that there are
> clients in Python, PHP, Ruby and Java that depend on literal matching
> of the string "og:".
>
> No change proposal was put forward suggesting that all usages be
> migrated to fixed prefixes. Nor was there any evidence put forward
> that fixes to these tools would break content. The fact that these
> tools have bugs is uncontested but that, in itself, does not help
> identify the proposal that draws the weakest objections.

...

> It would be important to know if Facebook's and Google's content
> consuming code could be made to work with prebound prefixes for
> compatibility with legacy content that uses prefixes.
>
> We only consider proposals which actually were put forward. Neither
> change proposal proposed standardizing Facebook's or Google's prefixes.

I object to these two arguments not being considered, as they are directly relevant to the "we already have legacy content using prefixes" argument, which was considered to be the strongest argument and thus in need of disproving.

If a large fraction of the legacy processing tools do *not* recognize the prefix mechanism, but instead rely on fixed prefixes (that is, just specially-qualified names), then that is strong evidence that prefixes are too complicated, as multiple tools get them completely wrong.

Further, if, as a result of multiple tools recognizing specially-qualified names instead of names with namespace prefixes, a significant percentage of authored content contains "invalid" RDFa with wrong or missing prefix declarations, that too is strong evidence that prefixes are too complicated. It is, moreover, strong evidence that the "legacy content" does *not* actually use the prefix mechanism, but instead uses another mechanism to specially-qualify the names (generally, fixed prefixes), one which is invalid according to RDFa and which would *fail* to be processed by conformant RDFa processors.

That last bit is very important. If we must have RDFa, then I strongly object to any decision which pushes us down the RSS path, where no processor is conformant and the successful processors must use expensive reverse engineering to find the union of non-conformant behaviors that successfully processes an appropriately large fraction of legacy content. We're already walking this road, but we can still change course with appropriate action now.

~TJ
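
P.S. To make the distinction concrete, here is a rough sketch of the two behaviors in Python. It is mine, not code from any of the tools mentioned above, and the property names and namespace IRI are only illustrative:

    # Legacy behavior reported for several consumers: the property name is
    # matched against the literal string "og:", and any prefix declaration
    # (or the lack of one) on the page is ignored entirely.
    def extract_og_literal(meta_tags):
        return {name: content
                for name, content in meta_tags
                if name.startswith("og:")}

    # Prefix-mechanism behavior: "og:" only means something if the page
    # binds it to an IRI; with a wrong or missing declaration, nothing
    # is recognized.
    def extract_og_prefixed(meta_tags, prefix_declarations):
        base = prefix_declarations.get("og")  # e.g. "http://ogp.me/ns#"
        if base is None:
            return {}
        return {base + name[len("og:"):]: content
                for name, content in meta_tags
                if name.startswith("og:")}

    # A page that forgot (or mangled) its prefix declaration.
    page = [("og:title", "Example"), ("og:type", "website")]
    print(extract_og_literal(page))        # both properties come out
    print(extract_og_prefixed(page, {}))   # {} -- a conformant consumer sees nothing

Content with a missing or mangled declaration works fine with the first function and silently yields nothing with the second, which is exactly the gap I'm describing.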
Received on Tuesday, 29 March 2011 15:54:08 UTC