- From: Ivan Herman <ivan@w3.org>
- Date: Sun, 16 Oct 2011 11:01:06 +0200
- To: KANZAKI Masahide <mkanzaki@gmail.com>
- Cc: Jeni Tennison <jeni@jenitennison.com>, Gregg Kellogg <gregg@kellogg-assoc.com>, Martin Hepp <martin.hepp@ebusiness-unibw.org>, public-html-data-tf@w3.org
- Message-Id: <FEC039BD-B3A7-4C08-9590-AE0958DCE2D6@w3.org>
On Oct 16, 2011, at 09:37, KANZAKI Masahide wrote:

> [snip]
>
>> But what if, rather than assuming a generic parse followed by some post-processing, we explicitly left it up to implementations of the algorithm? We could say that each of the various things where knowledge of the vocabulary would make you do things differently is implementation-defined within particular constraints. So we would have something like:
>>
>> * the _property_URI_creation_method_ is one of X, Y or Z (TBD) and is implementation-defined
>> * the _datatype_ for a literal value is implementation-defined
>> * the _multi-value_mapping_ is either _to_a_collection_ or _to_multiple_statements_ and is implementation-defined
>>
>> Implementations themselves would then be free to use whatever method was suitable for them to determine how to set each of these, which might include some combination of:
>>
>> * having hard-coded knowledge of particular vocabularies
>> * looking up what to do from a registry
>> * working out what to do based on a schema or ontology
>> * having some fixed defaults that will work in 99% of cases
>>
>> This would provide enough of a framework that individual implementations wouldn't each have to reinvent how to do everything, while still allowing vocabulary knowledge to be inserted early in the process, and it would guarantee (by making these choices implementation-defined rather than implementation-determined) that the users of a tool are informed about the tool's behaviour.
>>
>> What do you think? Would this work as an approach?
>
> So, implementation choice (or a compatibility parsing method, as Martin suggested) sounds like a good starting point. I wonder, however, whether users would be happy if different tools generated different RDF from the same microdata. Maybe some sort of defaults or recommended methods would be useful.

Another possibility is that:

- implementations should implement both a "listed" and a "not listed" version of the algorithm, and
- there should be standard flags to direct the processor when performing the transformation (see the sketch below).

RDFa 1.1 already introduced some standard flags like that, though for other purposes (e.g., whether error reports should be added to the output graph, or whether vocabulary expansion should be performed for @vocab). This may work in this case, too.

Not ideal, because this is not under the control of the author, but better than a completely open situation...

Cheers

Ivan

> cheers,
>
> --
> @prefix : <http://www.kanzaki.com/ns/sig#> . <> :from [:name "KANZAKI Masahide"; :nick "masaka"; :email "mkanzaki@gmail.com"].

----
Ivan Herman, W3C Semantic Web Activity Lead
Home: http://www.w3.org/People/Ivan/
mobile: +31-641044153
PGP Key: http://www.ivan-herman.net/pgpkey.html
FOAF: http://www.ivan-herman.net/foaf.rdf
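To make the flag idea concrete, here is a minimal sketch, in Python, of how a microdata-to-RDF processor might expose the implementation-defined choices from Jeni's list as explicit processor options, in the spirit of the RDFa 1.1 flags Ivan mentions. Every name here (ProcessorOptions, emit_multi_value, the flag values and defaults) is a hypothetical illustration, not from any spec discussed in this thread; the concrete property-URI creation methods remain TBD above.

# Hypothetical sketch only; names, defaults, and flag values are illustrative.
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple, Union

Value = Union[str, List[str]]
Triple = Tuple[str, str, Value]

@dataclass
class ProcessorOptions:
    # Which _property_URI_creation_method_ to use; the thread leaves the
    # concrete methods (X, Y or Z) TBD, so this is just a placeholder flag.
    property_uri_method: str = "vocabulary-based"
    # _multi-value_mapping_: "collection" (one rdf:List) or "statements"
    # (one triple per value).
    multi_value_mapping: str = "statements"
    # Hook for deciding the _datatype_ of a literal; an implementation might
    # consult a registry, a schema/ontology, or hard-coded vocabulary
    # knowledge here. None means "plain literal".
    datatype_for: Callable[[str, str], Optional[str]] = lambda prop, value: None

def emit_multi_value(subject: str, predicate: str,
                     values: List[str], opts: ProcessorOptions) -> List[Triple]:
    """Map a repeated @itemprop according to the multi-value flag."""
    if opts.multi_value_mapping == "collection":
        # "Listed" version: a single triple whose object is an ordered list,
        # standing in for an rdf:List collection.
        return [(subject, predicate, list(values))]
    # "Not listed" version: one triple per value; order is not preserved.
    return [(subject, predicate, v) for v in values]

# The same microdata yields different RDF depending on the flag:
listed = ProcessorOptions(multi_value_mapping="collection")
plain = ProcessorOptions()  # defaults to "statements"
print(emit_multi_value("_:book", "schema:author", ["Alice", "Bob"], listed))
# [('_:book', 'schema:author', ['Alice', 'Bob'])]
print(emit_multi_value("_:book", "schema:author", ["Alice", "Bob"], plain))
# [('_:book', 'schema:author', 'Alice'), ('_:book', 'schema:author', 'Bob')]

Whether such flags belong in the specification or only in individual implementations is exactly the open question of the thread; the sketch only shows that the "listed"/"not listed" split is a one-flag difference for a processor.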