From: Mark Birbeck <mark.birbeck@webbackplane.com>
Date: Thu, 7 Oct 2010 11:52:48 +0100
To: Ivan Herman <ivan@w3.org>
Cc: RDFa Working Group WG <public-rdfa-wg@w3.org>
Hi Ivan,

> I can see the merit of this approach, just adding some more thoughts to the
> discussion, though
>
> - you refer to the restriction of the automatic procedure to <meta>. I can see the
> value of that. But, if we do that, don't we reduce the dangers down to a level that
> we can simply go with the original approach (ie, no duplication of triples) but with
> that restriction?

Yes. Certainly if we go the way of converting a literal to a URI then the same argument applies -- that we should consider restricting the behaviour to the <meta> element.

However, as I said, this has the weakness that we've now changed the data, rather than augmenting it.

> - I think we said last time that we would restrict the mechanism to those uri
> schemes that are officially registered by the IETF. Your proposal would restrict
> it to http scheme only. I think that is way too restrictive for the user community
> we have in mind, which would use, for example, https, ftp, or mailto fairly
> frequently, too...

I'm not completely wedded to this idea, but I think it's worth being absolutely certain that we want to be this lax.

My understanding of the use-case described was not 'hey everyone...let's just make literals and URIs the same'; rather, the motivation was that authors wanting to use something like OGP might accidentally use @property/@content instead of @rel/@resource/@href (I've sketched this mix-up in a P.S. below).

So if that is the use-case, there is an argument for looking at the scenarios in which these errors are most likely to arise, and those would seem to be references to web-sites.

Like I say, I'm not saying we shouldn't allow other protocols, but it sounds to me like we might be going beyond what is being asked of us to meet a particular authoring scenario, and that always brings with it its own problems.

Regards,

Mark
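P.S. To make the mix-up concrete, here is a rough sketch of the markup I have in mind (og:url is just an illustrative OGP-style property, and I'm assuming the og: prefix is declared on an ancestor element):

    <!-- What the author typically writes: @property/@content,
         which in RDFa produces a plain literal as the object -->
    <meta property="og:url" content="http://example.com/page" />

    <!-- What they would need for the object to be a URI:
         @rel with @href (or @resource) -->
    <link rel="og:url" href="http://example.com/page" />

The two lines look interchangeable to an author following an OGP-style recipe, but they yield different kinds of objects in the resulting triples, which is exactly why the error is so easy to make.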
Received on Thursday, 7 October 2010 10:53:47 UTC