Re: Last Call Response to ISSUE-73: RDFa Profile management

Shane,

As I said, I do not have very strong feelings about this and can go whichever way the group decides... Manu, let us make a final decision on this ASAP. (The current, non-final default profiles reflect my view, but that is easy to change...)

Ivan
On Feb 28, 2011, at 16:44, Shane McCarron wrote:

> 
> 
> On 2/28/2011 8:47 AM, Ivan Herman wrote:
>> On Feb 28, 2011, at 15:25 , Shane McCarron wrote:
>> 
>>> Well... we have no requirement that every host language default profile be a superset of the RDFa+XML default profile.  And indeed I can easily imagine a host language (ShameML) where I do not want the terms defined in that profile, or do not want the prefixes defined in that profile.  Why would we have such a requirement?
>> In my mind, the intention of the RDFa+XML default profile is to include the prefixes of the most widely used vocabularies in RDF. So I would turn things around: why would ShameML _not_ want to refer to those prefixes? What harm does it do? (Note that I agree with you on terms; I do not think the default profile should include many terms, if any.)
>> 
>> It is obviously a source of error if we have to repeat the content in two different profiles.
> 
> It is a *potential* source of divergence.  It is a *certain* source of flexibility.
> 
>>> Second, even if we DID have such a requirement, it would certainly be more efficient to just require that the data be included in the host language profiles than to have each language processor read and process two profiles every time it parses a document.  Wouldn't it?
>>> 
>> If a processor caches correctly, this does not seem to make a huge difference.
> 
> I actually wasn't talking just about retrieval; I was also talking about the merge.  I am forced to process a first set of triples from the XML+RDFa default, then process a second set from the host language default (if any) and merge them in, and only then can I finally process my document.  And I have to do this every time?  I guess my implementation could cache the merged collection... but that would be an optimization - not the processing model required by the specification.  It just seems like an extra step we are requiring be taken each and every time a text/html document is processed.  And there are a lot of those.
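> For concreteness, roughly the per-document work I am describing, as a Python sketch (the profile URLs and the fetch_profile helper are placeholders for illustration, not taken from the spec):
>
>     # Placeholder URLs - the real default profile locations are defined by the specs.
>     CORE_PROFILE = "http://example.org/profiles/rdfa-core-1.1"
>     HTML_PROFILE = "http://example.org/profiles/html-rdfa-1.1"
>
>     def fetch_profile(url):
>         """Stub: a real processor would dereference the profile document,
>         parse it, and extract its prefix and term mappings."""
>         return {}, {}
>
>     _merged_cache = {}
>
>     def default_mappings(media_type):
>         """Load the core default profile, then the host language default
>         profile (if any), and merge the two - the two-step model proposed."""
>         if media_type in _merged_cache:
>             return _merged_cache[media_type]    # the caching optimization mentioned above
>         prefixes, terms = fetch_profile(CORE_PROFILE)
>         if media_type in ("application/xhtml+xml", "text/html"):
>             host_prefixes, host_terms = fetch_profile(HTML_PROFILE)
>             prefixes.update(host_prefixes)      # assuming host language values win on conflict
>             terms.update(host_terms)
>         _merged_cache[media_type] = (prefixes, terms)
>         return prefixes, terms
>
> The cache makes the extra work cheap in practice, but the processing model the specification describes is still two loads plus a merge.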
> 
>> Ivan
>> 
>> 
>>> On 2/28/2011 12:02 AM, Ivan Herman wrote:
>>>> Shane,
>>>> 
>>>> I may not understand what you are saying, but if I do, this is not a minor issue. The question is whether the core default profile is a subset of the xhtml default profile or not. Put another way: should all the prefixes and terms defined in the core profile be repeated in the xhtml profile, too? With Manu's approach it is unnecessary to do so; with yours it is necessary. I happen to be on Manu's side on this, although, as they say, I would not lie down in the road over it...
>>>> 
>>>> Ivan
>>>> 
>>>> ----
>>>> Ivan Herman
>>>> Tel: +31 641044153
>>>> http://www.ivan-herman.net
>>>> 
>>>> 
>>>> 
>>>> On 28 Feb 2011, at 00:47, Shane McCarron <shane@aptest.com> wrote:
>>>> 
>>>>> Manu,
>>>>> 
>>>>> A minor comment:
>>>>> 
>>>>> On 2/20/2011 1:23 PM, Manu Sporny wrote:
>>>>>> ...
>>>>>> Profile Document Selection Algorithm
>>>>>> ------------------------------------
>>>>>> 
>>>>>> The RDFa WG discussed several algorithms for determining the correct
>>>>>> profile to use. In the end, the simplest and most reliable mechanism
>>>>>> seemed to be to do the following:
>>>>>> 
>>>>>> 1. Always load the RDFa Core 1.1 default profile first.
>>>>>> 2. If an "application/xhtml+xml" or "text/html" MIMEType is detected,
>>>>>>    load the HTML+RDFa 1.1 default profile.
>>>>>> 
>>>>>> Step #1 will be placed into the RDFa Core 1.1 specification. Step #2
>>>>>> will be placed into the (X)HTML Host Language specifications.
>>>>>> 
>>>>> I actually DISAGREE with this.  I think it is more sensible to have the processor determine the media type, then act accordingly.  In fact, we had already introduced text that supports that model [1]:
>>>>> 
>>>>>> A conforming RDFa Processor must examine the media type of a document it is processing to determine the document's Host Language. If the RDFa Processor is unable to determine the media type, or does not support the media type, the RDFa Processor must process the document as if it were media type application/xml. See XML+RDFa Document Conformance.
>>>>> I say this is a minor comment because I believe the effect on document processing is identical - it just means that an implementation is not required to read and process TWO default profiles in what is likely to be the most common case.  After all, I think we all expect that HTML4 / HTML5 documents are the most prevalent on the network.
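>>>>> Just to illustrate the alternative (again a Python sketch; the profile URLs and the dispatch table are placeholders, not normative), the model I have in mind is a single lookup keyed on the media type:
>>>>>
>>>>>     # Placeholder URLs - each host language would point at its own default profile,
>>>>>     # which already contains everything documents of that type need.
>>>>>     HOST_LANGUAGE_PROFILES = {
>>>>>         "application/xhtml+xml": "http://example.org/profiles/xhtml-rdfa-1.1",
>>>>>         "text/html":             "http://example.org/profiles/html-rdfa-1.1",
>>>>>     }
>>>>>     XML_FALLBACK_PROFILE = "http://example.org/profiles/xml-rdfa-1.1"
>>>>>
>>>>>     def select_default_profile(media_type):
>>>>>         """Determine the host language from the media type and pick its
>>>>>         default profile; unknown or unsupported types are processed as
>>>>>         application/xml, per the conformance text quoted above."""
>>>>>         return HOST_LANGUAGE_PROFILES.get(media_type, XML_FALLBACK_PROFILE)
>>>>>
>>>>> One profile per document, provided each host language default profile already carries everything its documents need.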
>>>>> 
>>>>> 
>>>>> 
>>>>> -- 
>>>>> Shane P. McCarron                          Phone: +1 763 786-8160 x120
>>>>> Managing Director                            Fax: +1 763 786-8180
>>>>> ApTest Minnesota                            Inet: shane@aptest.com
>>>>> 
>>>>> 
>>>>> 
>>> -- 
>>> Shane P. McCarron                          Phone: +1 763 786-8160 x120
>>> Managing Director                            Fax: +1 763 786-8180
>>> ApTest Minnesota                            Inet: shane@aptest.com
>>> 
>>> 
>>> 
>> 
>> ----
>> Ivan Herman, W3C Semantic Web Activity Lead
>> Home: http://www.w3.org/People/Ivan/
>> mobile: +31-641044153
>> PGP Key: http://www.ivan-herman.net/pgpkey.html
>> FOAF: http://www.ivan-herman.net/foaf.rdf
>> 
>> 
>> 
>> 
>> 
> 
> -- 
> Shane P. McCarron                          Phone: +1 763 786-8160 x120
> Managing Director                            Fax: +1 763 786-8180
> ApTest Minnesota                            Inet: shane@aptest.com
> 
> 
> 


----
Ivan Herman, W3C Semantic Web Activity Lead
Home: http://www.w3.org/People/Ivan/
mobile: +31-641044153
PGP Key: http://www.ivan-herman.net/pgpkey.html
FOAF: http://www.ivan-herman.net/foaf.rdf

Received on Monday, 28 February 2011 16:11:05 UTC