[whatwg] Link rot is not dangerous

On Fri, May 15, 2009 at 1:32 PM, Manu Sporny <msporny at digitalbazaar.com> wrote:
> Tab Atkins Jr. wrote:
>> Reversed domains aren't *meant* to link to anything.  They shouldn't
>> be parsed at all.  They're a uniquifier so that multiple vocabularies
>> can use the same terms without clashing or ambiguity.  The Microdata
>> proposal also allows normal urls, but they are similarly nothing more
>> than a uniquifier.
>>
>> CURIEs, at least theoretically, *rely* on the prefix lookup.  After
>> all, how else can you tell that a given relation is really the same
>> as, say, foaf:name?  If the domain isn't available, the data will be
>> parsed incorrectly.  That's why link rot is an issue.
>
> Where in the CURIE spec does it state or imply that if a domain isn't
> available, that the resulting parsed data will be invalid?

Assume a page that uses both foaf and another vocab whose properties are
declared as subproperties of foaf properties.  Given working lookups for
both vocabularies, the RDF parser can determine that two entries using
different properties really mean 'the same thing', and hopefully act on
that knowledge.
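
For concreteness, here's a rough sketch in Python with rdflib of the
kind of inference I mean.  The http://example.org/vocab# vocabulary and
its ex:nick property are invented for the example, and the single
hand-applied rule just stands in for whatever RDFS reasoning a real
consumer would do:

    from rdflib import Graph, Namespace, URIRef, Literal
    from rdflib.namespace import RDFS, FOAF

    EX = Namespace("http://example.org/vocab#")  # hypothetical vocab

    # Triples extracted from the page (after CURIE expansion).
    data = Graph()
    data.add((URIRef("http://example.org/#me"), EX.nick, Literal("TJ")))

    # What the parser learns by dereferencing the vocabulary URL
    # (simulated here with inline Turtle instead of an HTTP fetch).
    vocab = Graph()
    vocab.parse(data="""
        @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
        <http://example.org/vocab#nick>
            rdfs:subPropertyOf <http://xmlns.com/foaf/0.1/name> .
    """, format="turtle")

    # Apply the RDFS subPropertyOf rule by hand: every ex:nick triple
    # also yields a foaf:name triple.
    for sub_prop, super_prop in vocab.subject_objects(RDFS.subPropertyOf):
        for s, o in data.subject_objects(sub_prop):
            data.add((s, super_prop, o))

    print((URIRef("http://example.org/#me"), FOAF.name, Literal("TJ")) in data)
    # True -- but only because the vocabulary document was reachable.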

If the second vocab 404s, that information is lost.  The parser will
then treat any use of the second vocab as completely separate from
foaf, losing valuable semantic information.
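
(In terms of the sketch above: if fetching the vocabulary document
fails, the subPropertyOf loop has nothing to work with, the foaf:name
triple is never inferred, and a foaf-only consumer sees nothing.)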

(Please correct any misunderstandings I may be operating under; I'm
not sure how capable current parsers are, and thus how much they'd
actually make use of a working subproperty relation.)

~TJ

Received on Friday, 15 May 2009 12:50:06 UTC