
Re: Fwd: Re: Document fragment vocabulary

From: Erik Wilde <dret@berkeley.edu>
Date: Tue, 30 Aug 2011 13:46:10 -0700
Message-ID: <4E5D4C12.7080206@berkeley.edu>
To: Sebastian Hellmann <hellmann@informatik.uni-leipzig.de>, uri@w3.org
CC: Michael Hausenblas <michael.hausenblas@deri.org>

On 2011-08-29 06:43, Sebastian Hellmann wrote:
> It seems difficult to find a good definition of what the URIs used in the
> RDF actually denote. It might also not be possible to make this coherent
> with Fragment ID Semantics as defined by W3C. What do you think?

i don't think there are "fragment identifier semantics". there is the 
URI syntax, and then things get ugly because the semantics depend on the 
media type of the resource representation that you might GET. you could 
in theory create a framework that would "map" fragment identifiers based 
on the result of a retrieval, but that sounds awfully brittle, in 
particular when resources can change.
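to make that dependence concrete, here is a minimal sketch of what such a "mapping" framework would have to do: dispatch on the media type of whatever representation came back. the rules below are illustrative (RFC 5147 for text/plain, RFC 7111 for text/csv, id lookup for text/html), and the function name is made up for this sketch:

```python
# Hypothetical sketch: fragment identifier semantics depend on the media
# type of the retrieved representation, so a generic framework must
# dispatch on it. Rules are illustrative, not an implementation of any spec.

def interpret_fragment(media_type: str, fragment: str) -> str:
    if media_type == "text/plain":
        # RFC 5147 defines fragments like #line=10,20 for plain text
        return f"plain-text range: {fragment}"
    if media_type == "text/csv":
        # RFC 7111 defines fragments like #row=5-7 for CSV
        return f"CSV selection: {fragment}"
    if media_type == "text/html":
        # HTML only defines id-based lookup: #some-id
        return f"element with id '{fragment}'"
    return f"no defined semantics for {media_type}"

print(interpret_fragment("text/html", "section-2"))
# → element with id 'section-2'
```

note that the same fragment string means entirely different things depending on which branch fires, which is exactly why a retrieval-based mapping is brittle: a content-negotiated resource can hand back a different media type tomorrow.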

> Then the URI scheme is poorly chosen unless you edit the page either
> backwards or fix all mistakes one at a time.
> But what would be the best URI scheme for this Use Case ?

you mean "fragment identifier" here. HTML only has id-based 
identification, so you cannot identify words. it sounds like the 
application you want to build has to do its own thing. at some point in 
time i suggested to the HTML5 group to improve on HTML's fragment 
identification capabilities by at least allowing child paths (something 
like #1/2/1/3/12, counting child element nodes down the tree), but they 
had/have too many other things to do. i still think that HTML5 would be 
a great opportunity to make HTML a better hypertext citizen, but HTML5 
mainly focuses on web apps and scripting and not so much on hypertext.
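the child-path idea can be sketched in a few lines. this assumes 1-based counting of child *element* nodes at each step, which is my reading of the suggestion above, not any standardized syntax:

```python
# Sketch of the proposed child-path fragments (e.g. #1/2/1), counting
# child element nodes 1-based at each level. The syntax and counting
# rule are assumptions based on the suggestion in the text, not a spec.
import xml.etree.ElementTree as ET

def resolve_child_path(root: ET.Element, path: str) -> ET.Element:
    node = root
    for step in path.split("/"):
        children = list(node)            # element children only
        node = children[int(step) - 1]   # 1-based index into them
    return node

doc = ET.fromstring(
    "<html><head><title>t</title></head>"
    "<body><p>first</p><p>second</p></body></html>"
)
target = resolve_child_path(doc, "2/2")  # 2nd child (body), its 2nd child
print(target.text)  # → second
```

the appeal is that such paths need no ids in the document; the obvious weakness, as noted above for retrieval-based schemes generally, is that any edit to the tree silently shifts what the path points at.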

> I can understand your use case of annotating log files, but I guess it
> would be nice to be able to annotate Wikipedia pages.

i see your point, but the big difference is that log files are plain 
text, and wikipedia pages are not.
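for plain text such as log files there is in fact a defined fragment scheme, RFC 5147, where something like #line=10,20 selects a line range (positions count line boundaries, starting at zero). a simplified sketch, ignoring the scheme's character ranges and integrity-check parameters:

```python
# Hedged sketch of RFC 5147 line-range fragments for text/plain
# (e.g. #line=1,3). Positions count line boundaries from zero, so
# line=1,3 selects the second and third lines. Simplified: no char
# ranges, no length/md5 integrity parameters.

def select_lines(text: str, fragment: str) -> str:
    assert fragment.startswith("line=")
    start, end = (int(n) for n in fragment[len("line="):].split(","))
    return "".join(text.splitlines(keepends=True)[start:end])

log = "a\nb\nc\nd\n"
print(select_lines(log, "line=1,3"), end="")  # → b and c lines
```

nothing comparable exists for HTML beyond id lookup, which is the difference that matters for annotating wikipedia pages.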

> This is what I would benchmark as it might produce a best practice in
> Web Annotation.

it would be great to have that, but then you probably want to focus on 
the web's main content type, HTML.

> As I said, I would also try to benchmark the CSV URIs if you have a CSV
> corpus that I could use.

i don't have that, and even if you had such a corpus, you would also 
need a change model: a model of how to deal with breaking changes and 
of how useful/appropriate it is to "fix" them.
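one minimal building block for such a change model, in the spirit of RFC 5147's integrity-check parameter (fragments can carry e.g. ;md5=... so a consumer can tell when the text has changed under the fragment): store a hash of the retrieved representation alongside the reference and flag it as possibly stale when the hash no longer matches. the details here are illustrative, not a proposal:

```python
# Sketch of fragment staleness detection via a stored fingerprint,
# modeled loosely on RFC 5147's integrity-check parameters.
# Illustrative only; a real change model would need far more.
import hashlib

def fingerprint(text: str) -> str:
    return hashlib.md5(text.encode("utf-8")).hexdigest()

stored = fingerprint("col1,col2\n1,2\n")   # hash taken when annotating
changed = "col1,col2\n1,3\n"               # resource edited later
print(fingerprint(changed) == stored)      # → False: fragment may be stale
```

detecting the break is the easy part; deciding whether the old fragment can be repaired, and whether it should be, is the hard part a benchmark would need to model.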



erik wilde | mailto:dret@berkeley.edu  -  tel:+1-510-2061079 |
            | UC Berkeley  -  School of Information (ISchool) |
            | http://dret.net/netdret http://twitter.com/dret |
Received on Tuesday, 30 August 2011 21:14:17 UTC
