Re: feasibility; purl.org/commons

From: Eric Jain <Eric.Jain@isb-sib.ch>
Date: Mon, 27 Aug 2007 18:05:45 +0200
Message-ID: <46D2F659.9090008@isb-sib.ch>
To: samwald@gmx.at
CC: jar@creativecommons.org, public-semweb-lifesci@w3.org

samwald@gmx.at wrote:
> I would also prefer that the file extensions are preserved, but is it
> really such a big deal? Requiring the redirect service to fetch the ID
> from the middle of the URI definitely makes things more complicated, and
> it is mostly a matter of taste.

There are good reasons for using file extensions when serving non-HTML 
resources: they prevent people from ending up with extension-less files 
after doing a save-as, which, from my observations, is a big source of 
confusion.

But in any case, the issue for me is that several of the databases we 
reference have URLs where the identifier does not conveniently appear at 
the end (for various reasons). We can't very well just drop those 
databases :-)


>> 2. The one-PURL-per-representation approach results in more URIs
>> floating around than I'd be willing to deal with
> 
> I still cannot see where the practical problems should be. "One URI for
> every file" seems much easier to deal with than content negotiation. But
> I guess this topic has been discussed way too long here already.

I don't think I need to explain why having HCLS provide URIs (e.g. HTTP 
URIs that result in 303 redirects, a la thing-described-by.org) for 
concepts that don't have any authoritative URIs (yet...) would be 
extremely useful.
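For readers who haven't seen the pattern, here is a minimal sketch of the 303 mechanism in the thing-described-by.org style (not Science Commons' actual setup): a GET on a concept URI is answered with "303 See Other" pointing at a document that describes the concept. The host name and URL pattern are hypothetical:

```python
# Minimal sketch of a 303-redirecting concept resolver. A 303 tells the
# client: the thing named by this URI has no representation here, but a
# description of it can be retrieved from the Location header.
from http.server import BaseHTTPRequestHandler

def redirect_target(concept_path: str) -> str:
    """Map a concept URI path onto the URL of a document about it
    (hypothetical pattern, for illustration only)."""
    return "http://example.org/describe" + concept_path

class ConceptHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(303)  # "See Other"
        self.send_header("Location", redirect_target(self.path))
        self.end_headers()
```

Hooking `ConceptHandler` into `http.server.HTTPServer` would give a working resolver; the essential part is just the 303 plus Location header.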

I don't quite see the use of PURL URIs for specific representations (I'd 
just use their URLs directly, if I had anything to say about one of them), 
but I don't mind this feature -- as long as it doesn't get in the way...


>> 3. If you enter a URL in the browser, people (including non-technical
>>  people) have to get something useful (see e.g. DOI system), not
>> something that looks like an error page
> 
> This can be easily improved. Here is an RDFa-based draft for such a
> redirect page that I made out of personal interest (not in any way
> approved by Science Commons):
> 
> http://whatizit.neurocommons.org/template_303.htm

That's nice, but when I show such pages to our biologists, they still think 
it's some kind of error page, with all the gobbledygook about "commitment", 
"representation" and "URI"... Even if this page were made super easy to 
understand, users of our site would be terribly annoyed at being sent to 
such an intermediate page instead of the page they expect to see (they 
often need to inspect a large number of links, so I can sympathize with them).


> The page contains the following RDF encoded in RDFa:
> 
> [...]
> 
> The embedded RDFa could be extended to tell the client something like
> "there is an XML version of this, and you can find it at URI...; there is
> an HTML version of this, and you can find it at URI...". That would be
> much more transparent and Semantic Web-oriented than content
> negotiation, right?

I agree that it might be useful to be able to GET such documents that list 
all the different representations, but please don't make this the default!

Note that you can't always generate such pages correctly; in UniProtKB, for 
example, the available formats differ depending on the state of the entry:

<http://beta.uniprot.org/uniprot/P05067.*>
<http://beta.uniprot.org/uniprot/P00001.*>
<http://beta.uniprot.org/uniprot/P00750.*?version=10>
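To illustrate why a static list of representations can't be right: the set of formats has to be computed per entry. The states and format lists below are illustrative assumptions on my part, not UniProt's actual rules:

```python
# Sketch: available representations depend on the state of the entry,
# so a one-size-fits-all "formats" page would over- or under-report.
# The states and format lists here are hypothetical.
FORMATS_BY_STATE = {
    "active":    ["txt", "xml", "rdf", "fasta", "gff"],
    "obsolete":  ["txt"],            # e.g. only a tombstone record remains
    "versioned": ["txt", "fasta"],   # old versions expose fewer formats
}

def available_formats(state: str) -> list:
    """Return the format extensions offered for an entry in `state`."""
    return FORMATS_BY_STATE.get(state, [])
```

Any resolver-side listing would have to query the data provider for this per-entry information, which is exactly why I'd rather not tie it to the resolver.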

This could still be a useful project, but I don't think it needs to be tied 
to the resolver?
Received on Monday, 27 August 2007 16:08:24 GMT
