
Re: Server and client burden for URIQA vs. Link:

From: Richard Cyganiak <richard@cyganiak.de>
Date: Fri, 27 Feb 2009 18:41:25 +0000
Cc: <jar@creativecommons.org>, <www-tag@w3.org>
Message-Id: <0194F5E0-B787-4629-8367-D64EE890DB82@cyganiak.de>
To: <Patrick.Stickler@nokia.com>

On 27 Feb 2009, at 16:20, <Patrick.Stickler@nokia.com> wrote:
> Sorry Richard. For some reason, this particular message got trapped  
> by my
> spam filter. No clue why.
>
> Our server sw.nokia.com was undergoing some maintenance upgrades and  
> not all
> of the nodes in the farm were fully configured.
>
> It should be working fine now.

It does. Nice! Here are two different ways of querying the server with  
off-the-shelf Unix command line tools:

richard@cygri:~$ telnet sw.nokia.com 80
MGET /MARS-3 HTTP/1.0
Host: sw.nokia.com

richard@cygri:~$ curl -X MGET http://sw.nokia.com/MARS-3


I get the following response. I like how the server gives me a normal  
GETable URI for the description via Content-Location:


HTTP/1.1 200 OK
Cache-Control: no-cache
Connection: Close
Content-Location: http://sw.nokia.com/uriqa?uri=http%3a%2f%2fsw%2enokia%2ecom%2fMARS%2d3
Content-Type: application/rdf+xml; charset=UTF-8
Date: Fri, 27 Feb 2009 18:17:29 GMT
Set-Cookie: S_ID=B16BCD05F74E44500D95FE2C; path=/
URIQA-authority: http://sw.nokia.com/uriqa
Server: rdfgateway/3.000 SI
Content-Length: 3976

<?xml version="1.0" encoding="utf-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
          xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
          xmlns:owl="http://www.w3.org/2002/07/owl#"
          xmlns:dc="http://purl.org/dc/elements/1.1/"
          xmlns:dcterms="http://purl.org/dc/terms/"
          xmlns:rss="http://purl.org/rss/1.0/"
          xmlns:syn="http://purl.org/rss/1.0/modules/syndication/"
          xmlns:voc="http://sw.nokia.com/VOC-1/"
          xmlns:web="http://sw.nokia.com/WebArch-1/"
          xmlns:sw="http://sw.nokia.com/SWArch-1/"
          xmlns:uriqa="http://sw.nokia.com/URIQA-1/"
          xmlns:mars="http://sw.nokia.com/MARS-3/"
          xmlns:nc="http://sw.nokia.com/NC-1/"
          xmlns:dp="http://sw.nokia.com/DP-1/"
          xmlns:fn="http://sw.nokia.com/FN-1/">
<rdf:Description rdf:about="http://sw.nokia.com/MARS-3">
       <voc:term rdf:resource="http://sw.nokia.com/MARS-3/Actor"/>
[...snip...]
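Since the Content-Location header points at a plain GETable URI, a client could also construct that description URI itself. A small Python sketch (the `uriqa?uri=` endpoint pattern is inferred from the response headers above; the function name is illustrative, not an official client API):

```python
from urllib.parse import quote

# Build the GETable description URI that the server advertises in
# Content-Location, given a URIQA authority and a resource URI.
# (Endpoint pattern taken from the response above; illustrative only.)
def description_uri(authority, resource):
    # Percent-encode everything, including ':' and '/'.
    return authority + "?uri=" + quote(resource, safe="")

print(description_uri("http://sw.nokia.com/uriqa",
                      "http://sw.nokia.com/MARS-3"))
# http://sw.nokia.com/uriqa?uri=http%3A%2F%2Fsw.nokia.com%2FMARS-3
```

Note that the server's own encoding additionally escapes unreserved characters like '.' and '-' and uses lowercase hex digits; both forms decode to the same URI.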


In practical terms, from a client's point of view, I don't see much  
difference between

    curl -X MGET

and

    curl -H "Accept: application/rdf+xml"

and I would assume that both are about equally easy or hard to do from
the client side. On the server side it might be a somewhat different
story, I believe: most web development frameworks are quite opinionated
about request methods.
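To illustrate that server-side point: whether MGET is easy to support depends mostly on whether the stack lets arbitrary methods through to application code at all. A minimal WSGI sketch, assuming nothing beyond the standard interface (WSGI itself passes any method through in REQUEST_METHOD; the friction, where it exists, comes from higher-level frameworks that whitelist GET/POST/etc., and the RDF payload below is just a placeholder):

```python
# Minimal WSGI app dispatching on the request method, including the
# non-standard MGET. The response bodies are placeholders; a real
# URIQA-style service would return an actual RDF description.
def app(environ, start_response):
    method = environ["REQUEST_METHOD"]
    if method == "GET":
        start_response("200 OK", [("Content-Type", "text/html")])
        return [b"<html>the resource itself</html>"]
    if method == "MGET":
        start_response("200 OK",
                       [("Content-Type", "application/rdf+xml")])
        return [b"<rdf:RDF>description of the resource</rdf:RDF>"]
    # Anything else is refused, advertising what we do support.
    start_response("405 Method Not Allowed", [("Allow", "GET, MGET")])
    return [b""]
```

Served with, say, wsgiref.simple_server, this would answer a `curl -X MGET http://localhost:8000/` just as readily as a plain GET.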

So much for the technical side. On the architectural side, I have to  
say that I see the appeal of URIQA in scenarios such as metadata for  
media files (images, videos, etc.) where it's really hard or awkward to
add links to metadata or embed the metadata into the file. But the web  
is held together by HTML documents (and, in a hypothetical future I  
sort of hope for, RDF documents), and it is easy to embed metadata  
directly into them. Even access paths to media files usually lead  
through some HTML page that can provide the metadata about the target  
image or video, in prose or embedded microformats or RDFa.

So, proposed addition to the (nicely thorough) FAQ section at [1]:

Why not embed the metadata in the document that it describes? Or in
the document where the agent found the URIQA-enabled URI?

Best,
Richard



>
>
> Cheers,
>
> Patrick
>
>
>
>
> On 2009-02-26 12:41, "ext Richard Cyganiak" <richard@cyganiak.de>  
> wrote:
>
>> Patrick,
>>
>> I want to play. Is there a demo server somewhere on the Web that
>> responds to MGET requests?
>>
>> I googled and found some URIs at sw.nokia.com that smelled like they
>> should be URIQA-enabled, but some fooling around with curl -X and
>> telnet didn't produce any useful responses. Hard to tell if it's just
>> me being dense.
>>
>> (Let me tell you that URIQA might be more successful if its  
>> proponents
>> were producing more demos and running code examples and fewer words!)
>>
>> Cheers,
>> Richard
>>
>>
>> On 26 Feb 2009, at 04:54, <Patrick.Stickler@nokia.com>
>> <Patrick.Stickler@nokia.com
>>> wrote:
>>
>>>
>>>
>>>
>>> On 2009-02-25 20:11, "ext Jonathan Rees" <jar@creativecommons.org>
>>> wrote:
>>>
>>>
>>> Hi Jonathan,
>>>
>>>
>>>> (spun off from  Subject:        Re: Uniform access to metadata: XRD
>>>> use case.)
>>>>
>>>> On Feb 25, 2009, at 11:01 AM, <Patrick.Stickler@nokia.com>
>>>> <Patrick.Stickler@nokia.com
>>>>> wrote:
>>>>
>>>>> The arguments that a linking approach imposes less  
>>>>> implementational
>>>>> burden
>>>>> or disruption to web sites or content publishers than approaches
>>>>> such as
>>>>> URIQA do not bear scrutiny.
>>>>
>>>> The server burden is an empirical question and I'd love it if  
>>>> someone
>>>> did some research, since MGET is superior in many ways. I take your
>>>> word for it that the Apache configuration required for MGET is as
>>>> easy
>>>> as for Link:, but I have no idea how the comparison would go on  
>>>> other
>>>> platforms.
>>>
>>> From what I've seen to date (and admittedly, I haven't looked at any
>>> particular web server/platform for a while, being busy with other
>>> things)
>>> adding URIQA support to a web server specifically depends on whether
>>> the
>>> implementation reflects a philosophy which is open or closed to
>>> specialized
>>> methods. Apache actually isn't particularly open in that regard. One
>>> must
>>> hook into an unsupported method error and insert URIQA method
>>> handling as
>>> "error resolution". It works. But it's not the most elegant.
>>>
>>> Modules such as WebDAV need special accommodation in the core, and
>>> thus, the
>>> platform as a whole is not open to specialized methods, either newly
>>> standardized or merely experimental. Whether this closed design is  
>>> by
>>> oversight or reflects a distinct philosophical position regarding
>>> non-standard methods is not clear. But it is a deficiency  
>>> nonetheless.
>>>
>>> I've found that in most cases, simply using a proxy which can
>>> redirect URIQA
>>> requests to a URIQA service is the easiest and cleanest approach --
>>> and has
>>> further merit in that one's semantic web service implementation can
>>> remain
>>> modularly separate from one's web service implementation, allowing
>>> one to
>>> change either with little to no impact on the other.
>>>
>>> Similar criticisms about closed design can be levied against the
>>> standard Java APIs. Use of a non-blessed method throws an exception
>>> *before* the method is set on the HTTP request rather than after,
>>> supposedly to "protect" software developers from accidentally
>>> specifying a nonstandard method. But that exception could just as
>>> well be thrown after setting the method, allowing those who know
>>> what they are doing to catch and disregard it while still
>>> accomplishing the stated goal. Such a change, requiring only that
>>> the exception be moved four lines down in the code, has been
>>> requested, and dismissed. This reflects a particularly rigid
>>> philosophical (or political) stance regarding non-standard methods
>>> and does not reflect openness.
>>>
>>> It is for this reason that I speak of philosophical and political
>>> resistance to approaches such as URIQA (or WebDAV, for that matter):
>>> deliberately hobbling tools and platforms in a manner which limits
>>> or entirely excludes experimentation and innovation, such that
>>> alternative solutions cannot be easily tested and evaluated on
>>> their demonstrable merits.
>>>
>>> (fortunately, there are workarounds)
>>>
>>>>
>>>> It's not just a server issue of course; we have firewalls, proxies,
>>>> caches, and filtering software to deal with, and applications that
>>>> like to use simple client utilities such as wget (although I admit
>>>> doing HEADs with some of these tools can be a challenge as well).
>>>> What
>>>> is your experience with URIQA in these situations?
>>>
>>> We've never encountered any problem with any of the above. Such
>>> phantoms
>>> keep cropping up, and I've repeatedly invited anyone to detail why
>>> they feel
>>> that such problems might exist, to offer clear use cases or  
>>> (ideally)
>>> concrete examples, but have never noted any response of any  
>>> substance.
>>>
>>> My current view (which I'm happy to change in the face of solid
>>> evidence) is
>>> that such issues equate to either hypotheticals by folks who are  
>>> doing
>>> "armchair web architecture", or even worse, "fear mongering" by
>>> those with
>>> vested interests in the alternatives to URIQA and related
>>> approaches. I
>>> accept that that is a strong statement. Those who would take offense
>>> and
>>> disagree are welcome to respond with solid evidence that such
>>> problems do,
>>> or clearly can, arise and (importantly) that such problems are
>>> significant
>>> to semantic web applications.
>>>
>>> (and please note that the above is my personal opinion, and not that
>>> of my
>>> employer or necessarily of any of my colleagues)
>>>
>>> I'm not "religious" about URIQA, or about any particular solution in
>>> general. Honestly, I'm not. I'm very pragmatic and am continually
>>> willing to
>>> change my views and methodologies if and when there is clearly a
>>> better way,
>>> which takes into consideration the real-world issues relating to (in
>>> this
>>> case) metadata authorship and management, scalability, flexibility,
>>> and long
>>> term viability and maintenance of real-world solutions.
>>>
>>>>
>>>> I hope you and Eran get a chance to duke it out.
>>>
>>> I'm happy to review and respond to clear use cases, real-world
>>> examples,
>>> fair questions, and worthy arguments on this topic. But I don't have
>>> the
>>> time to rehash old arguments which have already been shown to be
>>> defective.
>>> I would encourage Eran, or anyone, to search for old threads and
>>> study those
>>> before engaging me in any new discussion, as I don't have either the
>>> time or
>>> inclination to merely echo what I've said more than frequently
>>> enough in the
>>> past, and unfortunately, in the past day or so.
>>>
>>> Cheers,
>>>
>>> Patrick
>>>
>>>
>>>>
>>>> Jonathan
>>>>
>>>
>>>
>>
>
>
Received on Friday, 27 February 2009 18:42:12 GMT
