
Re: Uniform access to metadata: XRD use case.

From: <Patrick.Stickler@nokia.com>
Date: Thu, 5 Mar 2009 10:47:25 +0100
To: <eran@hueniverse.com>, <jar@creativecommons.org>
CC: <julian.reschke@gmx.de>, <connolly@w3.org>, <www-tag@w3.org>
Message-ID: <C5D56C4D.E3BB%patrick.stickler@nokia.com>

On 2009-03-05 09:00, "ext Eran Hammer-Lahav" <eran@hueniverse.com> wrote:

> The simple fact that most developers I work with have never heard of HTTP
> OPTIONS is enough of an indication for me that asking them to use MGET is not
> being practical.

Are those developers planning on implementing semantic web solutions? If so,
then they should be interested in using the best methodologies for doing so,
and if that means using a new method, then whether they have heard of it
before or not is irrelevant.
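For concreteness, a URIQA MGET is issued exactly like any other HTTP request, just with a different verb; there is nothing exotic about it on the client side. A minimal sketch (the hostname and path are hypothetical, and the target server must of course actually implement URIQA):

```python
# Hypothetical client-side sketch of a URIQA MGET request. Python's
# http.client accepts arbitrary method names, so no special client
# support is needed; only the server must understand MGET.
import http.client

def build_mget(host: str, path: str) -> str:
    """Return the raw request a URIQA MGET would send (illustration only)."""
    return (f"MGET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Accept: application/rdf+xml\r\n\r\n")

# To actually send it against a URIQA-aware server, one would write e.g.:
#   conn = http.client.HTTPConnection("example.org")
#   conn.request("MGET", "/some/resource")
#   description = conn.getresponse().read()
```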

And the degree to which web developers know of and understand OPTIONS is
probably proportionate to a combination of (a) how much they need such
functionality, (b) how well it is discussed in learning materials, which is
in any case impacted by how broadly it's needed, and (c) how well it is
supported in the tools they use.

There are many functions in many standards and tools which have limited
utility to a majority of developers and are relatively unknown by most, but
any developer worth his or her salt who needs to achieve a particular goal
will find out what the most efficient, standardized, reliable, and scalable
solution is, and if OPTIONS is the best way to achieve a particular goal,
and is reasonably documented, then they will find and use it.

If OPTIONS is not used, it may be because it's not needed, or not well
supported, but that doesn't mean that *any* other new method will be equally
neglected.

That said, as I've noted before, there do exist certain artificial and
unreasonable barriers in place in various tools and contexts to discourage
the use of novel HTTP methods, seemingly motivated by philosophical agendas
rather than technical merit, and I wouldn't be surprised if "new" methods
such as OPTIONS, albeit blessed by the latest HTTP spec, face adoption and
usage issues due to those artificial barriers.

> Yes, this argument doesn't fly with many people on this list,
> and I'm fine with agreeing to disagree.

I think you are somehow presuming that solutions such as URIQA would and
should be relevant and interesting to all web developers. They will not be.
They will only be interesting and relevant to semantic web developers (who
also will likely be web developers). As such, the solution(s) adopted as
standards need not be understood and utilized by every web developer on
every web server, no more so than must WebDAV methods be understood and
utilized by every web developer on every web server.

Yes, ideally, we'd like to see every web server be a semantic web server,
and every web developer be a semantic web developer, but that will only
happen if and because there is a clear benefit and reasonable effort for
folks to adopt and employ semantic web technologies. To that end, solutions
which are efficient, consistent, easy to modularly add to existing
environments with minimal impact, and easy to understand and use are essential.

URIQA is such a solution. The other proposals... well.. not so much.

> The idea behind my Link-based
> discovery proposal is that it offers three methods to accomplish the
> association of a descriptor to a resource.

As I've already pointed out, a multiplicity of possible ways to access
authoritative metadata is an unreasonable and unnecessary burden on
developers. The very point of having standards is to minimize the number of
ways a given application has to do things, to maximize interoperability and
minimize effort and cost.

Having many ways to accomplish a single task, even if the ways are similar,
is not better than having a single way to accomplish that task.

Having "three different methods to accomplish the association of a
descriptor to a resource" as a solution for "unified access to metadata"
should set off warning bells in any engineer's head: that something is not
optimal, that the solution is not fully baked, and needs refinement, or
rejection in favor of an alternative, simpler and more consistent solution.
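For comparison, the Link-header approach under discussion associates a descriptor with a resource via a response header along the lines of `Link: <...>; rel="describedby"`. A rough client-side extraction sketch (the `describedby` relation name follows Eran's draft proposal and should be treated as an assumption here, not settled syntax):

```python
# Hypothetical sketch: pulling the rel="describedby" target out of a
# Link header value, per the Link-based discovery proposal being discussed.
import re
from typing import Optional

def describedby(link_header: str) -> Optional[str]:
    """Return the URI of the first link whose rel includes 'describedby'."""
    for m in re.finditer(r'<([^>]+)>\s*;\s*rel="?([^";]+)"?', link_header):
        if "describedby" in m.group(2).split():
            return m.group(1)
    return None
```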

> ---
> The decision whether URIQA and the M* methods are a good approach isn't really
> just a matter of figuring out if they can be hacked into most common httpd
> servers.

Well, if you mean in the same way that WebDAV methods are "hacked" into web
servers, sure.

> If the URIQA authors are serious about getting adoption as a standard, they
> should submit it for a standards process as well as work with an organization
> like Apache to get it included in standard distributions.

It would be great to have the time to do that, but one does not always have
the opportunity to promote full time what is believed to be the best
solution. Oftentimes, the reality is that inferior solutions, promoted by
folks who actually do have the time to lobby and market, become the
standard. C'est la vie. I do what I can with the limited time and energy I
have after meeting my primary responsibilities in life. Not all of us have
the luxury of being full time evangelists.

I have already consumed far more time than I really have at the moment in
these recent discussions, and with every post, tell myself that I will stop
(but politeness and a sincere belief based on years of experience that URIQA
is significantly better than the other options proposed thus far, continue
to prod me to respond to further questions or misunderstandings).

Nokia has had no commercial interest in seeing URIQA adopted by others, and
in the present business climate, commercial interests tend to squeeze out
all other interests, whether we like it or not.

I was originally drawn into these recent discussions merely to correct some
misstatements about URIQA, and while I do not have time to actively promote
URIQA, I also do not wish to see it misrepresented in public documents in a
manner which may discourage its fair and objective consideration by those
truly interested in the most optimal solution. I appreciate your willingness
to allow me to offer corrections to future versions of your document.

So if after this post I now become silent in these discussions, it is not
from conceding any point, but simply that my primary responsibilities have
forced me to stop investing further time in these discussions, despite my
own preference to continue, and with apologies for leaving any questions
unanswered.

Those who wish to objectively and fairly consider URIQA will be able to find
both sufficient information and open source implementations at
http://sw.nokia.com/uriqa/URIQA.html and likely will find other
implementations of URIQA elsewhere as well, if they take the trouble to look.

Time permitting, I will do my best to respond to private emails about
specific technical questions about URIQA or anything related.

> It seems to me that the IETF would be the appropriate venue given the fact
> this is an extension of the HTTP protocol. There are many real deployment
> issues with introducing a new HTTP method and real cost to the web. It is not
> my place to speculate on such ramifications because HTTP is not my area of
> expertise.

While the IETF may be an appropriate venue to promote URIQA (and I'm not
really sure that it is), it should be understood that URIQA is not
considered to be an extension to the core HTTP standard itself, no more so
than WebDAV is, even if it is a functional extension to the HTTP protocol.

And as I've tried to point out several times, with examples, the "real cost
to the web" for truly adding support for semantic web functionality to a web
environment is going to be comparable or greater than adding support for
URIQA. Being developed in a commercial context rather than an academic or
research context, complexity and cost of implementation were of paramount
importance when designing URIQA (even if there never has been nor is any
intention to productize the technology, it still costs us money to deploy
and maintain it).

> I would love to see Nokia submit URIQA as an I-D for review by the HTTPbis
> working group. I am sure they can offer some valuable insight. If this was
> done in the past, I would love if someone could point me to it.

Again, I don't see that as necessarily the correct path, since URIQA is not
intended to be an extension to the core HTTP standard, but rather a
distinctly modular complement to it. As such, it imposes no requirements on
the core HTTP protocol and the standard(s) defining that core protocol.

Support for URIQA can be implemented in a given server environment with no
change whatsoever to existing web servers, or may be added natively to a
particular server implementation, or something in the middle. Thus, many web
developers and web server owners will simply not care about, nor should they
be forced to care about URIQA, no more so than should they be forced to care
about WebDAV (even though many will).
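As an illustration of that modularity, here is one hypothetical way MGET handling could be layered in front of an existing application without touching it, sketched as WSGI middleware (the path-to-description mapping and all names are invented for the example; a real deployment would generate descriptions dynamically):

```python
# Hypothetical sketch: URIQA-style MGET support added in front of an
# existing WSGI app, leaving the app itself completely unchanged.

def uriqa_middleware(app, descriptions):
    """descriptions: dict mapping request paths to RDF/XML description strings."""
    def wrapped(environ, start_response):
        if environ.get("REQUEST_METHOD") == "MGET":
            body = descriptions.get(environ.get("PATH_INFO", ""))
            if body is None:
                start_response("404 Not Found", [("Content-Type", "text/plain")])
                return [b"no description available"]
            data = body.encode("utf-8")
            start_response("200 OK", [("Content-Type", "application/rdf+xml"),
                                      ("Content-Length", str(len(data)))])
            return [data]
        # Every other method passes straight through to the untouched app.
        return app(environ, start_response)
    return wrapped
```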

> Until URIQA is proposed as a standard

You seem to mean "proposed within a specific forum/context/organization".
URIQA certainly has been proposed as a standard, and for quite some time.

> (with some clarity regarding its
> licensing terms which I could not find other than a copyright statement at the
> bottom of the page),

URIQA has from its inception been offered freely to the semantic web
community as an optimal methodology for serving authoritative descriptions
about resources and its usage is royalty free, covered by the relevant
declarations made by Nokia to the W3C in conjunction with the proposal and
consideration of URIQA in the context of Nokia's participation in the RDF
Core and Data Access Working Groups.

If such declarations are not sufficient for you or others, I'm sure it is
straightforward to provide comparable and sufficiently equivalent
declarations in a more context-neutral form, should there be a need and
benefit for Nokia to do so.

> it would not be possible for me to treat it as anything
> but an interesting idea.

Well, if you wish to limit yourself to only properly considering options
within such narrow constraints, then that is certainly your choice. Your
final solutions, however, may end up falling further short of the mark than
they otherwise would.

> And while at it, why not propose my own methods?
> Maybe LGET to get just the Link headers...

You're free to propose anything you like.

> The empirical data I would like to see is where is URIQA deployed, what's the
> scale of the deployment, what applications are using it, etc.

We use it extensively within Forum Nokia and with great success, proven over
many years and many incarnations of our core infrastructure. I am aware of
it being used elsewhere, but have not actively tracked such usage. Our focus
has been on solving our own semantic web needs, while being happy to freely
share with others what we have found to work well, should it also prove
useful to anyone else.



>> -----Original Message-----
>> From: Jonathan Rees [mailto:jar@creativecommons.org]
>> Sent: Wednesday, March 04, 2009 9:55 PM
>> To: Eran Hammer-Lahav
>> Cc: Patrick.Stickler@nokia.com; julian.reschke@gmx.de; connolly@w3.org;
>> www-tag@w3.org
>> Subject: Re: Uniform access to metadata: XRD use case.
>> On Feb 24, 2009, at 9:00 AM, Eran Hammer-Lahav wrote:
>>> I'll separate the two for my next draft and correct this.
>>> Adding URIQA support in many hosted environments or large corporate
>>> deployment isn't simple. It sets a pretty steep threshold on
>>> adoption [1]. I actually like the MGET approach a lot, but I can't
>>> sell it to 90% of my use cases. Consider me an extreme pragmatists...
>>> EHL
>>> [1] http://www.hueniverse.com/hueniverse/2009/02/the-equal-access-
>> principal.html
>> I don't know about hosted environments and corporate deployments
>> generally, but one thing I like about Link: is that in Apache, at
>> least, it can be inserted using a directive in an .htaccess file.
>> http://httpd.apache.org/docs/2.0/mod/mod_headers.html
>> It looks as if the Apache 'script' directive could be used to enable
>> URIQA, but it requires installation of a CGI script (or something
>> similar), raising the bar a teeny bit (perhaps beyond
>> what's practical in certain deployments). (Not that .htaccess is
>> always permitted to use the header directive anyhow.)
>> http://httpd.apache.org/docs/2.0/mod/mod_actions.html
>> The problem is that I believe both Eran and Patrick, who say
>> conflicting things. We have talked a lot about technical merit and
>> generalities. Since the questions of practicality and simplicity are
>> empirical any hard data pro or con either side would be helpful,
>> especially as regards non-Apache platforms.
>> Jonathan
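For reference, the mod_headers approach Jonathan describes amounts to a single directive; a hypothetical .htaccess fragment (the URL and rel value are placeholders, and the Header directive requires mod_headers to be loaded plus FileInfo override permission for the directory):

```apache
# Hypothetical .htaccess fragment attaching a descriptor via the Link header.
Header set Link '<http://example.org/doc.rdf>; rel="describedby"'
```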
Received on Thursday, 5 March 2009 09:45:29 GMT
