W3C home > Mailing lists > Public > www-tag@w3.org > March 2009

Re: Q

From: <Patrick.Stickler@nokia.com>
Date: Mon, 2 Mar 2009 00:20:41 +0100
To: <eran@hueniverse.com>, <julian.reschke@gmx.de>
CC: <jar@creativecommons.org>, <connolly@w3.org>, <www-tag@w3.org>
Message-ID: <C5D0E4E9.E1E7%patrick.stickler@nokia.com>

Fair enough. I would ask, at least, that you either correct the erroneous
statements about the shortcomings of URIQA in your draft, or remove
reference to URIQA entirely from your draft.

Patrick



On 2009-03-02 01:11, "ext Eran Hammer-Lahav" <eran@hueniverse.com> wrote:

> I would really love to dispute almost every statement in this reply, but
> this can go on forever, so I'm simply going to thank you for taking the
> time to answer and let you have the last word, noting clearly that we
> disagree on this topic.
>
> :-)
>
> EHL
>
>> -----Original Message-----
>> From: Patrick.Stickler@nokia.com [mailto:Patrick.Stickler@nokia.com]
>> Sent: Sunday, March 01, 2009 3:07 PM
>> To: Eran Hammer-Lahav; julian.reschke@gmx.de
>> Cc: jar@creativecommons.org; connolly@w3.org; www-tag@w3.org
>> Subject: Q
>>
>>
>>
>>
>> On 2009-03-01 22:04, "ext Eran Hammer-Lahav" <eran@hueniverse.com>
>> wrote:
>>
>>> I don't need a convincing argument. You do.
>>
>> Well, I am really not interested in yet another long, drawn-out debate
>> of the same points that have been put forth in the past, since I don't
>> have the bandwidth. I was originally motivated to offer some comments
>> (mostly corrections) because it seemed from your draft that you (and
>> other "newcomers") had not understood URIQA very well, and I offer yet
>> further corrections below.
>>
>> I do not, however, intend to consume any more of my own bandwidth, or
>> that of the others on this distribution, as this seems, as before, to
>> be a suboptimal use of my time and energy.
>>
>> I will simply continue solving the specific technical challenges that
>> are my prime responsibility in the best manner possible, and if anyone
>> else benefits from anything I do, great. If not, c'est la vie.
>>
>>>
>>> HTTP 1.1 is widely deployed in web servers, proxies, caches, and
>>> clients. URIQA is not. The cost of getting the entire web to support a
>>> new HTTP method is huge, especially for a read-access-oriented method
>>> like MGET, which must be cacheable and accessible (natively) from the
>>> most common web platforms (that is, JS, Flash, PHP, etc.).
>>
>> There are many costs to successfully deploying semantic web solutions,
>> and most are not tied to server enhancements. It would be an error to
>> focus too narrowly on the costs associated with supporting some
>> additional server functionality while disregarding the costs and
>> complexity of creating, managing, and accessing formal metadata in the
>> most modular, efficient, consistent, and scalable manner.
>>
>> And your assertion that "the entire web" would need to support such
>> additional server functionality is invalid. Only those servers whose
>> owners wish to serve formal descriptions to semantic web agents would
>> need to be enhanced, and, with the exception of <link> elements, the
>> other linking methods will require as much or more server modification
>> and enhancement than adding a modular, self-contained URIQA solution
>> to the environment.
>>
>> And BTW, URIQA also obviates the need for an explicit "host metadata"
>> solution, since one can simply execute an MGET on the server root URI
>> to get such "host metadata" -- Occam's Razor would favor the single
>> solution of URIQA over numerous linking methods and special "host
>> metadata" files.
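On the wire, such a root-level request is just an ordinary HTTP message with MGET as the method token. A minimal sketch, assuming a hypothetical host name (Python here only formats the raw message for illustration):

```python
def root_mget_request(host: str) -> str:
    """Build the raw HTTP message a semantic web agent would send to
    retrieve "host metadata" via URIQA: an MGET on the server root URI.
    Illustrative only; the host name is a placeholder."""
    return (
        "MGET / HTTP/1.1\r\n"              # URIQA extension method, root URI
        f"Host: {host}\r\n"
        "Accept: application/rdf+xml\r\n"  # ask for an RDF description
        "\r\n"
    )

print(root_mget_request("example.org"))
```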
>>
>>>
>>> As I said before, I like the concept of MGET very much. But I think it
>>> fails certain requirements, such as:
>>>
>>> 1. The ability to assign URIs to metadata. MGET doesn't help me when I
>>> want to express that B describes A. While B is usually used in
>>> conjunction with A, it is a discrete resource.
>>
>>
>> Sorry, but that's not correct, and if you look at what is returned by
>> sw.nokia.com, you'll see that every description has a distinct URI
>> denoting it (separate from the request URI).
>>
>> The URIQA spec does not mandate providing such a distinct URI, but it
>> is recommended.
>>
>>> Producing a representation of a descriptor when that descriptor
>>> doesn't have its own URI seems like a pretty bad violation of web
>>> architecture.
>>
>> I agree, but as I've noted, this is not a shortcoming of URIQA.
>>
>>>
>>> 2. It fails at multiple levels of meta. If C describes B and B
>>> describes A, then using MGET, all I have is a URI for A... I have no
>>> way of obtaining C.
>>
>> Again, incorrect. Presuming that [A] is the URI denoting A:
>>
>> MGET [A] -> description of A (i.e. B), where the response is denoted by [B]
>> MGET [B] -> description of B (i.e. C), where the response is denoted by [C]
>> MGET [C] -> description of C, etc.
>>
>> Easy (and simple, and efficient, and consistent).
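That chain can be sketched in a few lines of Python (an illustrative sketch only: the host and paths are hypothetical, and http.client is used because it passes extension method tokens such as MGET through to the server unchanged):

```python
import http.client

def mget(host, path):
    """Retrieve the authoritative description of the resource denoted by
    `path` using the URIQA MGET extension method. Illustrative sketch:
    no special client support is needed, since http.client sends
    arbitrary method tokens as-is."""
    conn = http.client.HTTPConnection(host)
    conn.request("MGET", path, headers={"Accept": "application/rdf+xml"})
    resp = conn.getresponse()
    body = resp.read().decode("utf-8", "replace")
    conn.close()
    return resp.status, body

# Walking the levels of description (hypothetical paths): if the
# description returned for [A] is itself denoted by a distinct URI [B],
# an MGET on [B] yields the description of the description, and so on:
#
#   status, b = mget("sw.nokia.com", "/resource-a")     # B describes A
#   status, c = mget("sw.nokia.com", "/description-b")  # C describes B
```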
>>
>>>
>>> 3. I strongly disagree that it complies with the Equal Access
>>> Principle [1].
>>
>> Well, with all due respect, given the number of misunderstandings you
>> have clearly had about URIQA, it's hard for me to accept that your
>> conclusions on any particular point are valid (they may be, but since
>> I've not seen sufficient comments from you indicating that you actually
>> understand how URIQA works and what it offers, I'm unable to give you
>> the benefit of the doubt).
>>
>>> In a previous email you listed all the issues in deploying URIQA and
>>> the workarounds and hacks needed to get it to work. I am unwilling and
>>> unable to go on a crusade to get URIQA adopted so that the community I
>>> serve will be able to use it.
>>
>> I can appreciate that position (though I think it is overstated).
>>
>> I think it depends on where you want to ultimately place the burden.
>>
>> URIQA simplifies things for creating, managing, and especially
>> accessing formal metadata.
>>
>> Linking gives the illusion of simplifying things for the server
>> admins/owners (but folks will eventually find out just how much they
>> will need to do to reach a critical mass of adoption and use, and in
>> the end things would be a lot easier and cheaper with an approach such
>> as URIQA).
>>
>> Ultimately, the semantic web will succeed or fail based on (a) the
>> ease with which novel applications can be created and (b) the volume
>> of useful metadata.
>>
>> Whatever solution(s) become standardized (either de facto or otherwise)
>> will need to address those points effectively.
>>
>>>
>>> If you read my full proposal for Link-based Resource Descriptor
>>> Discovery [2],
>>
>> I have.
>>
>>> you'd know that none of the 3 methods proposed offers a complete
>>> solution.
>>
>> If by "the 3 methods" you refer to the different methods of using
>> linking to associate descriptions with resources, then yes, I certainly
>> agree that they do not offer a complete solution (not even combined).
>>
>>> That's why I have 3. Criticizing links by picking on a single form of
>>> link (header, element, host-meta patterns) is pointless, because the
>>> first thing I said in my draft is that none of them is complete.
>>
>> Well, I didn't pick on any one specifically. I think most of my
>> comments apply to all three.
>>
>> And I never stated that linking was not a useful technique for
>> associating descriptions with resources (in fact, I explicitly stated
>> the opposite). Rather, my concern is that such techniques are neither
>> simple nor optimal enough for a sufficiently broad range of semantic
>> web agents to serve as the primary standardized way that semantic web
>> agents ask web authorities about resources denoted by URIs grounded in
>> those domains. I've detailed several use cases which give rise to these
>> concerns, so I won't repeat them.
>>
>> And I've pointed out those problematic use cases many times before,
>> and proponents of alternatives to URIQA never step up and address them,
>> so I must conclude that none are able to offer a reasonable account.
>>
>>>
>>> I studied URIQA carefully when I performed my analysis, and it failed
>>> my requirements. So far I have not heard anything new to persuade me
>>> otherwise.
>>
>> Well, in all fairness, it doesn't appear you studied it well enough,
>> since you seem to have gotten most of the key points wrong, and
>> therefore your conclusions are based on misunderstanding. I also would
>> have been, and am, most willing to answer any questions you might have
>> had, or still have, about URIQA, if you truly are seeking to study the
>> problem and all reasonable solutions objectively.
>>
>> Regards,
>>
>> Patrick
>>
>>>
>>> EHL
>>>
>>> [1] http://www.hueniverse.com/hueniverse/2009/02/the-equal-access-principal.html
>>> [2] http://tools.ietf.org/html/draft-hammer-discovery
>>>
>>>
>>>
>>>
>>>> -----Original Message-----
>>>> From: Patrick.Stickler@nokia.com [mailto:Patrick.Stickler@nokia.com]
>>>> Sent: Tuesday, February 24, 2009 9:24 PM
>>>> To: Eran Hammer-Lahav; julian.reschke@gmx.de
>>>> Cc: jar@creativecommons.org; connolly@w3.org; www-tag@w3.org
>>>> Subject: Re: Uniform access to metadata: XRD use case.
>>>>
>>>>
>>>>
>>>>
>>>> On 2009-02-24 19:00, "ext Eran Hammer-Lahav" <eran@hueniverse.com>
>>>> wrote:
>>>>
>>>>> I'll separate the two for my next draft and correct this.
>>>>>
>>>>> Adding URIQA support in many hosted environments or large corporate
>>>>> deployments isn't simple. It sets a pretty steep threshold on
>>>>> adoption [1].
>>>>
>>>> I've seen such comments before, but have never seen a convincing
>>>> argument.
>>>>
>>>> If you are going to be doing "semantic web stuff" and publishing
>>>> metadata about resources, then you are going to have to do something
>>>> more than just your plain out-of-the-box web server solution, both
>>>> for serving the metadata and for managing/authoring the metadata.
>>>>
>>>> A "plug-in" solution like URIQA, which can be integrated into any
>>>> web server either by a method-redirection proxy or by having the
>>>> server pass unsupported method requests to it, is trivially easy to
>>>> add.
>>>>
>>>> After all, how hard is it to, e.g., add WebDAV to a web site? In
>>>> most cases, pretty trivial. It's no different for an approach such as
>>>> URIQA.
>>>>
>>>>> I actually like the MGET approach a lot, but I can't sell it to 90%
>>>>> of my use cases. Consider me an extreme pragmatist...
>>>>>
>>>>> EHL
>>>>>
>>>>> [1] http://www.hueniverse.com/hueniverse/2009/02/the-equal-access-principal.html
>>>>
>>>> Well, I read it, but I don't see how URIQA conflicts with your
>>>> "equal access principle"; in fact, it seems to be quite in tune with
>>>> it.
>>>>
>>>> Patrick
>>>>
>>>>
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Patrick.Stickler@nokia.com
>> [mailto:Patrick.Stickler@nokia.com]
>>>>>> Sent: Tuesday, February 24, 2009 8:48 AM
>>>>>> To: Eran Hammer-Lahav; julian.reschke@gmx.de
>>>>>> Cc: jar@creativecommons.org; connolly@w3.org; www-tag@w3.org
>>>>>> Subject: Re: Uniform access to metadata: XRD use case.
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On 2009-02-24 18:18, "ext Eran Hammer-Lahav" <eran@hueniverse.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Both of which are included in my analysis [1] for the discovery
>>>>>> proposal.
>>>>>>
>>>>>> A few notes:
>>>>>>
>>>>>> The statement "Minimum roundtrips to retrieve the resource
>>>>>> descriptor: 2" is not correct for URIQA. Only one is needed.
>>>>>>
>>>>>> URIQA also supports self-declaration. The descriptor returned can,
>>>>>> of course, include statements about the descriptor itself; typically
>>>>>> the descriptor would by default be a CBD, which would not include
>>>>>> such statements, but there's no reason why it couldn't.
>>>>>>
>>>>>> Not sure why you would consider "Scale and Technology Agnostic" a
>>>>>> negative. In real practice, if you have a server that is going to
>>>>>> offer authoritative metadata, you have to enhance the server in
>>>>>> some manner (e.g. to insert links, etc.), so being able to modularly
>>>>>> add a component which doesn't intrude upon the existing core web
>>>>>> server functionality, but can operate in an auxiliary fashion,
>>>>>> satisfying requests for metadata in a manner not intrinsically tied
>>>>>> to how representations are served, is a plus in my book. And
>>>>>> solutions such as links force content publishers to mint extra URIs
>>>>>> to identify the descriptors explicitly, when usually clients don't
>>>>>> care about the identity of the descriptor; they just want the
>>>>>> metadata. So again, "technology agnostic" = "modular" in my book,
>>>>>> and that's always a plus.
>>>>>>
>>>>>> Perhaps you should split URIQA from PROPFIND, since your summary
>>>>>> of PROPFIND does not correctly capture its properties and suggests
>>>>>> that URIQA is essentially equivalent, which it clearly is not.
>>>>>>
>>>>>> Cheers,
>>>>>>
>>>>>> Patrick
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> EHL
>>>>>>>
>>>>>>> [1] http://tools.ietf.org/html/draft-hammer-discovery-02#appendix-B.2
>>>>>>>
>>>>>>>> -----Original Message-----
>>>>>>>> From: Julian Reschke [mailto:julian.reschke@gmx.de]
>>>>>>>> Sent: Tuesday, February 24, 2009 1:45 AM
>>>>>>>> To: Patrick.Stickler@nokia.com
>>>>>>>> Cc: Eran Hammer-Lahav; jar@creativecommons.org; connolly@w3.org;
>>>>>> www-
>>>>>>>> tag@w3.org
>>>>>>>> Subject: Re: Uniform access to metadata: XRD use case.
>>>>>>>>
>>>>>>>> Patrick.Stickler@nokia.com wrote:
>>>>>>>>> ...
>>>>>>>>> Agents which want to deal with authoritative metadata use
>>>>>>>> MGET/MPUT/etc.
>>>>>>>>> ...
>>>>>>>>
>>>>>>>> Same with PROPFIND and PROPPATCH, btw.
>>>>>>>>
>>>>>>>> BR, Julian
>>>>>
>>>
>
Received on Sunday, 1 March 2009 23:18:31 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Thursday, 26 April 2012 12:48:13 GMT