
Re: URI serialization issues

From: Kendall Clark <kendall@monkeyfist.com>
Date: Wed, 18 Jan 2006 09:39:41 -0500
Message-Id: <593CA7F2-D643-4952-8D80-63D4DD60829C@monkeyfist.com>
Cc: dawg comments <public-rdf-dawg-comments@w3.org>, Pat Hayes <phayes@ihmc.us>
To: Mark Baker <distobj@acm.org>


On Jan 18, 2006, at 12:18 AM, Mark Baker wrote:

>
> Hey,
>
> On 1/17/06, Kendall Clark <kendall@monkeyfist.com> wrote:
>> 1. GET being 'safe' is always talked about in terms of side effects
>> like deleting a resource -- the DoS issue is different.
>
> That's the common use, yes, but the reality of "safe" is a bit more
> complicated.  Consider what Roy says here;

Eh, sorry, I'll consider the HTTP spec and webarch, but I don't
subscribe to the Fielding hadith.

> http://lists.w3.org/Archives/Public/www-tag/2002Apr/0207
>
> In particular he notes "money, BTW, is considered property for the
> sake of this definition", and each fulfilled request for data likely
> costs most publishers some amount of money, some more than others.
> But it's up to each publisher to decide which data gets a URI, since
> for each one published, a promise is made that the publisher will
> absorb those charges.

Even if this is worth considering (it's not clear to me that it is),
publishers and service providers already understand it, I think.

>> 2. How does the query coming in via POST instead of GET really help
>> anything? In both cases a service may send QueryRequestRefused in
>> response to a request that's too expensive to complete. (I'm cc'ing
>> Pat Hayes because he +1'd yr comments and I wanna make sure I know
>> what he thinks of this.)
>
> It would help things in the sense that there wouldn't be a URI around
> that spiders, pre-fetchers, and other automata might "bombard" with
> requests without understanding the cost to the publisher.

I'm skeptical of this, FWIW: that there will be these costly URIs
lying around and that they'll be bombarded by spiders.
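
For concreteness, here's roughly what such a URI would look like
under the protocol's GET binding. This is a minimal sketch of mine,
not anything from the spec: the endpoint address is made up, and only
the "query" parameter name follows the protocol draft.

    # Sketch: how a SPARQL query ends up inside a dereferenceable URI
    # under the GET binding. The endpoint address is hypothetical.
    import urllib.parse

    endpoint = "http://example.org/sparql"   # hypothetical service
    query = "SELECT ?s ?p ?o WHERE { ?s ?p ?o }"

    url = endpoint + "?" + urllib.parse.urlencode({"query": query})
    print(url)
    # -> http://example.org/sparql?query=SELECT+%3Fs+%3Fp+%3Fo+WHERE+...

Any spider that harvests that URL can replay the query with a bare
GET; that's the bombardment scenario. My doubt is about how many such
URIs will actually be lying around to harvest.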

>   POST
> doesn't work like that because it entails the client (potentially)
> incurring an obligation ... which is why spiders don't use it (that and
> the fact that they don't know *what* to POST).

Sure.

> Right.  There's no hard line here that can be described because each
> publisher will be willing to bear different costs.  All that you can
> do is describe the tradeoffs.

I'm just not convinced that belongs in a spec with SHOULD language.

>> So if there's a request that times out over GET, and I had some way
>> (how?) to tell the client to submit again via POST, what would be the
>> point of that? It will still time out via POST and I've only wasted
>> the client's time and the service's.
>
> Presumably you'd have a greater timeout with POST, or even none at
> all, because of the differences between POST and GET as described
> above.

I'm also skeptical of that.
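
If I follow the suggestion, the client-side pattern would be
something like the sketch below. Everything in it is illustrative:
the endpoint is invented, the timeout values are arbitrary, and
treating a timed-out GET as the cue to retry is my reading, not
anything the protocol pins down (a real service might instead send
QueryRequestRefused, as I noted above).

    # Sketch of the GET-then-POST fallback as I understand Mark's
    # suggestion. Endpoint and timeouts are made up for illustration.
    import urllib.error
    import urllib.parse
    import urllib.request

    ENDPOINT = "http://example.org/sparql"   # hypothetical service
    QUERY = "SELECT ?s WHERE { ?s ?p ?o }"

    def run_query(query):
        params = urllib.parse.urlencode({"query": query})
        try:
            # First try the GET binding, with a short timeout.
            return urllib.request.urlopen(ENDPOINT + "?" + params,
                                          timeout=5)
        except (TimeoutError, urllib.error.URLError):
            # Retry via POST with a much longer timeout. My worry:
            # an expensive query burns the same cycles twice.
            return urllib.request.urlopen(ENDPOINT,
                                          data=params.encode("ascii"),
                                          timeout=120)

    # response = run_query(QUERY)  # an http.client.HTTPResponse

The second attempt does nothing to make the query cheaper to answer;
it only keeps the query out of URI space.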

At any rate, Mark, I'll ask the WG about yr comments.

Cheers,
Kendall