Re: ldp-ISSUE-10 (Guidance around ETags): Include clarifications and guidance around ETags [Linked Data Platform core]


+1 to using weak ETags, for the good reasons mentioned above. This is what
I resort to in the platform I am building.

Note however that, in strict HTTP 1.1, weak ETags cannot be used to
validate PUT content. Section 14.24 (If-Match) of RFC 2616 says:

A server MUST use the strong comparison function (see section 13.3.3)
to compare the entity tags in If-Match.

This restriction no longer exists in HTTPbis, so I take that as an
acknowledgement that it was a misfeature. But I guess everyone should be
aware of it if the group decides to go down that path (which, again, I'm
in favor of).
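To make the difference concrete, here is a small sketch (the function and
variable names are mine, not from either spec) of the strong comparison
function RFC 2616 mandates for If-Match versus the weak comparison HTTPbis
allows:

```python
def parse_etag(etag):
    """Return (is_weak, opaque_value) for an ETag like W/"abc" or "abc"."""
    if etag.startswith('W/'):
        return True, etag[2:]
    return False, etag

def strong_compare(a, b):
    # Strong comparison: both tags must be strong AND byte-identical.
    a_weak, a_val = parse_etag(a)
    b_weak, b_val = parse_etag(b)
    return (not a_weak) and (not b_weak) and a_val == b_val

def weak_compare(a, b):
    # Weak comparison: ignore the W/ prefix, compare the opaque values.
    return parse_etag(a)[1] == parse_etag(b)[1]

# Under RFC 2616, an If-Match carrying a weak ETag can never succeed,
# even against the very same weak ETag the server just sent:
assert not strong_compare('W/"v1"', 'W/"v1"')
# The weak comparison function would have accepted it:
assert weak_compare('W/"v1"', '"v1"')
```

So a server that only emits weak ETags effectively disables conditional
PUT for strictly conforming HTTP 1.1 clients.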

I also think we should be clear about the rationale for using weak ETags.
For example, something like:

The server MAY provide a strong ETag (ref), but only if it can guarantee
that the same graph will always be serialized in exactly the same way
(byte-wise). This is not always the case, as the order of triples and the
labels of blank nodes are not significant in RDF and may vary across
serializations. If the server cannot ensure that, the ETags it provides
MUST be weak ETags.
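On the "efficient algorithm" question raised below, one common approach is
an order-independent hash: hash each triple separately and combine the
digests with XOR, so the result does not depend on triple order. A rough
sketch (my own naming; it only handles ground graphs, since blank nodes
would first need a canonical labelling):

```python
import hashlib

def weak_graph_etag(triples):
    """Order-independent weak ETag for a set of (s, p, o) term strings.

    Assumes a ground graph (no blank nodes) and no duplicate triples:
    XOR-combining makes the digest independent of triple order, but a
    duplicated triple would cancel itself out, so deduplicate first.
    """
    digest = 0
    for s, p, o in set(triples):
        line = f"{s} {p} {o}".encode("utf-8")
        digest ^= int.from_bytes(hashlib.sha256(line).digest()[:8], "big")
    return 'W/"%016x"' % digest

g1 = [("<a>", "<p>", "<b>"), ("<a>", "<q>", '"x"')]
g2 = list(reversed(g1))
assert weak_graph_etag(g1) == weak_graph_etag(g2)
```

This is only a sketch of the idea, not a proposal for the spec; a real
implementation would need to settle on a canonical term serialization and
deal with blank nodes.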


On Mon, Feb 4, 2013 at 11:58 AM, Henry Story <> wrote:

> On 4 Feb 2013, at 11:53, Steve Battle <> wrote:
> >> -----Original Message-----
> >> From: Wilde, Erik []
> >
> >> On 2013-02-04 09:24 , "Raúl García Castro" <> wrote:
> >>> .- I think that using ETags should be a MUST, since it is the minimum
> >>> requirement for detecting conflicts in updates.
> > ...
> >>
> >>> .- I would keep things simple and not mention in the specification
> >>> things like using :weakEtag properties in resource descriptions.
> >>
> >> +1, let's keep HTTP concepts in HTTP.
> >
> > To be clear _here_  (yes - I did raise etags in resource descriptions in
> > another context), we're recommending using weak ETags, not in resource
> > descriptions, but in the response header.
> > Can we agree that the use of weak ETags with RDF content should at least
> > be a best practice recommendation?
> +1 for best practice.
> Also while we are at it, is there a good efficient algorithm for
> calculating this?
> ( I suppose just a hash of the hash of every triple? )
> >
> > Steve.
> >
> Social Web Architect

Received on Monday, 4 February 2013 11:59:55 UTC