- From: Henrik Nordstrom <henrik@henriknordstrom.net>
- Date: Sat, 15 Nov 2008 01:27:28 +0100
- To: Werner Baumann <werner.baumann@onlinehome.de>
- Cc: Yves Lafon <ylafon@w3.org>, ietf-http-wg@w3.org
- Message-Id: <1226708848.26491.52.camel@henriknordstrom.net>
On fre, 2008-11-14 at 23:10 +0100, Werner Baumann wrote:
> "cases where the validator in use does not allow reliable identification
> of changes". In current practise this means changes within the same
> second, and there is nothing that restricts the kind of changes that
> might occur.

I assume that by "current practise" you mean the Apache ETag implementation? There this is only true under very specific conditions. Most sub-second changes do in fact get properly reflected in the weak ETag. The server application needs to do some quite special things, or be very unlucky, for a change not to be reflected in the ETag, unless of course the ETag algorithm has been manually degraded in quality (which may be needed for replicated cluster setups).

Yes, there are some cases where the simple algorithm used by Apache will fail and emit the same weak ETag for two quite different objects, but in real-life use those are quite rare. In fact I would argue that it's probably more likely that the content gets updated while being sent, making even their strong ETags "worthless", and the same goes for any server on any OS where files may be updated while being read by another application, unless you buffer the whole selected representation to calculate the ETag.

And no, it is not strictly compliant of Apache to return the same ETag when the content is completely different, but the Apache team has accepted this risk, as the likelihood of it happening in any normal setup and use is very low. In nearly all cases the kinds of changes the ETag algorithm would miss are indeed of a "weak semantic level".

Regards
Henrik
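A minimal sketch of the failure mode under discussion, assuming Apache's default inode/mtime/size ETag scheme and a timestamp source with one-second resolution; the field layout and all names here are illustrative, not Apache's actual code:

```python
import os

def apache_style_etag(path: str) -> str:
    # Illustrative weak validator in the spirit of Apache's default
    # inode/mtime/size ETag; the hex field layout is an assumption,
    # not Apache's exact output format.
    st = os.stat(path)
    # Truncating to whole seconds models a filesystem (or ETag
    # source) with one-second timestamp resolution.
    return 'W/"%x-%x-%x"' % (st.st_ino, int(st.st_mtime), st.st_size)

with open("demo.txt", "w") as f:
    f.write("version one")
before = apache_style_etag("demo.txt")

# Rewrite with different content of identical length, within the
# same clock second: inode, truncated mtime and size all match,
# so the weak ETag fails to reflect the change.
with open("demo.txt", "w") as f:
    f.write("version two")
after = apache_style_etag("demo.txt")

# True only when both writes land in the same second and keep the
# same size and inode -- the rare combination of conditions
# described above.
print(before == after)
```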
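And a sketch of the buffering approach mentioned above: reading the whole selected representation first, so the validator and the bytes sent come from the same snapshot even if the file is rewritten mid-response. A content hash as the strong validator is an assumption chosen for illustration; any value derived from the buffered bytes would do:

```python
import hashlib

def buffered_strong_etag(path: str) -> tuple[str, bytes]:
    # Buffer the entire representation before deriving the ETag,
    # so a concurrent writer cannot make the validator disagree
    # with the body that is actually transmitted.
    with open(path, "rb") as f:
        body = f.read()
    return '"%s"' % hashlib.sha256(body).hexdigest(), body
```

The obvious cost is memory: the whole representation must fit in a buffer before the first byte is sent.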
Received on Saturday, 15 November 2008 00:28:23 UTC