Sure, it's a known problem.

But building more systems on top of it just makes the known problem more widespread.

That's when knowledge of the problem enters the customer domain, which causes support issues for proxy vendors.

Also, RFC 2616 recommends that a proxy which retrieves the whole entity should send a 206 back to the client with just the part requested.

If they do this, the client won't know the proxy is doing it. Say you were downloading a 100MB file in parallel pieces from 20 sites: the whole entity would be fetched 20 times, with the client waiting each time for the full 100MB to arrive before its piece comes back, so the proxy pulls 20 x 100MB = 2GB upstream to deliver a single 100MB download. And since the URIs differ, even if the proxy cached the entity, it wouldn't know it was the same file coming from the different sources.

In which case... maybe an extra header to make it explicit that this is what's going on, so a proxy can act accordingly: some sort of universally unique key for the file (e.g. a meta URI that's not the actual URI requested, but which identifies the entity itself). The proxy could then tell that the same file is being requested from multiple different locations, and return the pieces from the copy cached on the first request.
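To make that concrete, here's a minimal sketch in Python of what the proxy side might do; the "Entity-Key" header name and the fetch_and_scan hook are hypothetical, purely for illustration, not from any existing spec:

cache = {}  # entity key -> full entity body, already AV-scanned

def handle_range_request(url, headers, start, end, fetch_and_scan):
    # Key the cache on the entity identity when the client supplies
    # one, falling back to the request URI otherwise.
    key = headers.get("Entity-Key", url)
    if key not in cache:
        # First sight of this entity: retrieve and scan the whole thing.
        cache[key] = fetch_and_scan(url)
    # Later requests for the same entity, from any of the mirrors, are
    # served from the one cached copy as 206 Partial Content.
    return 206, cache[key][start:end + 1]

With something like this, range requests for http://mirror1/file.iso and http://mirror2/file.iso carrying the same key would all be satisfied from the single copy fetched and scanned on the first request.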

Adrien

Henrik Nordstrom wrote:
Fri 2009-07-31 at 11:11 +1200, Adrien de Croy wrote:

So, any proposed system that is going to use Range requests is going to run into problems with proxies that perform AV functions.

Of course, but it's trivially detected by the first response being a 200 instead of the expected 206, before going out in parallel, so I don't see it as a big problem for the proposed mechanism. In addition, download managers etc. already have to deal with this today, and it's by no means a new problem.

Regards
Henrik
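For what it's worth, the detection Henrik describes might look like this; a minimal sketch assuming Python 3's standard urllib, with the 1KB probe size an arbitrary choice:

import urllib.request

def path_honours_ranges(url):
    # Probe with one small Range request before fanning out in parallel.
    # 206 means ranges survive end to end; 200 means something on the
    # path (e.g. an AV proxy) is expanding them to the full entity.
    req = urllib.request.Request(url, headers={"Range": "bytes=0-1023"})
    with urllib.request.urlopen(req) as resp:
        return resp.status == 206

A download manager that gets False back would fall back to a single sequential fetch instead of issuing parallel Range requests.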


-- 
Adrien de Croy - WinGate Proxy Server - http://www.wingate.com