
RE: Standardizing Firefox's Implementation of Link Fingerprints

From: Larry Masinter <LMM@acm.org>
Date: Tue, 3 Jul 2007 01:15:39 -0700
To: "'Edward Lee'" <edilee@mozilla.com>
Cc: <ietf-http-wg@w3.org>
Message-ID: <000301c7bd4a$583004c0$08900e40$@org>

The most obvious complaint is that you are usurping the
fragment identifier syntax of URIs for a purpose other than
what was intended for it.  I believe many applications of URIs
will strip the fragment identifier off at a relatively high level
in the application stack. Conforming fragment processing requires
determining the MIME type of the result and interpreting the
fragment identifier according to that MIME type. http://site.com/file#hash is
defined to determine the meaning of 'hash' by the MIME type
of the result of accessing http://site.com/file .

You could, instead, define a new URI scheme, e.g.,

hashcheck:sha256:abc123:http://site.com/file

hashcheck:<hashscheme>:<hashvalue>:originalURI

which says 'retrieve data from originalURI but reject it if it doesn't
hash to <hashvalue>'. This would be an extension with its own
clear meaning, rather than trying to redirect something that is
already defined.
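For illustration, a client might split and check such a URI along these lines. This is only a sketch of the idea, assuming SHA-256 and hex-encoded hash values; the parsing choices and function names are hypothetical, not part of any proposal:

```python
import hashlib

def parse_hashcheck(uri):
    """Split a hypothetical hashcheck URI of the form
    hashcheck:<hashscheme>:<hashvalue>:<originalURI>
    into its three parts. maxsplit=3 leaves any colons
    inside originalURI (e.g. 'http:') intact."""
    scheme, hashscheme, hashvalue, original = uri.split(":", 3)
    if scheme != "hashcheck":
        raise ValueError("not a hashcheck URI")
    return hashscheme, hashvalue, original

def verify(body, hashscheme, hashvalue):
    """Reject the retrieved body unless it hashes to <hashvalue>."""
    return hashlib.new(hashscheme, body).hexdigest() == hashvalue

body = b"example content"
expected = hashlib.sha256(body).hexdigest()
uri = "hashcheck:sha256:" + expected + ":http://site.com/file"

hashscheme, hashvalue, original = parse_hashcheck(uri)
assert original == "http://site.com/file"
assert verify(body, hashscheme, hashvalue)
assert not verify(b"tampered content", hashscheme, hashvalue)
```

The point of the explicit scheme prefix is that nothing in the stack can mistake the hash for a fragment identifier and strip it before the check runs.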

Even then, you might have attacks which replace the
content-type or other headers in a result. Are you only
hashing the content body or the entire HTTP result message?
It sounds harder to create a Trojan just by supplying a different
header, but there might be circumstances where that is
possible.

I'm not clear what the threat is that this mechanism is
blocking. If someone wants to insert a Trojan, how is it
they can do that but not also modify the hash to match
the malicious content?
Received on Tuesday, 3 July 2007 08:15:46 GMT
