
Re: Standardizing Firefox's Implementation of Link Fingerprints

From: Travis Snoozy <ai2097@users.sourceforge.net>
Date: Tue, 3 Jul 2007 07:31:56 -0700
To: "Mark Baker" <distobj@acm.org>
Cc: "Roy T. Fielding" <fielding@gbiv.com>, "Edward Lee" <edilee@mozilla.com>, ietf-http-wg@w3.org
Message-ID: <20070703073156.68932189@localhost>

On Tue, 3 Jul 2007 09:32:52 -0400, "Mark Baker" <distobj@acm.org> wrote:

> 
> On 7/2/07, Roy T. Fielding <fielding@gbiv.com> wrote:
> >
> > On Jul 2, 2007, at 4:21 PM, Edward Lee wrote:
> > > For Firefox 3, there are patches [1] that implement Link
> > > Fingerprints, which provide automatic resource verification for
> > > URIs that look like http://site.com/file#hash(sha256:abc123) so
> > > that link providers can be sure that end users download the exact
> > > file that the provider intended (and not a trojaned download).
> >
> > Identifiers should not be abused in this way.  Adding metadata to a
> > URI that is orthogonal to its identifying purpose duplicates the
> > space of references and splits the power of the resulting
> > resources.  The same task can be accomplished better by specifying
> > the hash in an attribute of the link/anchor instead, and deploying
> > that is far less likely to confuse existing clients.
> 
> Exactly my thoughts.  It might look like this;
> 
> <a href="http://site.com/file" hash="sha256:abc123">the file</a>

You're missing the point, insofar as the URL no longer has the
requisite data -- the link can only be made via HTML, so that (e.g.) I
couldn't right-click, copy the link, and paste it into another application.
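To make the copy-link point concrete, here's a rough sketch (not the
actual Firefox patch; the function names are mine) of what a client
honoring the fragment form would do -- notice the digest travels with
the URL string itself, so it survives a copy-paste:

```python
import hashlib
import re

def parse_fingerprint(url):
    """Split a Link Fingerprints-style URL into (bare_url, hex_digest).

    Returns (url, None) when no "#hash(sha256:...)" fragment is present.
    """
    m = re.match(r"^(.*)#hash\(sha256:([0-9a-fA-F]+)\)$", url)
    if not m:
        return url, None
    return m.group(1), m.group(2).lower()

def verify(data, expected_hex):
    """Check downloaded bytes against the expected SHA-256 digest."""
    return hashlib.sha256(data).hexdigest() == expected_hex

digest = hashlib.sha256(b"payload").hexdigest()
url = "http://site.com/file#hash(sha256:%s)" % digest
bare, expected = parse_fingerprint(url)
print(bare)                        # http://site.com/file
print(verify(b"payload", expected))  # True
```

With the hash in an href attribute instead, nothing analogous is
possible: copying the href yields only the bare URL.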

That said, URLs are already _way_ too long within their regular problem
space. We wouldn't need services like tinyurl if that weren't the case
-- making links even longer is counterproductive if there is any goal
of keeping the darn things human-readable.

As to why it's bad to try to add a feature to the URL syntax in this
way -- see the metalink discussion that rattled through the list back
in April [1]. Now, imagine that syntax, plus the hashing syntax. How
would you make them work together? Could they? Would *you* want to look
at a link like that?

Really, what it comes down to is that we need a full file format for
richly describing links. And, in fact, we have such a format -- RDF. The
entire point of RDF is to say, "Hey, see this resource? Here's a whole
bunch of information about it." The point of the URL, on the other
hand, is to provide a handle to get hold of something -- preferably in
such a way that the URL has some semi-discernible human-readable
meaning to it -- or, barring that, is at least _short_.
"http://example.com/2007/07/03/What-I-Had-For-Breakfast/"
is much nicer than
"http://example.com/19863"
is much nicer than
"http://example.com/1875,253,098676/ba2fcc201de92c8b.html?arb09="<...>

Yes, if metadata is put into a file, it's not part of the URL anymore.
However, URLs are just bad, bad, bad ways to shuffle _lots_ of data
around. Ideally, URLs that correlate to a document with complicated
extra-URL information would actually point at a metadata file. This is
how, e.g., ASX works: you get a link to the file, then your media
player opens that file, sees a few more links and follows them. The
process can repeat, and metadata is thrown in all along the way. It's a
little extra work for humans who want to chase down the "last hop" link,
but it's still all human-readable.
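The indirection step above can be sketched in a few lines. The metadata
format here is hypothetical -- the element and attribute names are made
up for illustration (ASX and RDF each have their own real vocabularies)
-- but the shape of the process is the same: follow the link, read the
metadata, then fetch and check the real resource.

```python
import hashlib
import xml.etree.ElementTree as ET

# Hypothetical metadata document a URL might point at, standing in for
# an ASX- or RDF-style description of the "real" resource.
META = """
<links>
  <entry href="http://example.com/file.iso"
         sha256="{digest}"/>
</links>
""".format(digest=hashlib.sha256(b"iso contents").hexdigest())

def entries(xml_text):
    """Yield (href, sha256) pairs from the metadata document."""
    for e in ET.fromstring(xml_text).iter("entry"):
        yield e.get("href"), e.get("sha256")

for href, digest in entries(META):
    # A real client would fetch href here; fixed bytes stand in for
    # the downloaded payload.
    data = b"iso contents"
    print(href, hashlib.sha256(data).hexdigest() == digest)
```

The URL stays short and human-readable; the hash (and any other
metadata the metalink folks want) lives one hop away in the file.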

-- 
Travis

[1]http://lists.w3.org/Archives/Public/ietf-http-wg/2007AprJun/0042.html
Received on Tuesday, 3 July 2007 14:32:11 GMT
