Re: Facebook's Open Graph Protocol

On Wed, Apr 28, 2010 at 9:06 AM, Alexander Korth <alex@ttbc.de> wrote:

> Hi all,
>
> The most important aspect here is that they do the marketing and make
> implementation easy enough to get site owners to annotate their sites.
> The choice of vocabulary is secondary.
> What was the problem, again, with site owners annotating web sites with
> meta tags years ago? It was too hard to do, the benefit was too vague,
> and it was not well marketed.
> What do you think?
>

One of the most widely used arguments was "nobody will trust embedded
metadata because people will lie and cheat to boost their rankings", but
this was all focussed on keyword search, where what mattered was the
matching of meta tag keywords against user searches. And indeed, meta tags
were stuffed with inaccurate metadata by SEOlogists and greedy webmasters.
This fueled the desire, especially in the microformats world, for an
emphasis on metadata derived from user-facing real content rather than
hidden stuff in the <head/>. This makes a lot of sense in the general
case: if you're prepared to show it to your users, you're less likely to
get away with publishing, for e.g. a real estate site, meta keywords =
"britney spears speares britny britnie pics". When the metadata could be
hidden away in the header, Web publishers were demonstrably less honest.
The "people lie" angle was also the flagship argument in rants like
http://www.well.com/~doctorow/metacrap.htm
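
For anyone who missed that era, the abuse looked roughly like this (an
illustrative sketch, not markup from any real site):

    <head>
      <title>Sunnyvale Real Estate Listings</title>
      <!-- invisible to users, so nothing discouraged stuffing it -->
      <meta name="keywords"
            content="britney spears speares britny britnie pics, real estate">
      <meta name="description" content="Britney pics and more!">
    </head>

None of this ever appeared on screen, so there was no social cost to lying
in it.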

What's different this time? I think it's the recommender aspect, plus the
sheer bulk of independent appraisal that Facebook's userbase brings.
Perhaps if delicious.com or digg or reddit or other link-sharing sites had
proposed such metadata, it would've got some traction too. But with the
Facebook deployment you get not only a massive userbase but also a
well-oiled flow, so that when one person clicks 'like', it shows up on the
screens of dozens of friends and contacts within seconds. And since some
of them will look around and decide whether or not they emote=like it too,
this is a good source of mass independent evaluations: not of the metadata
directly, but of the underlying site and/or the thing it describes.

Since the discovery mechanism is social rather than keyword matching, the
incentives change: publishers are rewarded for putting the page in the
right category (since movies will show up as such in aggregators), but
aren't particularly rewarded for adding bogus keywords or misleading
descriptions, since users will quickly see if they've been misled. If I
click through a "John liked Britney Pics" link only to discover it's
really a dull real estate site, (a) I won't click 'like', and (b) how long
will it be before FB and others add a "[confused/scared/annoyed? report
this!]" trail for misleading metadata? In other words, the microformats
emphasis on metadata that is used/visible remains relevant, although it is
the recommender UI that brings the data into use, rather than simple
embedding in HTML <body> markup.
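
For concreteness, the annotation at stake is tiny. Roughly following the
movie example in Facebook's own OGP documentation (URLs and values here
are illustrative, from memory):

    <html xmlns:og="http://opengraphprotocol.org/schema/">
    <head>
      <title>The Rock (1996)</title>
      <!-- still hidden in the <head>, but these values are exactly what
           friends see in their feeds when someone clicks 'like', so a lie
           here is immediately visible -->
      <meta property="og:title" content="The Rock">
      <meta property="og:type"  content="movie">
      <meta property="og:url"   content="http://www.imdb.com/title/tt0117500/">
      <meta property="og:image" content="http://ia.media-imdb.com/images/rock.jpg">
    </head>

Lie in og:type or og:title and the lie itself is what gets broadcast to
your friends, which is where the incentive story above does its work.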

cheers,

Dan
