[ISSUE-36][ISSUE-25] Automated Content Access Protocol (ACAP)

On Dave Farber's Interesting People (IP) list I came across a posting [1] 
by Lauren Weinstein discussing a new system called ACAP [2].  I haven't 
grokked it in detail, but it seems to be a proposal from members of the 
publishing industry for a much more sophisticated approach to what 
robots.txt does, i.e., telling crawlers what they may and may not crawl 
on a given web site.  From [2]:

> Following a successful year-long pilot project, ACAP (Automated 
> Content Access Protocol) has been devised by publishers in 
> collaboration with search engines to revolutionise the creation, 
> dissemination, use, and protection of copyright-protected content on
> the worldwide web.
> ACAP is set to become the universal permissions protocol on the 
> Internet, a totally open, non-proprietary standard through which 
> content owners can communicate permissions for access and use to 
> online intermediaries.
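To make the contrast concrete, here is a minimal sketch of all that 
robots.txt can express today: a per-agent yes/no answer to "may I fetch 
this URL?", with nothing about use, display, or licensing.  The bot name 
and URLs below are hypothetical; the parsing uses Python's standard 
robotparser module.

```python
# Sketch: how a crawler interprets a conventional robots.txt file.
# robots.txt answers only "may I fetch this URL?" per user agent;
# ACAP proposes a richer permissions vocabulary on top of this idea.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /archive/
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# "ExampleBot" and example.org are illustrative, not real.
print(parser.can_fetch("ExampleBot", "http://example.org/index.html"))
print(parser.can_fetch("ExampleBot", "http://example.org/archive/a"))
```

The first call prints True (the page matches the blanket Allow rule) and 
the second prints False (it falls under the /archive/ Disallow rule); 
that binary crawl/no-crawl decision is the entire expressive range of the 
existing protocol.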

Lauren speculates that:

> Though ACAP is currently a voluntary standard, it might be assumed 
> that future attempts will be made to give it some force of law and 
> associated legal standing in court cases involving search engine use
> and display of indexed materials. 

I have no idea whether that's true, but it seems to me that this is of 
potential interest to the TAG for at least two reasons:  it's yet another 
form of site metadata [3], and if Lauren's speculations were to prove 
true, there might be implications for deep linking policy [4], etc.


[2] http://www.the-acap.org 
[3] http://www.w3.org/2001/tag/group/track/issues/36
[4] http://www.w3.org/2001/tag/group/track/issues/25

Noah Mendelsohn 
IBM Corporation
One Rogers Street
Cambridge, MA 02142

Received on Monday, 3 December 2007 15:37:31 UTC