- From: <noah_mendelsohn@us.ibm.com>
- Date: Mon, 3 Dec 2007 11:33:05 -0500
- To: www-tag@w3.org
On Dave Farber's Interesting People (IP) list I came across a posting [1] by Lauren Weinstein discussing a new system called ACAP [2]. I haven't grokked it in detail, but it seems to be a proposal from members of the publishing industry for a much more sophisticated approach to doing what robots.txt does, i.e. telling you what to crawl and what not to crawl on a given web site. From [2]:

> Following a successful year-long pilot project, ACAP (Automated
> Content Access Protocol) has been devised by publishers in
> collaboration with search engines to revolutionise the creation,
> dissemination, use, and protection of copyright-protected content on
> the worldwide web.
>
> ACAP is set to become the universal permissions protocol on the
> Internet, a totally open, non-proprietary standard through which
> content owners can communicate permissions for access and use to
> online intermediaries.

Lauren speculates that:

> Though ACAP is currently a voluntary standard, it might be assumed
> that future attempts will be made to give it some force of law and
> associated legal standing in court cases involving search engine use
> and display of indexed materials.

I have no idea whether that's true, but it seems to me that this is of potential interest to the TAG for at least two reasons: it's yet another form of site metadata [3], and if Lauren's speculations were to prove true, there might be implications for deep linking policy [4], etc.

Noah

[1] http://www.listbox.com/member/archive/247/2007/12/sort/time_rev/page/1/entry/1:3/20071203040958:77A08D88-A17F-11DC-B783-82021E0242B0/
[2] http://www.the-acap.org
[3] http://www.w3.org/2001/tag/group/track/issues/36
[4] http://www.w3.org/2001/tag/group/track/issues/25

--------------------------------------
Noah Mendelsohn
IBM Corporation
One Rogers Street
Cambridge, MA 02142
1-617-693-4036
--------------------------------------
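P.S. For concreteness, the baseline ACAP proposes to extend is the simple yes/no crawl decision that robots.txt already supports. Here is a minimal sketch of that existing check, using Python's standard urllib.robotparser module (the site URL and crawler name are hypothetical, and ACAP's additional permission directives would not be understood by this parser):

    # Sketch: ask a site's robots.txt whether a crawler may fetch a page.
    # The site URL and crawler name are hypothetical; ACAP's extended
    # directives are not interpreted by this standard-library parser.
    from urllib import robotparser

    rp = robotparser.RobotFileParser()
    rp.set_url("http://example.org/robots.txt")  # hypothetical site
    rp.read()  # fetch and parse the robots.txt file

    # True if "ExampleBot" is permitted to crawl the given URL.
    print(rp.can_fetch("ExampleBot", "http://example.org/articles/some-page"))

The point of the comparison is only that robots.txt answers crawl/no-crawl questions like the one above; ACAP, per the quoted text, aims to express permissions for access and use more broadly.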
Received on Monday, 3 December 2007 15:37:31 UTC