Facebook Site Scraping TOS and better protocol for indexing

Facebook has published a new TOS [1] for site scraping.
According to item 2 of the TOS, you now need express written permission for Automated Data Collection:

	"2. You will not engage in Automated Data Collection 
	without Facebook’s express written permission."

They have also created a robots.txt [2] with rules for some areas of the site. I haven't evaluated which parts are really covered.
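
For those unfamiliar with the format, robots.txt rules are per-user-agent Allow/Disallow directives. The snippet below is only an illustrative sketch; the user agent and paths are made up, not taken from Facebook's actual file:

	# hypothetical example of robots.txt directives
	User-agent: *
	Disallow: /some/private/area/
	Allow: /some/public/area/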

All in all, it reminds me of the articles [3][4] I wrote recently (with Olivier Thereaux) on managing digital opacity, and of how poor the tools available to users still are. It is interesting to see that in this case the motivation is more about protecting revenues than privacy, or maybe both, but without giving users a choice. I have the feeling it is time to propose protocols and implementations that give people better control over indexing.


[1]: http://www.facebook.com/apps/site_scraping_tos_terms.php
[2]: http://facebook.com/robots.txt
[3]: http://www.w3.org/2008/09/msnws/papers/olivier-karl
[4]: http://www.la-grange.net/2010/05/28/karl-dubost-privacy-ws

-- 
Karl Dubost
Montréal, QC, Canada
http://www.la-grange.net/karl/

Received on Wednesday, 16 June 2010 01:33:45 UTC