Google opening the door to a discussion about AI opt-out

On Thursday, 26 October, Google's "AI Web Publisher Control Development Team" organized a first webinar (not a discussion, a presentation) "about developing machine-readable means to provide web publisher choice and control for emerging AI and research use cases." 
I listened to the webinar, and I hope some of you were able to participate too.  
This is the first time an AI actor has opened the door to discussion, and it is a big one. 

The team seems open to standardizing a method with a standards body - they are considering working with the IETF. 

During the call, they laid out the different issues to be solved: alignment of the different existing options for blocking crawlers; transparency about the ownership and purpose of crawlers; the granularity of access control, including the notion of a taxonomy of crawl purposes (e.g. "search engines", "generative AI applications"); and how to incentivize the adoption of shared standards. 
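To illustrate the alignment and granularity problems, here is a sketch of what a publisher must do today to opt out of AI training while staying in search. The Google-Extended and GPTBot tokens are real, vendor-specific robots.txt tokens announced by Google and OpenAI; everything else here is an illustrative example, not a recommendation:

```
# Today: one vendor-specific rule per AI crawler, because there is
# no shared purpose taxonomy.
User-agent: Google-Extended   # Google's token for generative AI training
Disallow: /

User-agent: GPTBot            # OpenAI's training crawler
Disallow: /

# Search crawling remains allowed for everyone else.
User-agent: *
Allow: /
```

A purpose-based taxonomy, as discussed in the webinar, would instead let a publisher write a single rule targeting a crawl purpose such as "generative AI applications" regardless of which vendor operates the crawler; no such syntax exists yet.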

In summary, these questions intersect with our current discussions, and it is time to discuss them in this group as well. 

The Google team seems inclined to use an evolution of robots.txt for that. They seem ready to add lots of semantics to its currently basic model. They did not mention robots meta tags, which should be added to the discussion.
Personally, I see no problem moving from our current tdmrep.json implementation to the good old robots.txt, IF the semantics of the latter evolve. 
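For context, the TDMRep approach we currently implement expresses the opt-out in a tdmrep.json file served from the site's /.well-known/ directory. A minimal sketch (the paths and policy URL are illustrative) looks like:

```json
[
  {
    "location": "/",
    "tdm-reservation": 1,
    "tdm-policy": "https://example.com/tdm-policy.json"
  },
  {
    "location": "/public-domain/",
    "tdm-reservation": 0
  }
]
```

Each entry applies to a path prefix: tdm-reservation set to 1 reserves text-and-data-mining rights for that location (optionally pointing to a licensing policy), while 0 leaves mining unreserved. Any evolved robots.txt would need at least this level of expressiveness to replace it.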

The Google team is now releasing a questionnaire (I received a password for accessing it). Please consider joining this effort, starting from this blog post:


https://blog.google/technology/ai/ai-web-publisher-controls-sign-up/
A principled approach to evolving choice and control for web content

and this form: 
https://services.google.com/fb/forms/ai-web-publisher-controls-external/

Received on Sunday, 29 October 2023 19:06:08 UTC