Re: [manifest] Add requirements when the Manifest is used by crawlers (#343)

> Also, we list no requirements for crawlers like following robots.txt.

This seems outside the scope of this document. Robots.txt already defines general rules for crawlers trying to access resources.
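For context, a minimal robots.txt already lets a site scope crawler access independently of the manifest spec (the paths below are purely illustrative):

```
# Applies to all crawlers; paths are illustrative
User-agent: *
Disallow: /private/
Allow: /
```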

However, I'll add a search engine and a browser as examples of user agents. 

If there are specific requirements for search engines beyond processing manifests in the manner specified in the spec, then I would like to hear them. 

---
Reply to this email directly or view it on GitHub:
https://github.com/w3c/manifest/issues/343#issuecomment-96792856

Received on Monday, 27 April 2015 19:42:40 UTC