Re: [w3c/manifest] Describe how developers can block web crawlers (#498)

Personally, I would be in favor of adding this to the manifest. My use case
comes from my time as a web hosting company employee: people had access to
upload files, but not necessarily to modify their robots.txt. This happened
with resellers and subdomains. You could easily imagine code sites like
CodePen, jsbin, or even just old-fashioned college websites offering folks
the ability to add files on an HTTPS domain without giving them access to
robots.txt. While I agree that robots.txt is the way to go for the vast
majority of the internet, I would really like to have this escape hatch in
the manifest.
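To make the idea concrete, here is a purely hypothetical sketch of what
such an escape hatch might look like. The "robots" member below is invented
for illustration only; nothing like it exists in the manifest spec today:

    // manifest.json (sketch: the "robots" member is hypothetical,
    // not part of the Web App Manifest spec)
    {
      "name": "Student Project",
      "start_url": "/",
      "robots": "noindex, nofollow"
    }

A crawler that fetched the manifest could then honor that member even when
the page's author has no way to modify the site-wide robots.txt.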

On Thu, Sep 8, 2016, 5:52 PM Marcos Cáceres <notifications@github.com>
wrote:

> I'm respectfully not in favor of adding this to the spec, primarily
> because robots.txt is well-known and doesn't add normative text to the
> spec.
>
> I'm wondering, however, if we should actually start an explainer.md for
> web manifest, where this kind of information can go. That would give us
> freedom to provide guidance to developers, while keeping the spec free of
> non-normative text.



Received on Friday, 9 September 2016 01:05:21 UTC