- From: Brad Kemper <brad.kemper@gmail.com>
- Date: Mon, 30 Jun 2014 23:39:52 -0700
- To: Boris Zbarsky <bzbarsky@MIT.EDU>
- Cc: "www-style@w3.org" <www-style@w3.org>
> On Jun 30, 2014, at 12:27 PM, Boris Zbarsky <bzbarsky@MIT.EDU> wrote:
>
> No, you didn't get my point. My point is that if you write a word on your website and your browser shows it as a word but Google's spider doesn't think it's a word, you will be unhappy.

I think if Google's spider is that broken, then Google should fix it. It does Google no good to avoid recognizing a word because it has an unintentional control character in the middle of it.

> Your website's users will similarly be unhappy when they try and copy/paste the word into their word processor or mail client, and so forth.

The browser already changes copied content as a result of text-transform. I don't see why it couldn't also omit characters that are known to be mistakes when copying.

> That is to say, there is a tension here between browsers fixing up broken sites for their users and web sites playing nice with the larger text-processing ecosystem that exists in the world,

OK.

> and it's possible to actually make things worse for users and authors by covering up issues that would completely break other tools they rely on.

I think it is possible to discard mistakes without breaking other tools.
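As a minimal sketch of what "discarding mistakes" could mean for a tool like a spider or a copy handler (the function name and policy here are hypothetical, not anything specified by CSS or implemented by any browser): drop Unicode control (Cc) and format (Cf) characters that land in the middle of a word, so the word is still recognized as one token.

```python
import unicodedata

def strip_control_chars(text: str) -> str:
    """Hypothetical cleanup pass: remove Unicode control (Cc) and
    format (Cf) characters so a stray one inside a word does not
    split it into two tokens."""
    return "".join(
        ch for ch in text
        if unicodedata.category(ch) not in ("Cc", "Cf")
    )

# U+200B ZERO WIDTH SPACE (category Cf) accidentally embedded mid-word:
word = "exam\u200bple"
print(strip_control_chars(word))  # -> "example"
```

Whether the browser, the spider, or the word processor should apply such a pass is exactly the policy question under debate; the point of the sketch is only that the operation itself is cheap and well-defined.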
Received on Tuesday, 1 July 2014 06:40:20 UTC