- From: Julian Reschke <julian.reschke@gmx.de>
- Date: Mon, 12 Jul 2010 14:58:51 +0200
On 12.07.2010 14:44, Mike Wilcox wrote:
> On Jul 12, 2010, at 2:30 AM, Julian Reschke wrote:
>> Google:
>> <http://validator.w3.org/check?uri=http%3A%2F%2Fwww.google.com&charset=%28detect+automatically%29&doctype=Inline&group=0>
>> - 35 errors
>
> That's a little different. Google purposely uses unstandardized,
> incorrect HTML in ways that still render in a browser in order to make
> it more difficult for screen scrapers. They also "break it" in a
> different way every week.

How exactly is it different? Do you think that what Google does is somehow "better"? Just asking.

As far as I can tell, it just shows that content providers continue to send whatever happens to work, and thus are not concerned at all about validity (note: there's a permathread about this as well -- why disallow things that are known to work reliably...).

Best regards, Julian
Received on Monday, 12 July 2010 05:58:51 UTC