- From: Liam McGee <liam.mcgee@communis.co.uk>
- Date: Thu, 29 Mar 2007 12:15:20 +0100
- To: EOWG <w3c-wai-eo@w3.org>
Hi all,

If one had access, as Google does, to an index of 100 billion web pages as a data set, what questions would you like to ask of it?

For example, I would like to find out how many pages contain H1 tags and how that varies over time (and indeed how many contain h1 and h2, and how many contain h1, h2 and h3), as a proxy for semantic structure. Taking the semantic temperature of the web, as it were.

I'd also want to know how many pages have a 'skip to...' link in them, and again how that changes over time.

Any other ideas?

(And while I'm thinking along these lines, it would be great to get usage figures for the W3C validator services over time. That would make for a good story: 'more and more designers getting valid'.)

Regards to you all,

Liam
www.communis.co.uk
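[Editor's note: a minimal sketch of the per-page checks described above, assuming Python's standard-library html.parser; the class and function names are illustrative, not an existing tool. Counting these flags over a crawled corpus, grouped by crawl date, would give the trends Liam asks about.]

    from html.parser import HTMLParser

    class HeadingAndSkipLinkChecker(HTMLParser):
        """Records which heading levels appear and whether a 'skip to...' link exists."""
        def __init__(self):
            super().__init__()
            self.headings = set()       # e.g. {'h1', 'h2'}
            self.in_anchor = False
            self.anchor_text = ""
            self.has_skip_link = False

        def handle_starttag(self, tag, attrs):
            if tag in ("h1", "h2", "h3"):
                self.headings.add(tag)
            elif tag == "a":
                self.in_anchor = True
                self.anchor_text = ""

        def handle_data(self, data):
            if self.in_anchor:
                self.anchor_text += data

        def handle_endtag(self, tag):
            if tag == "a":
                if self.anchor_text.strip().lower().startswith("skip to"):
                    self.has_skip_link = True
                self.in_anchor = False

    def page_stats(html_text):
        """Return the proxy measures for one page: heading levels present and skip link."""
        checker = HeadingAndSkipLinkChecker()
        checker.feed(html_text)
        return {
            "has_h1": "h1" in checker.headings,
            "has_h1_h2": {"h1", "h2"} <= checker.headings,
            "has_h1_h2_h3": {"h1", "h2", "h3"} <= checker.headings,
            "has_skip_link": checker.has_skip_link,
        }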
Received on Thursday, 29 March 2007 11:15:45 UTC