On Wed Apr 25 9:05, David Dailey sent:
>Surveying the most popular sites (visits, links, duration of visits,
>familiar, ...) gives one view of HTML as it is practiced, by popular
>sites. It is natural to ask "are popular sites representative of the
>web as a whole?"
I believe so.
>There are at least two differences between popular and "other" that
>we might expect: 1. popular sites are probably less likely to engage
>in "adventurous" behavior (unless you are one of the companies
>represented on W3C HTML WG of course) -- that is, they are less
>likely to push frontiers and edges of use-cases. Too much is at stake
>to be very experimental 2. they are more likely to be coded well.
The top-ranked Alexa site, Yahoo!, addresses your points above.
W3C HTML Validation = 33 errors [http://validator.w3.org/check?uri=http%3A%2F%2Fwww.yahoo.com%2F]
W3C CSS Validation = 207 errors by direct input (or none after URI entry [http://jigsaw.w3.org/css-validator/validator?uri=http%3A%2F%2Fwww.yahoo.com%2F&warning=1&profile=css21&usermedium=all]). Karl?
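For survey purposes, error counts like the ones above could be scripted rather than collected by hand. A minimal sketch, assuming the JSON message format of the W3C's Nu HTML Checker (the endpoint and field names here are assumptions, not something from the validator links above); the helper names are hypothetical:

```python
# Sketch: building a validator request URL and tallying errors from a
# Nu-HTML-Checker-style JSON report. Illustrative only.
from urllib.parse import urlencode

def check_url(site):
    """Build a checker request URL for a given site (endpoint is an assumption)."""
    return "https://validator.w3.org/nu/?" + urlencode({"doc": site, "out": "json"})

def count_errors(report):
    """Count messages of type "error" in a Nu-checker-style JSON report."""
    return sum(1 for m in report.get("messages", []) if m.get("type") == "error")

# Usage with a made-up report dict standing in for a real response:
sample = {"messages": [{"type": "error"}, {"type": "info"}, {"type": "error"}]}
print(check_url("http://www.yahoo.com/"))
print(count_errors(sample))
```

Fetching each `check_url` for a list of sites and summing `count_errors` over the responses would give one rough conformance figure per site.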
>Limiting an investigation to those sorts of sites, might tend to give
>a false sense of security about just how robust the standards are,
>vis a vis, how many sites might fail.
I would be very surprised if fewer than 80% of those sites failed some form of conformance checking. Sadly, I have never had that false sense of security about the robustness of the standards.
>In addition to these popular sites, those working on the survey might
>also want to consider accumulating a collection of outlier cases as well.
How would you define _outlier_ cases?
>If anyone is interested, I've got some
>naughty fringe cases.