Re: ReSpec toolchain...

I very much doubt that search engines are taking the many, many seconds
to run ReSpec on a page just so that they can extract the content.
Regardless, there is no need to place that kind of burden on the network
or on our constituents. Recommendations are NOT living standards.
Recommendations are stable documents and must remain so. If you don't
believe it, go ask Tim. I am sure he can explain it.

Moreover, without a static version, things like Googlebot will not be able
to get at the lovely RDFa that makes our specifications that much more
parseable.
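
For illustration, this is roughly the kind of markup I mean; the
vocabulary, property names, and spec title below are made up for the
example, not taken from any particular document:

    <!-- Plain, static HTML that a crawler can index without running any
         script; the RDFa Lite attributes (vocab/typeof/property) carry
         the metadata. -->
    <div vocab="http://schema.org/" typeof="TechArticle">
      <h1 property="name">Example Widgets Specification</h1>
      <p>Editor: <span property="author">A. N. Editor</span></p>
      <p>Published:
        <time property="datePublished" datetime="2014-07-15">15 July 2014</time>
      </p>
    </div>

A text-only spider can lift that metadata straight out of the document;
it cannot do the same if the content only exists after ReSpec has run.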


On Tue, Jul 15, 2014 at 10:36 AM, Marcos Caceres <w3c@marcosc.com> wrote:

>
>
> On July 14, 2014 at 11:36:08 PM, Sangwhan Moon (smoon@opera.com) wrote:
> > That's a bold statement.
>
> You know I don't mess around :)
>
> > It could also be a simple spec that is stable enough and does not
> > really need extra work.
>
> All specs need extra work. They are living documents.
>
> > I don't actually mind having to "compile" the spec every time I load
> > it (although the delay is quite annoying), but the attitude that specs
> > can break because they all point to the same ReSpec version doesn't
> > sound right. While I haven't seen any spec breakages so far, at some
> > point some ReSpec change will definitely break some old frozen spec if
> > we all hot-link to one live version.
>
> And then we just fix it. If the spec matters, then someone will notice and
> it will be fixed. If the spec breaks and no one notices (because the spec
> is not used or is obsolete), who cares.
>
> A spec's worth must be measured by how often it is accessed and maintained
> - not by how frozen it is or by some stupid label like "W3C
> Recommendation".
>
> > If we are to suggest that groups just publish the source document,
> > there should be milestone/version copies for finalized specs to refer
> > to, so they can't break.
>
> Why? If things break, you fix them.
>
> > Or just keep
> > on publishing static HTML.
>
> Am I the only person around here who thinks of the Web as a dynamic
> software platform? Or is it still 1996 and no one told me?
>
> Seriously tho. I don't know how we are supposed to define the next
> advances of the platform if people around here keep thinking about specs
> as if they were paper.
>
> > Putting aside all the comments above, static HTML documents
> > are more spider-friendly.
>
> This statement is false [1] and grossly out of date. Spiders that only
> crawl static text are not crawling the web. Any that still do are going
> to be screwed by Web Components and by most modern web development
> techniques.
>
> [1] https://twitter.com/mattcutts/status/131425949597179904
>

Received on Tuesday, 15 July 2014 15:46:23 UTC