Re: ReSpec toolchain...

Hello Marcos, others,

On 2014/07/16 00:36, Marcos Caceres wrote:

> On July 14, 2014 at 11:36:08 PM, Sangwhan Moon (smoon@opera.com) wrote:

>> It could also be a simple spec that is stable enough and does not
>> really need extra work.
>
> All specs need extra work. They are living documents.

Okay. Let's just assume that's true.

In that case, as a reader, I'd appreciate it if what I have in my browser 
were updated automatically whenever the spec changes.

On the other hand, I don't care at all to watch the ReSpec production 
process run in my browser, or to spend electricity, CO2, and sweat (it's 
really damn hot here in Japan :-) on that.


>> Or just keep
>> on publishing static HTML.
>
> Am I the only person around here who thinks of the Web as a dynamic software platform?

Of course not. The Web is a dynamic software platform. But a good Web 
application uses that platform for something that benefits the user, not 
as an end in itself. And if there are no benefits for the end user, there 
is no need to pile on JavaScript.

In other words, do you think that in order to make the Web a dynamic 
software platform, we have to prohibit static content? If so, can you 
send me the smallest piece of JavaScript that I could add to my many 
static pages (e.g. lecture materials, ...) so that I can continue to be 
part of the Web :-?


> Or is it 1996 still and no one told me?

It might actually be a good idea to go back to that timeframe (roughly). 
At one point in time, Netscape proposed that all styling of documents be 
done through JavaScript. Fortunately, others invented CSS. Would you 
argue that we should throw away CSS because otherwise the Web is not a 
dynamic platform?


> Seriously tho. I don't know how we are supposed to be defining the next advances of the platform if people around here keep thinking about specs like they are paper.

I can print out a ReSpec spec that was generated on the fly, so paper 
seems to be an orthogonal concern.


>> Putting aside all the comments above, static HTML documents
>> are more spider friendly.
>
> This statement is false [1] and grossly out of date. Spiders that just crawl text are not crawling the web. If any of them are just crawling text, then they are going to be screwed with Web Components or with most modern web development techniques.
>
> [1] https://twitter.com/mattcutts/status/131425949597179904

That tweet says: "Googlebot keeps getting smarter. Now has the ability 
to execute AJAX/JS to index some dynamic comments http://goo.gl/F9et1". 
And if one follows that link, it looks like Google is mostly after 
Facebook comments.

Regards,   Martin.

Received on Wednesday, 16 July 2014 11:22:01 UTC