- From: Marcos Caceres <w3c@marcosc.com>
- Date: Wed, 16 Jul 2014 11:35:59 -0400
- To: "Martin J. Dürst" <duerst@it.aoyama.ac.jp>
- Cc: "spec-prod@w3.org Prod" <spec-prod@w3.org>
On Wednesday, July 16, 2014 at 7:21 AM, "Martin J. Dürst" wrote:

> Hello Marcos, others,
>
> On 2014/07/16 00:36, Marcos Caceres wrote:
> > On July 14, 2014 at 11:36:08 PM, Sangwhan Moon (smoon@opera.com) wrote:
> > > It could also be a simple spec that is stable enough and does not
> > > really need extra work.
> >
> > All specs need extra work. They are living documents.
>
> Okay. Let's just assume that's true.
>
> In that case, as a reader, I'd appreciate if I get an automatic update
> of what I have in my browser whenever the spec changes.

WHATWG HTML does this, btw :)

> On the other hand, I don't really care at all to watch the ReSpec
> production process in my browser, and to spend electricity and CO2 and
> sweat (it's really damn hot here in Japan :-) for that.
>
> > > Or just keep
> > > on publishing static HTML.
> >
> > Am I the only person around here who thinks of the Web as a dynamic software platform?
>
> Of course not. The Web is a dynamic software platform. But a good Web
> application uses that platform for something that benefits the user, not
> as a purpose of its own. And if there are no benefits for the end user,
> there is no need to heap up JavaScript.

Sure. Of course - and I'll admit that, today, specs-as-applications are fairly static and don't provide much benefit over their post-processed counterparts.

> In other words, do you think that in order to make the Web a dynamic
> software platform, we have to prohibit static content?

Of course not. Some content is well suited to being static.

> If yes, can you
> send me the smallest piece of JavaScript that I could add to my many
> static pages (e.g. lecture materials,...) so that I can continue to be
> part of the Web :-?

I don't have to.
If you (or your students) have seen things like Khan Academy and interactive online programming courses, then your static lecture pages will probably look a little boring (and likely won't provide an optimal pedagogical experience compared to dynamic, interactive courses online). Of course, you can just put up static content - but you will be competing for attention with much more engaging content.

> > Or is it 1996 still and no one told me?
>
> It might actually be a good idea to go back to that timeframe (roughly).
> At one point in time, Netscape proposed that all styling of documents be
> done through JavaScript. Fortunately, others invented CSS. Would you
> argue that we should throw away CSS because otherwise the Web is not a
> dynamic platform?

Apart from the fact that CSS is dynamic (and has a JS counterpart, the CSS OM), CSS is becoming increasingly imperative. Furthermore, the developer community has moved in large numbers to things like SASS and LESS because of CSS's limitations in this regard.

> > Seriously tho. I don't know how we are supposed to be defining the next advances of the platform if people around here keep thinking about specs like they are paper.
>
> I can print out an on-the-fly ReSpec Spec, so paper seems to be an
> orthogonal concern.

My point was that we should be thinking about how to better take advantage of the medium - to think outside the box a little. The HTML spec does some of this well: it lets people select text and file a bug directly from the spec, and it delivers realtime updates when the spec changes (I leave the HTML spec open in my browser for days at a time). Other specs I work on bring in the open bugs from GitHub when the document loads, so the reader knows, up to the minute, what bugs we are working on and how they can help.
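For the curious, the "open bugs on load" idea needs very little code. Here is a minimal sketch (not the actual implementation any spec uses; the function names and element id are made up for illustration) of pulling a repo's open issues from GitHub's public Issues API and rendering them as a list:

```typescript
// Shape of the fields we use from the GitHub Issues API
// (https://api.github.com/repos/{owner}/{repo}/issues).
interface Issue {
  number: number;
  title: string;
  html_url: string;
}

// Render open issues as an HTML list fragment.
function renderIssues(issues: Issue[]): string {
  const items = issues
    .map(i => `<li><a href="${i.html_url}">#${i.number}: ${i.title}</a></li>`)
    .join("\n");
  return `<ul class="open-bugs">\n${items}\n</ul>`;
}

// Fetch a repo's open issues and return the rendered fragment.
// In a spec page you would call this on load and inject the result,
// e.g. document.getElementById("open-bugs-section").innerHTML = html;
async function fetchOpenBugsHtml(repo: string): Promise<string> {
  const res = await fetch(
    `https://api.github.com/repos/${repo}/issues?state=open`
  );
  const issues: Issue[] = await res.json();
  return renderIssues(issues);
}
```

Hook `fetchOpenBugsHtml("some-org/some-spec")` up to the page's load event and the reader always sees the current bug list, no republishing step required.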
There is a lot of potential for us to make better use of the medium if we think of interesting things we could add to make our lives, and the lives of our readers, better. Holding the view that specs are static, dated, paper-like things constrains our potential to innovate.

> > > Putting aside all the comments above, static HTML documents
> > > are more spider friendly.
> >
> > This statement is false [1] and grossly out of date. Spiders that just crawl text are not crawling the web. If any of them are just crawling text, then they are going to be screwed with Web Components or with most modern web development techniques.
> >
> > [1] https://twitter.com/mattcutts/status/131425949597179904
>
> That says: Googlebot keeps getting smarter. Now has the ability to
> execute AJAX/JS to index some dynamic comments http://goo.gl/F9et1.
> And if one follows the link, it looks like Google is mostly after
> Facebook comments.

I was pointing out that they started doing this as early as 2011. Here is a better link [1], from May 23, 2014, which states:

"In order to solve this problem, we decided to try to understand pages by executing JavaScript. It’s hard to do that at the scale of the current web, but we decided that it’s worth it. We have been gradually improving how we do this for some time. In the past few months, our indexing system has been rendering a substantial number of web pages more like an average user’s browser with JavaScript turned on."

[1] http://googlewebmastercentral.blogspot.ca/2014/05/understanding-web-pages-better.html
Received on Wednesday, 16 July 2014 15:36:33 UTC