W3C home > Mailing lists > Public > www-tag@w3.org > February 2011

Re: HashInURI

From: ashok malhotra <ashok.malhotra@oracle.com>
Date: Sat, 12 Feb 2011 14:14:45 -0800
Message-ID: <4D570655.1090007@oracle.com>
To: nathan@webr3.org
CC: "Eric J. Bowman" <eric@bisonsystems.net>, Yves Lafon <ylafon@w3.org>, Karl Dubost <karld@opera.com>, "www-tag@w3.org List" <www-tag@w3.org>
Many thanks to all who have contributed to this thread.

What I'm struggling with is: why? Why did Twitter change its pattern
from twitter.com/timbray <http://twitter.com/timbray> to twitter.com/#!/timbray <http://twitter.com/#%21/timbray>?

Ben Ward, in http://blog.benward.me/post/3231388630, suggests:
"The reason sites are using client-side routing is performance: at the simplest, it's about not reloading an entire page when you only need to reload a small piece of content within it. Twitter in particular is loading lots and lots of small pieces of content, changing views rapidly and continuously. Twitter users navigate between posts, pulling extra content related to each Tweet, user profiles, searches and so forth. Routing on the client allows all of these small requests to happen without wiping out the UI, all whilst simultaneously pulling in new content to the main Twitter timeline."
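For what it's worth, client-side routing of this kind usually amounts to watching the URL fragment and swapping in content without a full page load. A minimal sketch, with a hypothetical parseRoute/render pair (not Twitter's actual code):

```javascript
// Map a location fragment like "#!/timbray" to a route descriptor.
// Written as a pure function so the routing logic is testable outside
// a browser; in a page you would wire it to the hashchange event below.
function parseRoute(hash) {
  // Strip a leading "#!" (or plain "#") and an optional "/", then split.
  const path = hash.replace(/^#!?\/?/, '');
  const segments = path.split('/').filter(Boolean);
  if (segments.length === 0) return { view: 'timeline' };            // default view
  if (segments.length === 1) return { view: 'profile', user: segments[0] };
  return { view: 'unknown', segments };
}

// In a browser, fragment navigation never triggers a page reload:
// window.addEventListener('hashchange', () => {
//   render(parseRoute(location.hash));   // render() is hypothetical
// });
```

The point is that changing everything after the "#" stays entirely on the client, so each navigation costs one small XHR for data rather than a whole page fetch.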

I agree that client-side routing is faster and more responsive, but is that what Twitter is doing?
Tim Bray, in http://www.tbray.org/ongoing/When/201x/2011/02/09/Hash-Blecch,
says that the initial request to twitter.com just fetches a bunch of JavaScript, which then executes and
fetches the pieces it needs to render #!/myname. This requires two fetches rather than the single
fetch required for twitter.com/myname.

But perhaps the pieces that are fetched include other information, such as the profile for myname, which
can then be accessed by client-side navigation without further requests.

The second reason, which http://isolani.co.uk/blog/javascript/BreakingTheWebWithHashBangs suggests, is that once Twitter decided to change from a website to an application, and given that it wanted
to be indexable by Google, it needed to use the #! pattern. It says:
"sites using fancy technology like Ajax to bring in content found themselves not well listed or ranked for relevant keywords because Googlebot couldn't find their content they'd hidden behind JavaScript calls"
Thus, Google devised the #! pattern as an (ugly) solution to this problem.  See
http://code.google.com/web/ajaxcrawling/docs/specification.html
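As I read that specification, a crawler that sees "#!" in a URL rewrites everything after it into an "_escaped_fragment_" query parameter and fetches that URL instead, so the server can answer with a static HTML snapshot. A rough sketch of the rewrite, simplified from the spec (the full scheme also covers opt-in via a meta tag and the exact escaping rules):

```javascript
// Rewrite a hash-bang URL into the crawler-facing form from Google's
// AJAX-crawling scheme: the text after "#!" moves into an
// _escaped_fragment_ query parameter, percent-encoded.
function toEscapedFragment(url) {
  const i = url.indexOf('#!');
  if (i === -1) return url;                 // not an AJAX-crawlable URL
  const base = url.slice(0, i);
  const fragment = url.slice(i + 2);        // everything after "#!"
  const sep = base.includes('?') ? '&' : '?';
  return base + sep + '_escaped_fragment_=' + encodeURIComponent(fragment);
}
```

So, for example, twitter.com/#!/timbray would be fetched by Googlebot as twitter.com/?_escaped_fragment_=%2Ftimbray, which is an ordinary URL the server can serve indexable content for.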

Comments appreciated!
All the best, Ashok
Received on Saturday, 12 February 2011 22:17:32 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Thursday, 26 April 2012 12:48:30 GMT