Re: HashInURI

From: Nathan <nathan@webr3.org>
Date: Sat, 12 Feb 2011 23:19:04 +0000
Message-ID: <4D571568.3020500@webr3.org>
To: ashok.malhotra@oracle.com
CC: "Eric J. Bowman" <eric@bisonsystems.net>, Yves Lafon <ylafon@w3.org>, Karl Dubost <karld@opera.com>, "www-tag@w3.org List" <www-tag@w3.org>
ashok malhotra wrote:
> Many thanks to all who have contributed to this thread.
> What I'm struggling with is why?  Why did twitter change its pattern
> from twitter.com/timbray <http://twitter.com/timbray> to 
> twitter.com/#!/timbray  ? <http://twitter.com/#%21/timbray>

In Twitter's words:

They didn't change the slash URI to the fragment URI; they exposed a web 
application at twitter.com/.

The presence of the # is to provide URIs which refer to recomposable 
views of certain sets of data in the application, a requirement of 
most/all web applications.
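
As an aside, the idea of a fragment naming a recomposable view can be 
sketched roughly like this; parseFragment and buildFragment are 
illustrative names, not Twitter's code:

```javascript
// Turn a fragment like "#!/timbray" or "#!/search?q=web" into view state.
function parseFragment(fragment) {
  const path = fragment.replace(/^#!?/, '');        // strip "#" or "#!"
  const [route, query = ''] = path.split('?');
  const params = {};
  for (const pair of query.split('&')) {
    if (!pair) continue;
    const [k, v = ''] = pair.split('=');
    params[decodeURIComponent(k)] = decodeURIComponent(v);
  }
  return { route, params };
}

// Rebuild the fragment from view state: the inverse operation, so the
// application can put any view it composes back into a shareable URI.
function buildFragment({ route, params }) {
  const query = Object.entries(params)
    .map(([k, v]) => `${encodeURIComponent(k)}=${encodeURIComponent(v)}`)
    .join('&');
  return '#!' + route + (query ? '?' + query : '');
}
```

The round trip is what makes the view "recomposable": any state the 
application can render, it can also name.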

The #! rather than just # was to partially address the lack of 
visibility in their API with a quick hack provided by Google, because 
their "API" is neither RESTful nor open; had they adopted some form of 
structured data representation with semantics at the data tier, then #! 
wouldn't be "needed".

In short, Twitter's implementation and architecture have many faults, 
most of which are in the API; the #! and the breakage of URI opacity are 
a non-web-friendly hack to account for those faults.

Some people see this as an indication (or further proof!) that 
"web applications" using JavaScript, XHR, and fragment identifiers are 
bad for the web; whilst others, like myself, suggest that the fault is 
at the data tier / API level (which leads people to try to suck data 
out of an application via a reference to a view, itself a mistake).

> Ben Ward http://blog.benward.me/post/3231388630 suggests
> "The reasons sites are using client-side routing is for performance: At 
> the simplest, it's about not reloading an entire page when you only need 
> to reload a small piece of content within it. Twitter in particular is 
> loading lots, and lots of small pieces of content, changing views 
> rapidly and continuously. Twitter users navigate between posts, pulling 
> extra content related to each Tweet, user profiles, searches and so 
> forth. Routing on the client allows all of these small requests to 
> happen without wiping out the UI, all whilst simultaneously pulling in 
> new content to the main Twitter timeline."
> I agree that client-side routing is faster and more responsive but is 
> that what twitter is doing?

Yes, it's the main driver behind what they're doing, but that only has 
a bearing on the use of # in the application, not #!.

> Tim Bray http://www.tbray.org/ongoing/When/201x/2011/02/09/Hash-Blecch
> says that the initial request to twitter just fetches a bunch of 
> Javascript which then executes and
> fetches the pieces it needs to render #!myname.  This requires two 
> fetches rather than the single
> fetch required for twitter.com/myname.
> But perhaps the pieces that are fetched include other information such 
> as the profile for myname which
> can be accessed by client-side navigation.

Yes, almost every page on the web requires multiple GETs to be fully 
rendered to a user (sites like Mashable and TechCrunch are over 100). 
What Twitter has done is refactor things so that the next view provided 
by a link click requires only minimal GETs to raw data: GETting the bare 
minimum needed to change the view state in the browser, without 
reloading all the resources and information needed to compose the entire 
view. On the whole that's a lot friendlier to the network, servers, 
clients and users.
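
That refactoring can be sketched roughly as follows; createTimeline, 
fetchJSON and the /statuses/ endpoint path are illustrative assumptions, 
not Twitter's actual code or API:

```javascript
// A link click fetches only the raw data the next view needs, rather
// than reloading every resource on the page.
function createTimeline(fetchJSON) {
  const cache = new Map();              // data already held on the client
  return {
    // Show a tweet: issue a GET for its JSON only if we don't hold it yet.
    async show(id) {
      if (!cache.has(id)) {
        cache.set(id, await fetchJSON(`/statuses/show/${id}.json`));
      }
      return cache.get(id);             // the render step would use this
    },
  };
}
```

Changing views then costs at most one small GET for raw data instead of 
a full page load, and revisiting a view already held costs nothing.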

> The second reason that 
> http://isolani.co.uk/blog/javascript/BreakingTheWebWithHashBangs 
> suggests is that once twitter decided to change from a website to an 
> application and given that it wanted
> to be indexable by Google it needed to use the #! pattern.  It says:
> "sites using fancy technology like Ajax to bring in content found 
> themselves not well listed or ranked for relevant keywords because 
> Googlebot couldn't find their content they'd hidden behind JavaScript 
> calls"
> Thus, Google devised the #! pattern as an (ugly) solution to this 
> problem.  See
> http://code.google.com/web/ajaxcrawling/docs/specification.html

Yes, I think we can all agree that #! and Google's advice on "AJAX 
crawling" is bad advice, a hack at best. But as mentioned above, it only 
addresses shortcomings at the data tier in the common "Web 2.0" AJAX 
applications. They've adopted the web of applications but not the web of 
data; when they adopt that too, visibility of data will be present and 
these hacks will no longer be needed.

Again, and finally, the problem here is that people are taking the 
usage of # as some indication that "something" must be wrong with 
AJAX / web applications, rather than looking further up the tiers to see 
that it's the APIs and the data tier which are designed incorrectly (the 
HTTP interface is good; RPC + dumb data and non-REST = bad).

If the data were published using linked data principles, or as HTML, or 
XML+XSLT, or if there existed some form of schema for JSON which 
described properties in a machine-readable way and another which was 
essentially XSLT for JSON, then these issues would be solved, and #! 
would not be needed.
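
To illustrate that last point, an "XSLT for JSON" could be as little as 
a machine-readable template applied to raw JSON to produce a view; 
nothing below is a real standard, just a hypothetical sketch:

```javascript
// Apply a declarative template to a raw JSON object: the same data a
// crawler or API client could consume directly is turned into a view
// by substituting {{property}} placeholders.
function applyTemplate(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, (_, key) =>
    key in data ? String(data[key]) : '');
}
```

With the transform published alongside the data, any consumer (browser, 
crawler, or third party) can recompose the view itself, and the data 
stays visible without #! tricks.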


Received on Saturday, 12 February 2011 23:20:27 UTC
