
Re: HashInURI

From: Nathan <nathan@webr3.org>
Date: Fri, 11 Feb 2011 18:02:12 +0000
Message-ID: <4D5579A4.2030209@webr3.org>
To: "Eric J. Bowman" <eric@bisonsystems.net>
CC: Yves Lafon <ylafon@w3.org>, Karl Dubost <karld@opera.com>, Ashok Malhotra <ashok.malhotra@oracle.com>, "www-tag@w3.org List" <www-tag@w3.org>
Eric J. Bowman wrote:
> Nathan wrote:
>> In reality everything is a different resource? Well, without getting
>> into the semantics of this, because that statement is provably
>> false, I'll take it at face value to mean that everything which the
>> application provides a view of (or things which comprise a composite
>> view), in which case I think you'll find every "resource" is defined
>> by its own unique URI, unless of course it's only available as part
>> of a collection.
> Here are two different resources:
> http://twitter.com/#!/webr3
> http://twitter.com/#!/ericbow
>
> We know they're different resources because URIs are opaque, and these
> strings aren't character-for-character identical.  This is reality --

No, you don't know they're two different /resources/ based on that; a 
single resource can have multiple names (URIs).

In reality they do of course refer to two different things (and the 
conceptual mapping of what they refer to is consistent), but web arch 
and the specs currently don't cover this use case. If you go by the 
specs you'll GET / from twitter.com via HTTP, receive text/html, and 
find no element with an @id of either "!/webr3" or "!/ericbow".
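To make the spec behaviour concrete, here's a minimal sketch using 
Python's urllib.parse (the URIs are the ones from this thread): per RFC 
3986 a client strips the fragment before dereferencing, so both 
hashbang URIs reduce to the same request target, and the fragment is 
left to be interpreted client-side against the retrieved 
representation.

```python
from urllib.parse import urldefrag

a = "http://twitter.com/#!/webr3"
b = "http://twitter.com/#!/ericbow"

# urldefrag() splits a URI into the dereferenceable part and the fragment.
target_a, frag_a = urldefrag(a)
target_b, frag_b = urldefrag(b)

print(target_a)               # http://twitter.com/
print(target_a == target_b)   # True: both dereference the same target
print(frag_a, frag_b)         # !/webr3 !/ericbow -- client-side only
```

Both fragments then have to be resolved against whatever text/html 
representation comes back from http://twitter.com/ , which is exactly 
where the specs run out.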

> each user's feed is a separate resource, is it not?

Yes, and each user's feed is identified by its own URI, and neither of 
the URIs you mention above refers to a user's feed.

> OTOH, since # has special meaning, any HTTP client will treat those as
> being the same resource, i.e. dereferencing either one will dereference
> http://twitter.com/ .  This makes http://twitter.com/ the *one* resource
> being dereferenced for *everything*.

One would hope that an HTTP client noticed that http://twitter.com/ and 
http://twitter.com/ were the same URI. In both cases 
http://twitter.com/ (when dereferenced successfully) does refer to the 
same thing, the start state of a web application / interactive document 
described in text/html, so the semantics of the "conceptual mapping" 
are consistent.

Note that http://twitter.com/ is only /one of the/ resources being 
dereferenced (not the one). HTTP is the transfer protocol being used; 
multiple resources will be dereferenced, each one through a stateless 
client-server interaction over the network (possibly; it depends 
whether caching comes into play) via a uniform interface, and the HTTP 
client is nothing but a connector in all of this. The job of HTTP here 
is to act as a transfer protocol, not to constrain what can be referred 
to by a URI, and not to understand why it's being used.
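This is easy to see on the wire. A rough sketch (the request_line 
helper is mine, not anything from a real client library): whichever 
hashbang URI you start from, the bytes the server receives are 
identical, because the fragment never leaves the client.

```python
from urllib.parse import urlsplit

def request_line(uri):
    """Approximate the HTTP/1.1 request a client would send for a URI.

    Only the path (and host header) go on the wire; the fragment is
    dropped before the request is built.
    """
    parts = urlsplit(uri)
    return f"GET {parts.path or '/'} HTTP/1.1\r\nHost: {parts.netloc}\r\n\r\n"

print(request_line("http://twitter.com/#!/webr3"))
# GET / HTTP/1.1 ... byte-for-byte the same for #!/ericbow
```

So HTTP genuinely cannot distinguish the two; anything that does 
distinguish them happens after the transfer, in the client.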

>>> In this case, the semantics of the mapping of http://twitter.com/
>>> varies based on the nature of the fragment, which can only be
>>> described as architecturally broken.
>> it maps to an application, are you telling me now that an application 
>> must not do anything?
> I'm telling you that architecturally, that's backwards -- what we have
> here, instead of being Code-on-Demand, is Content-on-Demand.  The
> former sends the content, along with instructions on how it is to be
> rendered.  The latter sends the rendering instructions, along with some
> instructions on how to find the content.  These are not the same style.

Sorry Eric, "content-on-demand" is something you just made up. HTML has 
full support for scripting / javascript; application state is on the 
client side (check); application state is driven by hypermedia (check). 
One could argue endlessly over whether javascript in HTML comes via the 
"Code on Demand" optional constraint of REST, or whether it's part of 
the hypermedia type. I see it as being part of HTML (as does the 
specification of HTML); some may see it as coming from the "Code on 
Demand" constraint of REST (though I see that as covering java applets, 
flash, things which require a different engine to be downloaded and 
used); and it appears you're arguing that it's neither of the above.

Regardless, web applications are a /huge/ part of the web, have been for 
a long time, are here to stay. People can either embrace the web of 
applications and the web of data in addition to the good old fashioned 
web of documents, or they can fight against independent evolution and 
innovation in order to constrain the web. I know what my preference is.

> These two URIs can also map to the same application, without any naming
> collision:
> http://twitter.com/webr3
> http://twitter.com/ericbow

There is no naming collision (the hashbang URIs are different, as you 
said yourself); the above are two different URIs which /do not/ map to 
applications, let alone the same application.
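The contrast can be made concrete with a small sketch: with path-based 
URIs the distinguishing part is sent to the server, while with hashbang 
URIs it never leaves the client.

```python
from urllib.parse import urlsplit

path_pair = ("http://twitter.com/webr3", "http://twitter.com/ericbow")
hash_pair = ("http://twitter.com/#!/webr3", "http://twitter.com/#!/ericbow")

# Path-based URIs: the server sees two different request targets.
print([urlsplit(u).path for u in path_pair])   # ['/webr3', '/ericbow']

# Hashbang URIs: the server sees one and the same request target.
print([urlsplit(u).path for u in hash_pair])   # ['/', '/']
```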

> HTTP clients don't see those as the same resource, because there's no
> fragment.  The semantics of the mapping remain static, and are not a
> function of fragment contents.  They can even be handled by different
> server processes, unlike with #! where they wind up being handled by the
> same server process.

The HTTP client is orthogonal here (separation of concerns); it's 
merely a client connector for a transfer protocol (as covered above). 
The semantics of the mapping for http://twitter.com/ remain consistent 
(as covered above).
"Server processes" are completely orthogonal and have nothing to do 
with this at all; you're talking about implementation details behind 
the uniform interface (which the uniform interface hides!). Heck, in 
twitter's case (and almost every case) server processes are little more 
than temporary processes on a machine; every request can hit a 
different short-lived process, often on a completely different machine. 
It's so far off topic I'm not even sure why I'm covering it.
#! handled by the same process? See above; that's totally wrong 
whichever way you look at it. But the main failing here is thinking 
that a GET on http://twitter.com somehow has bearing on what 
http://twitter.com/#!/webr3 refers to, and failing to take into 
account.. well, everything in this reply; I'm not going to repeat it 
all.

> But, since those last two "proper" URIs are what get fetched anyway,
> what's the point of imposing a round-trip to get there?  (That's a
> rhetorical question -- obviously, such chicanery isn't needed when
> proper architecture is followed from the get-go).

"Proper" URIs and a proper architecture, eh? Back to the good old days 
when twitter kept their application, and application state, on the 
server; when countless amounts of data were resent needlessly on every 
GET; when twitter couldn't scale and went down repeatedly; and when 
latency was at an all-time high. Right, ok, let's encourage that.

In all honesty, I'm not going to carry on these progress-negative 
conversations for long. Innovation and evolution have been going on for 
ages, and I want to help spur the process along, and indeed adopt the 
new techs and approaches myself, not needlessly say "it's bad" when 
it's clearly not. From a few mails back:

Ultimately there's nothing wrong with JS applications or using
HashInURI, it should be encouraged, the problems are:

   - failing to publish data properly in a visible / web friendly manner
   - trying to reference data by using a URI that doesn't refer to data

That's what I'll be focussing on.


Received on Friday, 11 February 2011 18:04:27 UTC
