
Re: HashInURI

From: Eric J. Bowman <eric@bisonsystems.net>
Date: Sat, 12 Feb 2011 20:54:55 -0700
To: nathan@webr3.org
Cc: Yves Lafon <ylafon@w3.org>, Karl Dubost <karld@opera.com>, Ashok Malhotra <ashok.malhotra@oracle.com>, "www-tag@w3.org List" <www-tag@w3.org>
Message-Id: <20110212205455.dc02ff10.eric@bisonsystems.net>

Nathan wrote:
> > Here are two different resources:
> > 
> > http://twitter.com/#!/webr3
> > http://twitter.com/#!/ericbow
> s/resource/URIs


> > We know they're different resources because URIs are opaque, and
> > these
> s/resource/URIs

No, unless two URIs are character-for-character identical, they're two
different resources.  Or, as Julian noted, in this case they're two
different secondary resources, of the same primary resource.
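A quick sketch of the mechanics using Python's standard library (the behaviour is the same in any fragment-aware client): the user agent strips the fragment before dereferencing, so both hashbang URIs resolve to the same primary resource while naming different secondary resources.

```python
from urllib.parse import urldefrag

# Two hashbang URIs; the client strips the fragment before making a request.
uri_a = "http://twitter.com/#!/webr3"
uri_b = "http://twitter.com/#!/ericbow"

primary_a, frag_a = urldefrag(uri_a)
primary_b, frag_b = urldefrag(uri_b)

# Both dereference the same primary resource...
same_primary = (primary_a == primary_b == "http://twitter.com/")

# ...but identify different secondary resources via their fragments.
fragments = (frag_a, frag_b)
```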

> No you don't know they're two different /resources/ based on that, a 
> single resource can have multiple names (URIs).

Not really; "More precisely, a resource R is a temporally varying
membership function MR(t), which for time t maps to a set of entities,
or values, which are equivalent."  Two different resources may have
equivalent values; that's the point of the "author's preferred version"
discussion in Ch. 5 of Roy's thesis.  The resource is the mapping, not
the value.

> In reality they do of course refer to two different things (and the 
> conceptual mapping of what they refer is consistent), but web arch
> and the specs currently don't cover this use case, if you go by the
> specs you'll GET / from twitter.com via HTTP, receive text/html, and
> find no element with an @id of either "!/webr3" or "!/ericbow"

Right, I believe the debate is whether this use case should be
considered harmful; i.e. how, not whether, it should be covered.

> > each user's feed is a separate resource, is it not?
> Yes, and each user's feed is identified by its own URI, and neither
> of the URIs you mention above refer to a users feed.

Then what, pray tell, *do* they refer to?  They're URIs, they have to
map to *something*, and that's the part I can't for the life of me
figure out, if they aren't aliases for the user's feed.

> > OTOH, since # has special meaning, any HTTP client will treat those
> > as being the same resource, i.e. dereferencing either one will
> > dereference http://twitter.com/ .  This makes http://twitter.com/
> > the *one* resource being dereferenced for *everything*.
> One would hope that an HTTP client noticed that http://twitter.com/
> and http://twitter.com/ were the same URI, and in both cases 
> http://twitter.com/ (when dereferenced successfully) does refer to
> the same thing, the start state of a web application / interactive
> document described in text/html, the semantics of the "conceptual
> mapping" are consistent.

This is the part that looks like an RPC endpoint to me.  If there were
a redirect involved, then it could be cached.  Instead, what to fetch
next must be calculated by a script, and that's where the problem lies
with this style.  The desirable property most impacted is visibility.
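To illustrate (a minimal sketch; the feed endpoint named below is hypothetical, for illustration only): because the fragment never reaches the server, client-side script has to compute the next URI to fetch, and that computation is invisible to caches and intermediaries in a way a redirect would not be.

```python
def next_fetch(fragment: str) -> str:
    # The fragment (e.g. "!/webr3") is only visible client-side; script
    # must translate it into the URI that actually gets fetched next.
    # The endpoint below is a made-up example, not twitter's real API.
    username = fragment[2:] if fragment.startswith("!/") else fragment
    return "http://twitter.com/statuses/user_timeline/%s.json" % username
```

Nothing on the wire connects `http://twitter.com/#!/webr3` to the URI this function returns; only the script knows the mapping.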

> Note that http://twitter.com/ is only /one of the/ resources being 
> dereferenced (not the one), HTTP is the transfer protocol being used, 
> multiple resources will be dereferenced, each one through a stateless 
> client server interaction over the network (possibly, depends if
> caching comes in to play) via a uniform interface, the HTTP client is
> nothing but a connector in all of this. The job of HTTP in all of
> this is to act as a transfer protocol, not to constrain what can be
> referred to by a URI, and not to understand why it's being used.

The uniform interface is precisely the issue; following that style
leads to graceful degradation which apps using shebang fail to exhibit.
It should come as no surprise that I'm trying to understand this style
by determining where the mismatches are -- if there weren't any, there
would be graceful degradation, mooting this discussion.

> >>> In this case, the semantics of the mapping of http://twitter.com/
> >>> varies based on the nature of the fragment, which can only be
> >>> described as architecturally broken.
> >>>
> >> it maps to an application, are you telling me now that an
> >> application must not do anything?
> > 
> > I'm telling you that architecturally, that's backwards -- what we
> > have here, instead of being Code-on-Demand, is Content-on-Demand.
> > The former sends the content, along with instructions on how it is
> > to be rendered.  The latter sends the rendering instructions, along
> > with some instructions on how to find the content.  These are not
> > the same style.
> Sorry Eric, "content-on-demand" is something you just made up...

Yes, when presented with constraints I'm not familiar with, I'll put
words to them.  I was trying to avoid explaining why I think this
constitutes its own style, while deliberately contrasting it to a
well-known style.

> One could argue endlessly over whether javascript in HTML is
> via the "Code on Demand" optional constraint of REST, or whether it's
> part of the hypermedia type;

Theoretically, sure.  But here, I don't see any Link header, which
means the hypertext constraint isn't being met by the javascript --
that media type defines no declarative linking, only imperative XHR,
which is not the same thing.  The hypertext constraint is not met, when
the choice of state transition is embedded within XHR code.
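For contrast, a declarative alternative might look like the following (a sketch; the target URI and relation are assumptions, not anything twitter actually sends). A Link header in the RFC 5988 style exposes the state transition to any component that can parse it, no script execution required:

```python
# A hypothetical Link header exposing the feed as a declarative link,
# rather than burying the transition inside XHR code.
header = ('<http://twitter.com/statuses/user_timeline/webr3.json>; '
          'rel="alternate"; type="application/json"')

def parse_link(value: str) -> dict:
    # Minimal parser for a single link-value: <target>; param="..."; ...
    target, *params = [part.strip() for part in value.split(";")]
    link = {"target": target.strip("<>")}
    for param in params:
        key, _, val = param.partition("=")
        link[key.strip()] = val.strip('"')
    return link

link = parse_link(header)
```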

> I see it as being part of HTML (as does the specification of HTML),
> some may see it as coming from the "Code on Demand" constraint of
> REST (but I see this more as java applets, flash, things which
> require the use of a different engine to be downloaded and used), and
> it appears you're arguing that it's neither of the above.

No, my point is that Code on Demand isn't a substitute for the
hypertext constraint.  When state transitions embedded in script are
driving application state, you can't call it Code on Demand because
you're into some other, non-REST style.  I coined Content on Demand to
describe it, but in that other style it wouldn't be optional; nor would
that style have a hypertext constraint.  Just practicing some software
architecture... we need to identify this other style before we can
judge its merits relative to REST and the system goals at hand.

> Regardless, web applications are a /huge/ part of the web, have been
> for a long time, are here to stay. People can either embrace the web
> of applications and the web of data in addition to the good old
> fashioned web of documents, or they can fight against independent
> evolution and innovation in order to constrain the web. I know what
> my preference is.

This sounds like a strawman; it certainly is in my case -- you've seen
my XSLT-driven demo, which is doing something very similar.  The difference
mainly comes down to how things are linked together, and I'm certainly
a stickler on architecturally correct linking.  Choice of implementation
technologies has no impact on proper architecture.  I'm not against web
applications, I'm for pointing out where they have real-world problems
brought about by their ignorance of the deployed Web architecture.

> > These two URIs can also map to the same application, without any
> > naming collision:
> > 
> > http://twitter.com/webr3
> > http://twitter.com/ericbow
> There is no naming collision (the hashbang URIs are different, you
> said that yourself)

The shebangs collide by sharing a primary resource.  The million-dollar
question here remains: why can't twitter be linked with the non-shebang
URIs (which they're using anyway) driving application state through <a>
and <link> instead of through cryptic XHR code and bizarre fragment
"routing", regardless of UI goals?  I don't dismiss the possibility
that such a case can be made; I've yet to see it, however.

> the above are two different URIs which /do not/ map to applications,
> let alone the same application.

Right, but they're hidden behind a layer of indirection.  Not that I'm
against that per se -- I'm a big conneg guy, which means obviously I've
never met an indirection layer I couldn't love.  ;-)  But, only when it
makes sense, and the purpose here eludes me.

> > HTTP clients don't see those as the same resource, because there's
> > no fragment.  The semantics of the mapping remain static, and are
> > not a function of fragment contents.  They can even be handled by
> > different server processes, unlike with #! where they wind up being
> > handled by the same server process.
> HTTP Client is orthogonal here, separation of concerns, it's merely a 
> client connector for a transfer protocol (as covered above).
> The semantics of the mapping for http://twitter.com/ remain
> consistent (as covered above)...

Yes, yes, and everything is cacheable; that's not the issue, perhaps I
was a bit hasty there.

> > But, since those last two "proper" URIs are what get fetched anyway,
> > what's the point of imposing a round-trip to get there?  (That's a
> > rhetorical question -- obviously, such chicanery isn't needed when
> > proper architecture is followed from the get-go).
> "proper" URIs and a proper architecture eh - back to the good old
> days when twitter kept their application, and application state on
> the server, when countless amounts of data were resent on every GET 
> needlessly, when twitter couldn't scale and went down repeatedly, and 
> when latency was at an all time high, right, ok, let's encourage that.

The point is to encourage proper architecture, not Google-specific
kludges, as the solution to architectural failures.  CREST has some
good ideas for how to keep an application like Twitter architecturally
sound.  Obviously, REST must be extended to account for small-grain
data transfer, but it still applies as a fundamentally sound model of
the real-world network we're dealing with here.
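For reference, the Google-specific kludge in question is presumably the AJAX-crawling convention, in which a #! fragment is rewritten into a query parameter so a crawler can request a server-rendered snapshot of the page -- roughly:

```python
from urllib.parse import quote, urldefrag

def escaped_fragment(uri: str) -> str:
    # Rewrite http://host/#!state into http://host/?_escaped_fragment_=state
    # so the otherwise client-only fragment can reach the server.
    primary, frag = urldefrag(uri)
    if not frag.startswith("!"):
        return uri  # only hashbang fragments opt in to the scheme
    return primary + "?_escaped_fragment_=" + quote(frag[1:])
```

A round-trip and a rewriting convention, all to recover information that ordinary URIs carry for free.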

> In all honesty, I'm not going to carry on the fighting progress
> negative conversations for long, innovation and evolution have been
> going on for ages, and I want to help and spur the process along and
> indeed adopt the new techs and approaches myself, not needlessly try
> and say "it's bad" when it's clearly not. From a few mails back:
> [[
> Ultimately there's nothing wrong with JS applications or using
> HashInURI, it should be encouraged, the problems are:
>    - failing to publish data properly in a visible / web friendly
> manner
>    - trying to reference data by using a URI that doesn't refer to
> data ]]
> That's what I'll be focussing on.

I would hope you extend that focus to exposing links via hypertext;
it's this failure that generates the negativity, by breaking the
architecture we have -- which considers XHR optional.

Received on Sunday, 13 February 2011 03:55:57 UTC
