
[whatwg] <a onlyreplace>

From: Jonas Sicking <jonas@sicking.cc>
Date: Fri, 16 Oct 2009 22:22:11 -0700
Message-ID: <63df84f0910162222m27e3b54t91fc37829e0d669@mail.gmail.com>
On Fri, Oct 16, 2009 at 11:06 AM, Tab Atkins Jr. <jackalmage at gmail.com> wrote:
> Promoting this reply to top-level because I think it's crazy good.
>
> On Fri, Oct 16, 2009 at 11:09 AM, Aryeh Gregor <Simetrical+w3c at gmail.com> wrote:
>> On Fri, Oct 16, 2009 at 10:16 AM, Tab Atkins Jr. <jackalmage at gmail.com> wrote:
>>> As well, this still doesn't answer the question of what to do with
>>> script links between the static content and the original page, like
>>> event listeners placed on content within the <static>. Do they get
>>> preserved? How would that work? If they don't, then some of the
>>> benefit of 'static' content is lost, since it will be inoperable for a
>>> moment after each pageload while the JS reinitializes.
>>
>> Script links should be preserved somehow, ideally. I would like to
>> see this be along the lines of "AJAX reload of some page content,
>> without JavaScript and with automatically working URLs".
> [snip]
>> I'm drawn back to my original proposal. The idea would be as follows:
>> instead of loading the new page in place of the old one, just parse
>> it, extract the bit you want, plug that into the existing DOM, and
>> throw away the rest. More specifically, suppose we mark the dynamic
>> content instead of the static.
>>
>> Let's say we add a new attribute to <a>, like <a onlyreplace="foo">,
>> where "foo" is the id of an element on the page. Or better, a
>> space-separated list of elements. When the user clicks such a link,
>> the browser should do something like this: change the URL in the
>> navigation bar to the indicated URL, and retrieve the indicated
>> resource and begin to parse it. Every time an element is encountered
>> that has an id in the onlyreplace list, if there is an element on the
>> current page with that id, remove the existing element and then add
>> the element from the new page. I guess this should be done in the
>> usual fashion, first appending the element itself and then its
>> children recursively, leaf-first.
>
> This. Is. BRILLIANT.

[snip]

> Thoughts?
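[As a concrete illustration of the quoted proposal: the markup might look like the sketch below. Note that `onlyreplace` is only a proposed attribute, not an implemented one, and the ids and URLs here are invented.]

```html
<!-- Hypothetical markup for the proposal above. Clicking the link would
     fetch /articles/2 and swap in only the element whose id is listed,
     leaving the navigation untouched. -->
<div id="nav">...site navigation, preserved across navigations...</div>
<div id="content">Article one.</div>
<a href="/articles/2" onlyreplace="content">Next article</a>
```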

We actually have a similar technology in XUL called "overlays" [1],
though we use that for a wholly different purpose.

Anyhow, this is certainly an interesting suggestion. You can actually
mostly implement it using the primitives in HTML5 already. By using
pushState and XMLHttpRequest you can download the page and change the
current page's URI, and then use the DOM to replace the needed parts.
The only thing that you can't do is "stream" in the new content since
mutations aren't dispatched during parsing.
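As a rough sketch of that pushState/XMLHttpRequest approach (function names here are invented for illustration, and modern APIs like `responseType = "document"` and `closest()` are assumed; as noted above, the whole response is parsed before any swap happens, so nothing streams):

```javascript
// Pure helper: split an onlyreplace attribute value into a list of ids.
function parseOnlyReplace(value) {
  return (value || "").trim().split(/\s+/).filter(Boolean);
}

// Fetch the target page, then swap in each listed element by id.
function followOnlyReplaceLink(link) {
  var ids = parseOnlyReplace(link.getAttribute("onlyreplace"));
  var xhr = new XMLHttpRequest();
  xhr.open("GET", link.href);
  xhr.responseType = "document"; // have the browser parse the response
  xhr.onload = function () {
    history.pushState(null, "", link.href); // update the address bar
    ids.forEach(function (id) {
      var oldEl = document.getElementById(id);
      var newEl = xhr.response && xhr.response.getElementById(id);
      if (oldEl && newEl) {
        // Replace the old element with a copy imported from the new page.
        oldEl.parentNode.replaceChild(document.importNode(newEl, true), oldEl);
      }
    });
  };
  xhr.send();
}

// Intercept clicks on links carrying the (hypothetical) attribute.
if (typeof document !== "undefined") {
  document.addEventListener("click", function (event) {
    var link = event.target.closest && event.target.closest("a[onlyreplace]");
    if (link) {
      event.preventDefault();
      followOnlyReplaceLink(link);
    }
  });
}
```

Handling the 'back' button would additionally require a popstate listener that re-fetches and re-swaps the affected ids, which this sketch omits.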

I'm still a bit uneasy about this feature, though; it feels somewhat
fragile. One concern is what happens if the load stalls or fails
halfway through: you could end up with a page that is half old content
and half new. Also, what should happen if the user presses the 'back'
button? I don't know how big a problem these issues are, and they are
quite possibly fixable. I'm definitely curious to hear what developers
who would actually use this think of the idea.

/ Jonas

[1] https://developer.mozilla.org/en/XUL_Overlays
Received on Friday, 16 October 2009 22:22:11 UTC
