
Re: [whatwg] Question on Limits in Adaption Agency Algorithm

From: Yasuhiko Minamide <minamide@cs.tsukuba.ac.jp>
Date: Fri, 02 Aug 2013 15:32:14 +0900 (JST)
Message-Id: <20130802.153214.537764918763355929.minamide@cs.tsukuba.ac.jp>
To: ian@hixie.ch
Cc: jgraham@opera.com, uezato@score.cs.tsukuba.ac.jp, whatwg@whatwg.org, hsivonen@iki.fi, minamide@cs.tsukuba.ac.jp, w3c@adambarth.com

On Mon, 1 Jul 2013, Ian Hickson wrote:
> 
> One option would be to remove from the stack of open elements any 
> elements that we are skipping when we bail out of the AAA.
> 
> Can anyone see a problem with doing that?

I think that this solves the issue and clarifies the behaviour of
the parser.

> 
> The limits are really intended to reduce the memory consumption of the 
> algorithm, not its running time. Elements are expensive, and this 
> algorithm can create exponentially more elements than tags in the markup, 
> when it's not limited.
> 

Although I agree with the revision above, I would like to clarify
the behaviour of the AAA.

I don't think that the AAA can create exponentially more elements,
even when it is not limited.

Let D be the depth of the stack of open elements and S the size of the DOM
tree that has been constructed so far.

Then, after one application of the algorithm:

* The size of the DOM tree is at most S + D. 
* The depth of the stack of open elements is at most D.

Even if the algorithm is applied n times, the size of the DOM tree is bounded
by S + nD, not S * 2^n.

In total, I think that the number of elements in the DOM tree is in
O(m^2), where m is the number of tags in the source HTML: both the number
of applications n and the stack depth D are bounded by m, so the tree has
at most m + m * m elements.
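To make the arithmetic concrete, here is a small Python sketch. It is purely
illustrative (not parser code), and the helper name `max_tree_size_after_runs`
is my own invention; it just tracks the S + nD bound argued above:

```python
def max_tree_size_after_runs(S, D, n):
    """Upper bound on DOM size after n applications of the algorithm,
    assuming each application adds at most D elements to the tree."""
    size = S
    for _ in range(n):
        size += D  # each run adds at most D elements, never doubles the tree
    return size

# With m tags in the source, both n and D are at most m,
# so the bound is quadratic in m, not exponential:
m = 100
bound = max_tree_size_after_runs(S=m, D=m, n=m)
assert bound == m + m * m  # S + nD, i.e. O(m^2)
```

The point of the sketch is only that growth per application is additive (at
most D), so repeated applications compose to a polynomial, not an exponential.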

Yasuhiko Minamide
Received on Friday, 2 August 2013 06:32:41 UTC
