- From: Shelby Moore <shelby@coolpage.com>
- Date: Thu, 21 Oct 2010 12:32:59 -0400
- To: "Boris Zbarsky" <bzbarsky@MIT.EDU>
- Cc: "www-style list" <www-style@w3.org>
> On 10/21/10 11:51 AM, Shelby Moore wrote:
>> The constraint algorithm places each 4D relationship vector into a hash
>> table, hashed on element for each of the elements in the pair of the
>> relationship. It also places each derivative relationship into these hash
>> tables. The size of the hash table is the combination
>> n! / (2 * (n-2)!) = n * (n - 1) / 2.
>
> OK. So in the typical web page case of N in the thousands to hundreds
> of thousands, we're talking 1e6 -- 1e10 constraints on those thousands
> to hundreds of thousands of variables, correct?

Good point. I am glad we can get some initial thinking about quantification.

The key may be how the multi-level hierarchy is handled. I was supposing it
is necessary that the hierarchy places some restrictions on combinations, as
is the case with the CSS box model.

> And these are constraints on variables with, typically, positive reals
> or reals as domains, correct?

I have not yet contemplated how large the per-table-entry data structure is,
but if it is less than 1024 bytes, then n = 1024 is less than 1 GB of virtual
memory (hopefully physical memory), and let's just ballpark-guesstimate on
the order of less than Gflops. I think current gaming GPUs are on the order
of Tflops now.

There are probably optimizations too, for example when a grouping all has
the same relationship (e.g. the equivalent of {display:inline}).

> It's not clear to me that this setup is implementable on current
> hardware in a way that would actually be usable.... am I missing
> something? I have to admit that constraint solvers are not my strong
> suit....

I think I countered that above. I do appreciate you raising the concern. You
have been helping to focus my thought process.
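To make the ballpark concrete, here is a rough Python sketch of what I have
in mind (not an implementation; the relationship(a, b) function and the
1024-byte entry size are placeholder assumptions). It shows one reading of
the pair-wise indexing and the memory estimate for n = 1024:

    # Rough, hypothetical sketch: each pairwise relationship is a 4D vector,
    # inserted under both elements of the pair so either element can be used
    # for lookup. The number of distinct pairwise relationships is
    # n * (n - 1) / 2.

    from itertools import combinations

    def pair_count(n):
        # C(n, 2) = n! / (2 * (n - 2)!) = n * (n - 1) / 2
        return n * (n - 1) // 2

    def memory_estimate_bytes(n, bytes_per_entry=1024):
        # Ballpark assumption: at most 1024 bytes per table entry.
        return pair_count(n) * bytes_per_entry

    def build_relationship_index(elements, relationship):
        # relationship(a, b) is assumed to return the 4D vector for the pair
        # (a, b); derivative relationships would be merged in the same way.
        index = {e: {} for e in elements}
        for a, b in combinations(elements, 2):
            vector = relationship(a, b)
            index[a][b] = vector  # hashed on each element of the pair
            index[b][a] = vector
        return index

    if __name__ == "__main__":
        n = 1024
        print(pair_count(n))                     # 523776 distinct pairs
        print(memory_estimate_bytes(n) / 2**30)  # ~0.5, i.e. under 1 GB

        elems = ["html", "body", "div"]
        idx = build_relationship_index(elems, lambda a, b: (0.0, 0.0, 0.0, 0.0))
        print(idx["body"]["div"])                # (0.0, 0.0, 0.0, 0.0)

Hierarchy restrictions of the kind mentioned above would prune the set of
pairs actually stored, so n(n-1)/2 is the worst case, not the expected size.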
Received on Thursday, 21 October 2010 16:33:31 UTC