Re: [agenda] Web Performance WG Teleconference #81 Agenda 2012-09-12

Hey Boris,

On 12.09.12 22:51, "Boris Zbarsky" <bzbarsky@MIT.EDU> wrote:

>On 9/12/12 9:02 PM, Paul Bakaus wrote:
>>   * We don't know how much memory a newly created object or function
>>     allocates
>It depends on how it's allocated.  Furthermore, data structures are
>shared between elements, so allocating N objects that are sort of
>similar does not use N times the memory for a single object...
>What information do you really want to gather here?

Precisely the fact that similar structures don't "copy" memory, while some
others do, is why we'd like to find out. Let's say I create 100 similarly
formed objects (objects on a map), where only the x/y coordinates change -
I'd like to know how much additional memory each of these objects
consumes, to better gauge how large my total "pool" of objects should be.
Right now, the only thing we can do is create a JS test and then watch the
memory usage of the tab/browser rise.
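To illustrate how crude that workaround is, here's a minimal sketch. It uses Node's `process.memoryUsage()` as a stand-in for the kind of number we'd want the browser to expose; the `MapObject` shape is made up for the example:

```javascript
// Hypothetical map-object shape: identical structure, only x/y differ.
function MapObject(x, y) {
  this.x = x;
  this.y = y;
  this.sprite = "marker.png";
  this.visible = true;
}

// Crude delta measurement: snapshot heap usage, allocate N objects,
// snapshot again. GC timing makes single runs noisy, so this is only
// a rough estimate - exactly the problem described above.
function estimatePerObjectBytes(n) {
  const before = process.memoryUsage().heapUsed;
  const pool = [];
  for (let i = 0; i < n; i++) pool.push(new MapObject(i, i * 2));
  const after = process.memoryUsage().heapUsed;
  return (after - before) / n; // rough bytes per object
}

const approx = estimatePerObjectBytes(100000);
console.log(`~${approx.toFixed(1)} bytes per object (rough)`);
```

A GC pass in the middle of the loop can skew the number arbitrarily, which is why a real per-object cost API would be so much more useful than watching the heap rise.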

>>   * We don't know when an uncompressed, compressed or both
>>     representations of an image is kept in memory and for how long
>Some of this is implementation details that are hard to expose to script
>in sane ways.  Starting with the concept of "in memory".  For example,
>Gecko on some OSes doesn't even store decompressed images in its own
>address space; they're stored in the X process.  And/or stored in VRAM
>instead of RAM, independently of the X thing.
>Exposing some overall statistics on this would probably be ok.  Exposing
>detailed information about a specific image that's expected to be
>available synchronously and always correct is something I would be
>opposed to, because it precludes parallelization opportunities that I
>think we want to take.

I understand your argument, and it is one I've been hearing from many
browser developers. We have the specific issue that we can only have a
given amount of textures loaded at the same time - so imagine a game that
has multiple levels, with assets that to some extent change per level. We
need to be able to reliably understand how much memory the assets required
for that level consume (all images that will be used throughout the
level), and we need to be able to unload those we do not need right now.
Unfortunately, this is an urgent real-world problem.
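To make the use case concrete: if the browser reported per-asset memory cost, a game could enforce a per-level texture budget and evict what it no longer needs. A minimal sketch, where `estimatedBytes` (a simple width * height * RGBA guess) stands in for the reliable number we currently can't get, and all names are made up:

```javascript
// Sketch of a per-level texture budget. The byte figures are guesses
// (width * height * 4 bytes for RGBA) because no API reports the real cost.
class AssetBudget {
  constructor(maxBytes) {
    this.maxBytes = maxBytes;
    this.assets = new Map(); // asset name -> estimated bytes
    this.used = 0;
  }
  estimatedBytes(width, height) {
    return width * height * 4; // uncompressed RGBA estimate
  }
  load(name, width, height) {
    const bytes = this.estimatedBytes(width, height);
    if (this.used + bytes > this.maxBytes) {
      throw new Error(`loading ${name} would exceed the texture budget`);
    }
    this.assets.set(name, bytes);
    this.used += bytes;
  }
  unload(name) {
    const bytes = this.assets.get(name) || 0;
    this.assets.delete(name);
    this.used -= bytes;
  }
}

const budget = new AssetBudget(64 * 1024 * 1024); // 64 MB, arbitrary cap
budget.load("level1/tiles", 2048, 2048);
budget.load("level1/sprites", 1024, 1024);
budget.unload("level1/tiles"); // freed before loading level 2 assets
console.log(budget.used); // 4194304
```

The whole scheme stands or falls on `estimatedBytes` being accurate, which is exactly the information we're asking the browser to provide.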

>> We need information on when layers are created, destroyed and
>> recomposited.
>This happens asynchronously, in many cases in a different process or
>different thread.  Synchronously exposing information about this to the
>DOM would be highly undesirable as we try to parallelize layout engines.
>  Async notifications are likely possible.

Async notifications are totally fine.

>>     4. Total available memory
>There are some fingerprinting concerns here.  And of course as you say
>the number gives you basically 0 in the way of guarantees...

Even though it doesn't guarantee anything, most mobile platforms today
have a single-app usage pattern, meaning you use one app at a time, so it
is reasonably safe to assume that memory consumption won't dramatically
increase due to another app running on the device while the user is
browsing your site.

>>   * Understand the execution interval of GC (i.e an event triggered on
>>     the window)
>What does this mean in a world where parts of GC happen async on
>separate threads?

I wasn't aware that GC can happen async nowadays, and I have never seen it
in action. If so, that would mean that GC would never block the UI thread,
in which case I'm happy remaining in the dark about these details.

So in short - as soon as this is the case across browsers, I don't need
this feature any longer.

>>   * Understand the time a GC took (in order to optimize our framerate
>>     against it, could be reported through the same event)
>Same question.

Same answer :) If it is happening on the UI thread, we need to know.

>>   * Disable GC and only trigger it manually
>You would have to do a good bit of convincing on this one, I think.  My
>first reaction being somewhere between "heck, no" and "no way".  ;)
>This is a huge footgun, and one incredibly likely to get misused all
>over the place.

Very true. This one can only be implemented if all of the above is
implemented - meaning, once the user has total awareness of what code or
asset uses memory (and when), and is in theory in full control, this
option makes sense. At that point, I could see this scenario:

1) I know the total number of assets my game requires
2) I do all the asset/code pooling myself
3) I'm reasonably confident to be able to keep the heap at a certain level
on my own
4) I want no-compromise, maximum execution performance
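Steps 1-3 above amount to manual pooling; a minimal sketch of the idea (the `Bullet`-style object and the pool size of 256 are made up for illustration):

```javascript
// Fixed-size pool: all allocation happens up front, and objects are
// recycled instead of freed, so the heap stays flat at steady state -
// the precondition for ever daring to turn GC off.
class Pool {
  constructor(factory, size) {
    this.free = [];
    for (let i = 0; i < size; i++) this.free.push(factory());
  }
  acquire() {
    if (this.free.length === 0) throw new Error("pool exhausted");
    return this.free.pop();
  }
  release(obj) {
    this.free.push(obj); // recycle instead of leaving garbage
  }
}

// Hypothetical game object; the total count is known in advance (step 1).
const bullets = new Pool(() => ({ x: 0, y: 0, live: false }), 256);
const b = bullets.acquire();
b.x = 10; b.y = 20; b.live = true;
bullets.release(b); // back into the pool, no garbage created
console.log(bullets.free.length); // 256
```

With every object recycled this way, a game loop allocates nothing after startup, which is what would make step 4 plausible.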

This is a scenario where I'd like to try disabling the GC. You know this
better than I do, though - is this technically simple? Maybe we could
produce a test build of a browser with that feature and prototype a little
to see if it is worth exploring that path.

Thanks for your feedback!


Received on Thursday, 13 September 2012 11:15:03 UTC