Re: [agenda] Web Performance WG Teleconference #81 Agenda 2012-09-12

On 13.09.12 13:47, "Boris Zbarsky" <bzbarsky@MIT.EDU> wrote:

>On 9/13/12 12:14 PM, Paul Bakaus wrote:
>> Precisely the fact that similar structures don't "copy" memory, but some
>> others do, is why we'd like to find out. Let's say I create 100
>> similarly formed objects (objects on a map), and only the x/y
>> coordinates change; I'd like to know how much additional memory
>> footprint each of these objects consumes, to better gauge how large my
>> total "pool" of objects
>> should be. Right now, the only thing we can do is create a JS test and
>> then watch the memory usage of the tab/browser rise.
>
>OK.  But the point is that creating a single object and measuring its
>memory usage won't tell you what you want here, right?  You actually
>have to create 100 objects and ask how much memory those are using as a
>set.  Or something?

Correct. We will still need to go that route, but can then automate it
inline.
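
For what it's worth, a rough inline version of that test can already be
sketched against Chrome's non-standard performance.memory object (the heap
numbers there are quantized and GC timing can skew them, so treat the
result as a coarse estimate, not a measurement):

  // Sketch: estimate the incremental heap cost of n similarly shaped
  // objects via the non-standard, Chrome-only performance.memory API.
  // Quantized values plus unpredictable GC make this a coarse estimate.
  function estimatePerObjectCost(n) {
    var before = performance.memory.usedJSHeapSize;
    var pool = [];
    for (var i = 0; i < n; i++) {
      pool.push({ x: i, y: i, sprite: "unit.png" }); // same shape, varying x/y
    }
    var after = performance.memory.usedJSHeapSize;
    return (after - before) / n; // approximate bytes per object
  }

  var costPerObject = estimatePerObjectCost(100);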

>
>> I understand your argument and it is one I've been hearing from many
>> browser developers. We have the specific issue that we can only have a
>> given number of textures loaded at the same time; so imagine a game
>> that has multiple levels and assets that to some extent change per
>> level. We need to be able to reliably understand how much memory the
>> assets consume
>
>My point is that this presupposes definitions for "memory" and "consume"
>that are not obvious to me.

Fair point. For memory, I'm differentiating between something that
resembles a "disk" and actual RAM (high-performance, non-persistent
buffers). Most, if not all, mobile and desktop computing systems I am
aware of follow that hardware model. "Consume" is not obvious to me
either; my whole point is that we need to make it obvious. It needs to be
obvious enough that a web developer can say "ah, of course, that texture
makes my app slow because of a, b, c."

>
>> Even though it doesn't guarantee anything, on most mobile platforms
>> today there's a single-app usage pattern, meaning you use one app at a
>> time, so it is reasonably safe to assume that memory consumption won't
>> be dramatically increased by another app's thread on the device while
>> the user is browsing your site.
>
>But it can increase for other browser tabs, no?  Also for the browser's
>own internal bookkeeping.

On mobile platforms I don't think so; is there a mobile browser that
doesn't "freeze" tabs that are not in focus?

>
>>> What does this mean in a world where parts of GC happen async on
>>> separate threads?
>>
>> I wasn't aware of the fact that GC can happen async nowadays and have
>> never seen it in action.
>
>I said "parts of GC".  It happens all the time in Spidermonkey; for
>example finalization has been on a background thread for most gcthings
>since Firefox 6.  Other parts of GC happen on the main thread.
>
>It sounds like what you're really looking for is "pause time on main
>thread due to gc" not "gc duration"?  This also matters because some GC
>implementations are incremental, with GC yielding control after running
>for a bit but before finishing the GC.  Again, Spidermonkey certainly
>does this.

Yes, sorry, you are right. The pause time is what interests me, not the
total duration of the GC.

>
>> Same answer :) If it is happening on the UI thread, we need to know.
>
>Yes, but what do you need to know, exactly?  Total main-thread GC time,
>individual main-thread pause times, something else?

See above, pause times on the main thread are what interests me as a
developer.
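
There's no direct "GC pause" API today, but as a stopgap one can at least
detect long main-thread pauses by measuring timer drift. The obvious
caveat: this catches any main-thread stall (layout, script, and GC alike),
so it's a proxy, not a GC-specific timer:

  // Sketch: flag long main-thread pauses by checking how late a short
  // setInterval callback fires. It cannot attribute a pause to GC
  // specifically; it reports any main-thread stall.
  var EXPECTED_MS = 50;
  var last = Date.now();
  setInterval(function () {
    var now = Date.now();
    var pause = now - last - EXPECTED_MS;
    if (pause > 16) { // longer than one 60fps frame; threshold is arbitrary
      console.log("main thread stalled ~" + pause + " ms");
    }
    last = now;
  }, EXPECTED_MS);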

>
>>> You would have to do a good bit of convincing on this one, I think.  My
>>> first reaction being somewhere between "heck, no" and "no way".  ;)
>>> This is a huge footgun, and one incredibly likely to get misused all
>>> over the place.
>>
>> Very true. This one can only be implemented if all of the above is
>> implemented - meaning, only when the developer has total awareness of
>> what code or asset uses memory (and when), and is in theoretical full
>> control, does this option make sense.
>
>I'm not convinced it does, because in my experience people have a
>tendency to think they know what's going on when they actually don't...
>  I mean in terms of the "total awareness" you describe, not in terms of
>competence.
>
>In particular, there are allocation sources that are effectively tied
>into GC that don't seem like they would be reported on if "all of the
>above" is implemented.  Worse yet, these are likely to be
>browser-specific, and it would be easy for pages to not know they need
>to check for them at all, even if they were exposed.
>
>> 1) I know the total number of assets my game requires
>> 2) I do all the asset/code pooling myself
>> 3) I'm reasonably confident to be able to keep the heap at a certain
>>level
>> on my own
>> 4) I want no-compromise, maximum execution performance
>
>I think that #3 is a huge assumption that is likely to be false in
>practice...
>
>> This is a scenario where I'd like to try disabling the GC. You know this
>> better than I do, though; is this technically simple?
>
>Disabling GC?  It's obviously pretty easy technically to add a "don't
>GC" flag.  Making it not just explode in people's face is the hard part.
>
>How do you see the "no gc" thing working when multiple pages share a GC
>heap?  Would you be just trying to turn off page-local GC or also global
>GC?


Disabling global GC is hazardous, as you should only be able to disable GC
in situations where, theoretically at least, you are in full control. But
I can see your points. I think my proposal to disable GC completely is the
last thing we should try, if all other attempts at educating developers on
how to keep main-thread pauses below a certain threshold fail.
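
To make the "keep the heap at a certain level on my own" part concrete,
the technique I have in mind is plain object pooling: pre-allocate
everything up front and reuse it, so steady-state gameplay allocates
(almost) nothing and gives the GC little reason to pause the main thread.
A minimal sketch (the names are illustrative):

  // Minimal object pool: allocate all map objects once, then acquire
  // and release them instead of creating garbage in the game loop.
  function Pool(size, create) {
    this.free = [];
    for (var i = 0; i < size; i++) this.free.push(create());
  }
  Pool.prototype.acquire = function () {
    return this.free.pop() || null; // null = pool exhausted, caller decides
  };
  Pool.prototype.release = function (obj) {
    this.free.push(obj);
  };

  var units = new Pool(100, function () {
    return { x: 0, y: 0, sprite: null };
  });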

I also agree that people tend to think they know what's going on when in
reality they don't. This is a huge problem we need to overcome. If
anything, I teach my co-workers how browsers work; and even though I have
had a lot of exposure to browser vendors and internals, I still feel very
green and unsure about certain ways to optimize.

>
>-Boris
>

Received on Monday, 17 September 2012 10:09:43 UTC