- From: Yoav Weiss <yoav@yoav.ws>
- Date: Tue, 2 Oct 2018 21:51:37 +0200
- To: public-web-perf <public-web-perf@w3.org>
- Message-ID: <CACj=BEjbtU4_5k+19doSRY0M1NiRqnfyCoYYK79z8+G0QA0HXg@mail.gmail.com>
I'd like to apologize for not sending out an invite before this evening/morning design call. Minutes from the call are available at <https://docs.google.com/document/d/e/2PACX-1vRI4Ys5h-nzEDGetXrwbEtX9n0VHicAfpAhQ23maXUUmTr18bSjvL7fCjoB461oaoVYKRKzu67IVVgv/pub>:

WebPerfWG - October 2nd minutes

Present: Nate Schloss, Andrew Comminos, Tim Dresser, Vladan Djeric, Panagiotis, Shubhie Panicker, Todd Reifsteck
Chair: Yoav Weiss

JS profiling API proposal
Slides <https://docs.google.com/presentation/d/19jT7AAjI9L_twWmGAIykHrRHjj0PGv0cQ54AzHjua30/edit#slide=id.p1> and proposal <https://vdjeric.github.io/js-self-profiling/>

Andrew: It's not easy to measure JS performance in the wild without intrusive instrumentation. Polyfilled the API with worker-thread sampling using a shared array buffer. The proposed solution is to add a JS profiling capability to the performance APIs. For security, avoid sharing frames from cross-origin scripts. Want the API to be usable for both client-side and server-side aggregation. The API is straightforward: JS requests that a profiler be started, and multiple scripts all share a single profiler and co-exist. The trace format returned is very similar to the V8 and Gecko trace formats, so existing infrastructure can be reused (a rough usage sketch follows at the end of this topic). Want the API to eventually support more sample types: GC, etc. A design goal is to limit what client-side JS gets from the profile, so the UA decides the characteristics of the sampler and records them in the trace. Traces can easily be serialized and sent to the server, or modified on the client and sent along with the results. One big concern is privacy: want to filter out cross-origin scripts. In terms of size, 45 seconds of JS tracing produced under 1MB of output with 1ms sampling.
Tim: There's a discussion on GitHub about function calls being optimized out. If functions don't have side effects, do they necessarily need to be reported?
Andrew: I don't think that not reporting them would be a big drawback.
Tim: Would not including functions with no side effects make this implementable?
Ryosuke: Implementable, but not sure if desirable.
Tim: It certainly simplifies profiling.
Ryosuke: Yeah, but the profiler will slow down JS execution by 5-10%, so not sure we want that everywhere.
Andrew: That should be a function of the sample rate; if sampling, it shouldn't impact everyone.
Tim: Sounds like we're concerned about misuse, so maybe we can make the API less of a footgun.
Andrew: Adding a new profiling capability is worth the risk of misuse.
Ryosuke: Measuring performance impacts performance. But sampling may be good enough.
Andrew: Instrumentation has severe drawbacks: either ship it to everyone or take cache misses. This has no impact on non-sampled users. The polyfill on Firefox has less than 5% impact.
Ryosuke: How does this isolate scripts from different origins? Do we need to track script origins now?
Andrew: Scripts would contain the spawning origin.
Ryosuke: But we can't have all scripts contain the frame that referred to them.
Panagiotis: What happens if you have profilers in multiple tabs? Overhead can get rather large; 5-10% may be optimistic.
Andrew: We can set limits on instantiation.
Nate: Seems like there's a lot of interest from large properties.
Vladan: There's interest from third-party providers in supporting this.
Andrew: Next steps are polishing the spec and implementing in Chrome.
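To make the flow above concrete, here is a minimal sketch of how a page might drive such a sampling profiler: start it, do the work of interest, stop it, and ship the resulting trace for server-side aggregation. The names used (performance.profile(), the sampleInterval option, profiler.stop()) are assumptions loosely based on the proposal's description, not a settled API, and the reporting endpoint is made up:

    // Hypothetical usage sketch; API names are assumed, not final.
    async function profileCriticalWork(doWork) {
      // Ask the UA to start sampling. The UA decides the actual sampler
      // characteristics and records them in the trace it returns.
      const profiler = await performance.profile({
        sampleInterval: 1  // ms; the minutes cite ~1ms sampling, <1MB per 45s
      });

      await doWork();

      // Stopping yields a trace similar in shape to the V8/Gecko formats.
      const trace = await profiler.stop();

      // Serialize and send for server-side aggregation (endpoint is made up),
      // or post-process on the client before sending.
      navigator.sendBeacon('/profile-traces', JSON.stringify(trace));
    }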
Scheduling API <https://github.com/spanicker/main-thread-scheduling>

Shubhie: With main-thread contention it's difficult to build a responsive app. Trying to break up main-thread work and use workers better, and trying to make that easier. The focus of this presentation is main-thread scheduling. Looked at different JS schedulers (Maps, React) and found a number of gaps in the platform where a lower-level API is needed. The schedulers aim to achieve a high frame rate while staying responsive to user input without starving other work. In Maps there are a lot of fetches happening while the user interacts, so all that work needs to be coordinated; similarly in React, async rendering has to be coordinated with user input.
Nate: And the React scheduler is way less efficient than it could be.
Tim: We want to expose the low-level primitives for e.g. Facebook to do a better job, but also want to expose high-level APIs that let developers indicate the importance of tasks and let the browser do the right thing.
Requirements: get out of the way of important work; schedule work reliably without invoking rendering. Current workarounds rely on rAF, so they have overhead, and tying tasks to rendering competes with rendering itself (a sketch of this workaround follows at the end of this topic). So we want people to be able to post a microtask.
Ryosuke: How is that different from posting a message?
Shubhie: postMessage is often done at the end of rAF; it's indirect and doesn't indicate priority to the browser.
Ryosuke: How is that different from rIC?
Shubhie: rIC is about idle-time work, and you may never be idle, which can create starvation.
Tim: So we want setImmediate?
Shubhie: Yes, basically.
Todd: The use cases look at "after rAF" and setImmediate; aren't those different use cases?
Shubhie: Yeah, there's microtask priority and default priority.
Todd: If you're in the middle of a frame, with 3 ms until paint, you want to know how far along you are, and then schedule the task either before or after the paint, no?
Shubhie: Sometimes that's the case, but sometimes the work is independent of rendering.
Tim: Having that information seems useful.
Todd: If we expose a task queue with priorities, browsers will implement them, but we'd have to specify the task queue execution order for interop.
Shubhie: Yeah.
Ryosuke: Priorities are really hard. Boosting tasks can get really tricky. There be dragons.
Shubhie: Agreed that we need to keep it simple. Suggest we fill in the minimal platform gaps; beyond that we need a high-level API. There's a coordination problem with multiple scripts. Want to plug the bare-minimum gaps.
Ryosuke: Need to look at Maps and React and see which hacks can be replaced.
Shubhie: Did that; happy to dive into the case studies. Next step is to continue the discussion.
Todd: Would be great to share the test-case analysis. I like the direction you're going.
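For reference, the rAF-based workaround discussed above (roughly the pattern schedulers such as React's use today) looks like the sketch below: queue a requestAnimationFrame callback and post a message from inside it, so the queued work runs as an ordinary task after the frame is produced. The helper name postAfterNextFrame is purely illustrative; the point is the indirection and the lack of any priority signal to the browser:

    // Sketch of the current workaround: run work "right after the next frame"
    // without a native setImmediate-style API.
    const channel = new MessageChannel();
    const queue = [];

    channel.port1.onmessage = () => {
      // Runs as an ordinary task after rendering; the browser gets no
      // priority hint about this work.
      const task = queue.shift();
      if (task) task();
    };

    function postAfterNextFrame(task) {
      queue.push(task);
      requestAnimationFrame(() => {
        // Posting from inside rAF defers the task until after this frame.
        channel.port2.postMessage(undefined);
      });
    }

    // Usage: chunk long work and yield to rendering and input between chunks.
    postAfterNextFrame(() => console.log('ran after the frame was produced'));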
Received on Tuesday, 2 October 2018 19:52:12 UTC