Minutes from WebPerfWG call - Thu Jan 10 @ 11am PST

On Tue, Jan 8, 2019 at 6:10 PM Yoav Weiss <yoav@yoav.ws> wrote:

> Hey all,
>
> Please join us on our next call (hangout
> <https://meet.google.com/nog-ttdz-myg?hs=122>) this Thursday, which will
> be mostly focused on new feature design.
> On the agenda
> <https://docs.google.com/document/d/10dz_7QM5XCNsGeI63R864lF9gFqlqQD37B4q8Q46LMM/edit?pli=1#heading=h.rv76tfrvm3ku>
> we currently have:
>
>    - In-progress resource requests - npm
>    - hasPendingUserInput feedback - acomminos
>    - LayoutStability feedback - Greg Whitworth & Rossen Atanassov
>    - Graduating Resource Timing L2 - yoav
>
> If there's anything else you want to discuss, feel free to add it to the
> agenda.
>
> See y'all there! :)
> Yoav
>
> P.S. If you want to be added to the meeting's calendar invite, feel free
> to ping me.
>

Thanks to all who participated in the call!
Minutes are now available
<https://docs.google.com/document/d/e/2PACX-1vRT4wJUFKzJKx2VjDIWc19g-tdIzkAnbLsS21dw03yyWQz8NBoGsgEXfTzGIme-0DAK7SmV_L4RDIV-/pub>.
Copying them here for safekeeping:

WebPerfWG call - Jan 10 2019 - meeting minutes
Participants

Mathias Bynens, Gilles Dubuc, Tim Dresser, Steve Kobes, Yoav Weiss, Andrew
Comminos, Nicolás Peña, Nate Schloss, Nic Jansma, Greg Whitworth, Phil
Walton, Shubhie Panicker, Todd Reifsteck
LayoutStability feedback
<https://www.google.com/url?q=https://gist.github.com/skobes/2f296da1b0a88cc785a4bf10a42bca07&sa=D&ust=1547157796491000>
- Greg Whitworth

Tim: The basic objective is to quantify content jumping around and annoying
users. It’s not perf-related. Tried to formulate it so that a well-behaved
page will get a score of 0 (modulo some animation edge cases).

Greg: I knew about the use-case. My main issue is that layout perf is a
common problem with partner teams. The top worry is the name; I’d prefer to
scope the name better to avoid confusion.

In the example, it’s typically an external resource that’s moving things
around.

Tim: So you also want attribution?

Greg: Would be great if v2 would also provide insights into where the
problem is coming from.

Tim: thoughts on “layout stability” as a name?

Greg: Afraid that people will reach for this to quantify unrelated layout
issues, so I’d prefer the name not to include “layout” in it.

If you have a flexbox with containers in it and you animate them, it can be
janky on some hardware, which this metric will not flag. This is focused on
page-load layout.

Tim: Next step is to propose a bunch of names and bikeshed!
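
For illustration, a minimal sketch of how a page might consume the metric Tim
describes via PerformanceObserver; the entry type name and the per-entry score
field are assumptions based on the linked gist, not a shipped API:

```
// Hypothetical: accumulate a page-level score from the proposed layout
// stability entries ('layoutShift' and `value` are placeholder names).
let cumulativeScore = 0;

const po = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // A well-behaved page should accumulate a score of 0.
    cumulativeScore += entry.value;
  }
  console.log('layout stability score so far:', cumulativeScore);
});

po.observe({ entryTypes: ['layoutShift'] });
```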
Expose JavaScript code caching information in PerformanceResourceTiming
<https://www.google.com/url?q=https://github.com/w3c/resource-timing/issues/190&sa=D&ust=1547157796492000>
- Mathias

Mathias: JS engines (V8, SpiderMonkey, and coming to JSC) implement code
caching. Heuristics differ between implementations, but it’d be good if
developers could see the benefits they get from it.

The proposal adds one boolean property to RT entries so that developers can
split their data based on cached code vs. uncached code.

Posted a GH issue. Gmail is interested in knowing the benefits of code
caching.
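
As an illustration of the proposal, a sketch of splitting script timings on
such a flag; `codeCached` is a placeholder name, not the actual IDL from the
issue:

```
// Hypothetical: bucket resource timing data for scripts by whether the
// engine served a code cache hit (`codeCached` is a placeholder boolean).
const scripts = performance.getEntriesByType('resource')
  .filter((entry) => entry.initiatorType === 'script');

for (const entry of scripts) {
  const bucket = entry.codeCached ? 'code-cached' : 'not-code-cached';
  // Report durations into separate buckets, e.g. to a RUM endpoint.
  console.log(bucket, entry.name, entry.duration);
}
```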

Ilya: Do we have a sense of impact here? Would an optimized site be faster?

Mathias: Parse + compile effectively goes away; you can start running code
right away after loading it from disk, so code caching makes things
significantly faster. Working on documentation for our V8 heuristics right
now. Blog post coming.

Todd: There’s still disk I/O cost. Parse and compile costs are replaced by
serialization/deserialization costs. There are differences between
implementations on that front.

Tim: We previously talked about adding a property that exposes “processing
time”. Would it include that?

Mathias: I support that attribute, but it’s worthwhile to include both.

Phil: Instead of exposing a boolean, can we expose parse+compile time?

Mathias: It’s easier to know a boolean than to trace particular costs;
measuring those timings may itself have a perf cost.

Gilles: Why would my JS be cached but not code-cached? What can I do about
it?

Mathias: Different heuristics per implementation, but the Gmail team
revealed detailed plans on how they would use this API if we do decide to
add it.

Phil: In Chrome, you can use a service worker to increase the amount of code
caching.
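
A minimal sketch of that pattern: precache scripts into Cache Storage from a
service worker and serve them cache-first. How much extra code caching this
buys is a Chrome implementation detail, not guaranteed by spec (the URLs are
illustrative):

```
// sw.js - precache scripts and serve them cache-first.
const CACHE_NAME = 'script-cache-v1';
const SCRIPT_URLS = ['/js/app.js', '/js/vendor.js']; // illustrative URLs

self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.addAll(SCRIPT_URLS))
  );
});

self.addEventListener('fetch', (event) => {
  if (event.request.destination === 'script') {
    event.respondWith(
      caches.match(event.request).then((hit) => hit || fetch(event.request))
    );
  }
});
```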

Tim: How does this relate to threaded/streamed parsing? Do we want to report
time on the main thread?

Todd: Want to report blocking time. Background time should be reported, but
won’t have UI impact.

Mathias: In favor of exposing more, but it would be great to surface the
information without a significant cost. Code caching fits that.

Tim: Don’t we measure the duration of parse and compiles anyway in Chrome?

Mathias: Without DevTools open or without tracing enabled? Not sure.

Tim: Would be good to see if other browser vendors think that it’s worth it.

Todd: Other vendors are missing from this call due to geographical issues.

Nic: Would love that for our reporting. Did one-off testing to benchmark
that, but it would be great as a RUM metric.

Nate: Similarly, would be great to correctly measure this everywhere. We
kinda guess this, but exposing it would be better.

Nic: Understanding more about the computational complexity would be good.

Nate: Can also help us to optimize our bundles

Mathias: Might be good to expose why scripts weren’t code-cached, but as a
devtools feature rather than an API. It’s too implementation-dependent to
be an API.

Todd: Are there device- or condition-specific heuristics? Developers could
benefit from having those heuristics exposed.

Yoav: Yeah, but this takes us most of the way there

Ilya: Privacy considerations?

Mathias: I don’t think this exposes more information than what is already
available through timing attacks.

Ilya: I guess that goes back to when a resource is cached but not
code-cached. That may provide extra entropy. Worth exploring in the
explainer.

Nicolás: For a script loaded from multiple origins, can a new caller know
that it ran before?

Mathias: Not in Chrome’s implementation; we key on URL.

Tim: Also, caching is already exposed. Todd, do we want to expose the time
it took for processing, beyond caching?

Todd: Yeah. In Edge, cache can sometimes be worse than recomputing if I/O
is the bottleneck.

Tim: So worthwhile to have a boolean vs. a processing time attribute.

Ilya: But processing time can still answer some of the use-cases.

Todd: For browser makers, it can be useful.

Mathias: Action item - will look into fingerprinting concerns.
In-progress resource requests
<https://www.google.com/url?q=https://docs.google.com/presentation/d/1x6QTUdrXtk0faWT1zOTIPdyoKno3WFwhzB3L-mqNYxY/edit?usp%3Dsharing&sa=D&ust=1547157796496000>
- npm

Nicolás: A more concrete proposal following TPAC. Use-cases: network
quiescence, busy indicator. Want to expose an array of in-the-air requests.
This plus PerformanceObserver (PO) gives you all the information you need.

Not using the observer approach as some use-cases require the information
to be available immediately, which doesn’t fit the observer async pattern.

Concrete IDL proposal in the slides. Names may not be ideal.

Includes the name of the initial request, not including redirects. Also
includes initiatorType, similar to RT.
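
As an illustration only, a sketch of the busy-indicator use-case; the accessor
name below is a placeholder, since the concrete IDL lives in the slides:

```
// Hypothetical: `performance.getPendingRequests()` stands in for the
// proposed accessor returning the array of in-the-air requests.
function isNetworkBusy() {
  return performance.getPendingRequests().length > 0;
}

// e.g. drive a network spinner off the synchronous snapshot:
document.getElementById('spinner').hidden = !isNetworkBusy();
```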

Yoav: initiatorType is not very useful in its current form, so we may not
want to replicate it here.

Ilya: What’s “available immediately”? What’s the requirement that prevents
the PO pattern?

Nicolás: As soon as the main thread is aware of a fetch, it should be
exposed, vs. on the next task with a PO.

Ilya: What would that enable?

Tim: The network spinner use-case would be one task behind, so it may miss
stuff.

Ilya: OK, so post-onload, I need to know what’s up in the air at the moment.

Todd: Could be, if the entries are enqueued immediately. So nothing is
missed.

Tim: It’s fairly rare here that you want an observer. I don’t think that we
have use-cases that actually want that.

Phil: The use-cases are valid, but maybe it should be exposed in Fetch?

Nicolás: Fetch doesn’t currently have any monitoring APIs, but maybe? Seems
like the use-cases are performance related.

Tim: The fact that this looks like RT is a good argument for keeping it here.

Ilya: We also want to add a Fetch ID to RT. Maybe worth asking Anne and
others. We considered a FetchObserver before, so this may be related.

Todd: It would also be helpful to explore the use-cases and examples.

Nic: Something like initiatorType is very valuable here. We care about
certain resource types.

Yoav: Maybe request.destination, similar to RT L3.

Nic: Would love to get notified rather than polling. If we’re monitoring an
SPA, we hook into the start event, but need to know when to stop. The
current IDL would force polling, rather than just waiting.

Tim: You could re-poll every time a resource is done

Nic: That would give you slices of the picture, but not the full picture.
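
For illustration, a sketch of the re-polling pattern for the SPA quiescence
case, again using the placeholder accessor name:

```
// Hypothetical: resolve once nothing is in the air, re-checking every time
// a resource finishes (`getPendingRequests` remains a placeholder).
function waitForNetworkQuiescence() {
  return new Promise((resolve) => {
    const check = () => {
      if (performance.getPendingRequests().length === 0) {
        observer.disconnect();
        resolve();
      }
    };
    const observer = new PerformanceObserver(check);
    observer.observe({ entryTypes: ['resource'] });
    check(); // in case the network is already quiescent
  });
}
```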

Nicolás: Can a low-priority async task work?

Nic: Immediate will be better.

*Let’s continue offline*
hasPendingUserInput feedback
<https://www.google.com/url?q=https://docs.google.com/presentation/d/1yZYsoOJMysQdOSINLOAgHEQW_dUQrTEY97zVytbAIhE/edit?usp%3Dsharing&sa=D&ust=1547157796499000>
- acomminos

Andrew: Nate and I prototyped it and have some feedback. Proposed as a more
intelligent way for schedulers to yield. Want to get it into the React
scheduler. It might make more sense to key it on DOM UI events.

Want it to be extensible, so we won’t have to add every new event to the
spec. A wildcard event may be worthwhile. Any particular feedback?

Two options to spec it:

Option A: if there is pending user input, the UA must report it. Option B:
the UA may report it.

A would be better for developer consistency. B would give UAs more freedom
to heuristically change the events exposed.

What are the browsing contexts in which events should be exposed? The
broadest scope is best for devs, but can have privacy implications. For a
foreign origin, it can expose data, e.g. a user’s password length in an
iframe.
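
A minimal sketch of the scheduler use-case described above; the method name,
its placement on `performance`, and the event-type argument are assumptions
based on the proposal as presented, not a finalized API:

```
// Hypothetical: yield between work chunks whenever user input is pending
// (`hasPendingUserInput` and its placement are placeholders).
async function runChunkedWork(tasks) {
  while (tasks.length > 0) {
    if (performance.hasPendingUserInput(['mousedown', 'keydown', 'touchstart'])) {
      // Let the pending input be handled before doing more work.
      await new Promise((resolve) => setTimeout(resolve, 0));
    }
    tasks.shift()();
  }
}
```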

Shubhie: The event is on `window`?

Andrew: yeah

Shubhie: Seems strange to put it on `performance`

Andrew: Sure. Maybe Tim & Nicolás can explain why it’s better there.

Tim: Other scheduling primitives are on `window`, right? So it’s OK to put
it on `window`.

Shubhie: Maybe a new `scheduling` place to put those APIs.

Todd: Agree that it shouldn’t be on the performance object

*everyone agrees*

Tim: The scope can be tricky in the Chrome implementation. Currently we
don’t know the target frame in the cross-origin, same-process case.

Andrew: The implementation would be simpler if we could ignore cross-origin,
but we can’t.

Todd: Why a sequence?

Andrew: Imagine events like drag and drop. They may have higher priority.

Yoav: Next steps?

Andrew: Can open an issue on Tim’s repo and iterate there.

Yoav: I’ll talk to Tim to see if the repo can be moved to a more official
venue.

Received on Thursday, 10 January 2019 21:04:53 UTC