Re: WebPerfWG call - Mar 16th @ 7am PT / 10am ET

Minutes are now available:

     Linked to from our WebPerf WG Agenda document 
<https://docs.google.com/document/d/10dz_7QM5XCNsGeI63R864lF9gFqlqQD37B4q8Q46LMM/edit#>
     Published to the web-performance Github meetings page 
<https://w3c.github.io/web-performance/meetings/>
     ... and copied below:


    Participants

  * Chengzhong Wu, Fergal Daly, Ming-Ying Chung, Noam Helfman, Nic
    Jansma, Yoav Weiss, Carine Bournez, Barry Pollard, Pat Meenan, Sia
    Karamalegos, Annie Sullivan, Andrew Galloni, Beri Lee, Justin
    Ridgewell, Michal Mocny, Scott Haseley, Sean Feng, Amiya Gupta, Alex
    Christensen, Alon Kochba


    Admin

  * Next meeting: March 30, 2023 @ 11am ET / 8am PT
  * TPAC 2023

  * Sevilla, Spain, 11-15 September


    Minutes


      AsyncContext - Chengzhong & Justin (TC39)

  * Chengzhong: Working on AsyncContext proposal
  * ...
  * ... Implicitly propagates values through callstacks
  * ... Captures the global context so it is available in the returned
    callback
  * ... Has run() and get() methods for get/set operations on the
    instance (see the sketch at the end of this section)
  * ... run() runs a callback w/ value set in context
  * ... get() method returns the value in the current context of the
    instance
  * ... Call it with different expected values and context values
  * ... In this function we're waiting on a promise, and the context is
    persisted across the async boundaries, even if it's being called in
    a sync way
  * ... Values in context are persistent across async continuations
  * ... Helpful when we are tracing soft navigations
  * ... Critical to know, when the HTML was updated, which operation
    initiated these chains of interaction
  * ... This can also be useful for transitive task attribution
  * ... e.g. the runtime schedules tasks w/ priorities, but task
    priority attribution is not transitive at the moment
  * ... We can perform fetch and wait for the response, and the priority
    can still be observed
  * ... This can be useful for execution priorities and fetch
    priorities, and even used in privacy protection propagation
  * ... In OpenTelemetry, libraries save their spans in AsyncContext
    and can retrieve those spans to see what started the interactions
  * ... Cannot introduce new parameters to existing libraries, so they
    can use AsyncContext to propagate their spans
  * ... Propagation of spans via OpenTelemetry can utilize this
    AsyncContext even for nested fetches
  * ... Can help distinguish the initiator of the fetch call
  * ... Seamless without user-code changes
  * ... Generate spans as shown in the example
  * ... User Request plus their detailed network timings
  * ... OpenTelemetry support of Long Tasks Initiator
  * ... Tracking Long Tasks effectively to see where they were spawned
    from
  * ... Roughly equivalent to a new PerformanceObserver for LongTasks,
    but no way to tell whether the task function was called in a
    tracing span
  * ... If integrated with AsyncContext, userland application monitoring
    can tell the LongTask initiator in the PerformanceObserver callbacks
  * ... Cannot tell exact initiator of the ResourceTiming entries
  * ... Tagging with AsyncContext can help figure out the cause without
    looking at just times
  * Yoav: Thank you for outlining all of the use-cases
  * ... For ResourceTiming, we have an open issue around adding more
    initiator data for ResourceTiming.  We have initiatorType, we'd like
    to add more data to the entry.
  * ... For some of those things, it may be better to bake in that
    attribution that you're looking for, rather than jumping through
    JavaScript hoops to get it.
  * ... I'd be interested to know if anyone in the room collecting RUM
    has use-cases that they've tried to address
  * Nic: This could help with a couple of things we’re doing. For
    monitoring soft navigations, without additional information we’re
    looking for route changes and observing changes on the page. But
    things can happen outside of the user’s activity (e.g. timers).
    Those can extend the duration of time that we’re looking at
  * … user interactions and how they relate to resource load and long
    tasks can make RUM more precise by keeping track of things that are
    the “child of the click”
  * Justin: the use case - you have running JS code but trying to
    understand what spawned the JS code? You wouldn’t be able to inspect
    the global state from outside the promise.
  * … you would be able to know that whatever’s currently executing was
    spawned from a user click
  * … Inside the running code you’d be able to know the context, but if
    you just have a callback, you can’t determine the context that
    spawned it
  * Nic: As a RUM observer, we’d be treating the runtime as a black box.
    I was imagining us listening to clicks on the page and mutation
    observers. If these activities were happening, we could see the
    groups of async contexts and group them by that
  * Justin: we could expose it on the callback, so you’d be able to know
    that this task was spawned from that context
  * Nic: Would an observer also have access to the context in this proposal?
  * Justin: You would not be able to inspect the state inside the
    callback, unless it’s running.
  * … If the callback calls into your code, you’d be able to observe state
  * Nic: In the context of PerformanceObserver, would the context be
    available?
  * Justin: You’d be able to expose your own context on the performance
    observer, but not get the context that spawned the entry
  * Nic: As a RUM provider, we’re observing, not creating
  * Justin: As a library you can create an async context, instrument
    click events, and then inspect the state of your own asyncContext
  * Chengzhong: OpenTelemetry does that and instruments entry points to
    be able to inspect code
  * Nic: I’d have to better understand it. If there are ways we can pay
    attention to this without creating our own contexts, that’d be
    interesting.
  * Michal: PerfObserver registers a single observer once. If you’re
    creating new async contexts when new events fire, are they linked?
  * Justin: You would instrument your events and call context.run with
    the callback code. Then whenever that callback is called later,
    you’d know which asyncContext was run
  * … Whenever the “done” event is detected, you’d be able to know which
    value was placed in line 3 of the slide’s example
  * Michal: PerfObserver is not part of that context though
  * Chengzhong: We’d need a separate proposal for PerformanceObserver
    to capture that context
  * Nic: Feedback?
  * Chengzhong: https://github.com/tc39/proposal-async-context
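
A minimal sketch (in JavaScript) of the run()/get() pattern described
above. It assumes an AsyncContext instance constructed as new
AsyncContext(), where run(value, callback) makes the value visible via
get() inside the callback and in any async work it spawns; the exact
constructor and method signatures may differ as the TC39 proposal
evolves, and the button element, startSoftNavigation helper, and URL
below are purely illustrative:

    const clickContext = new AsyncContext();

    // Instrument the entry point: run the handler body inside the
    // context so everything it spawns inherits the click metadata.
    button.addEventListener("click", () => {
      clickContext.run({ type: "click", ts: performance.now() }, () => {
        startSoftNavigation(); // illustrative helper
      });
    });

    async function startSoftNavigation() {
      await fetch("/route-data"); // illustrative URL
      // Even after the await, get() still returns the click metadata,
      // so this continuation can be attributed to the originating
      // click without threading a parameter through every call.
      console.log(clickContext.get());
    }

Outside of a run() callback (for example in an unrelated
PerformanceObserver callback), get() would not return this value, which
matches the limitation Justin describes above.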


      PendingBeacon design questions
      <https://docs.google.com/presentation/d/1w_v2kn4RxDmGQ76HAHbuWpYMPj7XsHkYOILIkLs9ppY/edit#slide=id.p> -
      Fergal & Ming-Ying

  * Ming-Ying: Working on PendingBeacon
  * ... Motivation is to have a more reliable way for beaconing
  * ... Want to provide an alternative API
  * ... Reliable mechanism for delaying request-sending until page
    discard (unload/bfcache)
  * ... Request needs to be kept alive until it's finished
  * ... New PendingRequest, subclass of Fetch's Request
  * ... A subset of Request's fields is updatable here
  * ... pending state tells whether the request is still pending
  * ... And provide a sendNow() method
  * ... Transforms the PendingRequest object into a regular Fetch
    Request object
  * ... Setting sendAfterBeingBackgroundedTimeout
  * Fergal: API is very similar to what we've been proposing before,
    PendingBeacon.  We've moved away from an object that manages the
    whole thing, to a Request that goes into Fetch.
  * ... It's familiar to before, but transitioned to this
  * ... The Explainer still describes the original one, this is more recent
  * Ming-Ying: Constructor takes only these fields.  Others like
    keepalive are not available.
  * ... Examples here (see the sketch at the end of this section):
    creating a PendingRequest for when the page gets discarded
  * ... Example of sending a PendingRequest immediately
  * ... You can update the beacon data when it's still pending
  * ... Cancel or abort a request
  * ... After the page has been hidden for 1 minute, it'll kick off the
    beacon sending process
  * ... More concrete example
  * ... Would like feedback on the API shape
  * Noam: If there's no backgrounded timeout, when does it get sent
    (unload?)
  * Ming-Ying: Basically unload, an "implicit" unload.  But browsers can
    do more, i.e. crash recovery, etc.
  * Fergal: For a page in bfcache, there's no unload.  But we will send
    a beacon
  * Noam: Not during the "unload" event but something similar
  * Fergal: At document destruction
  * Noam: Feedback on the API shape, I'm a little confused why going
    with Subclassing, it shadows a lot of stuff from Request.  Changes
    the meaning in many ways.  Is there an advantage?
  * Ming-Ying: One reason is we want to use Fetch API
  * ... We don't want the user to be able to configure other fields
  * Fergal: The previous proposal didn't do this at all, so we've had a
    strong push from Apple to try to use the Fetch API
  * ... It's possible to not subclass and to say Fetch would take X or Y
  * Noam: Or a mix-in style where if Request has an AbortSignal it has a
    Pending signal.  An extension rather than a subclass.
  * ... Regular request but something else can happen with it
  * ... A beforerequest callback kind of thing
  * ... Subclassing gets confusing at times, with shadowing things
  * Fergal: We'll consider that
  * Noam: Happy to discuss offline too
  * ... When I see shadows, I feel disoriented
  * Ming-Ying: Take feedback and see if we can achieve our goal
  * ... Don't want the regular Request here
  * Fergal: Maybe not a subclass but an alternative
  * Nic: started playing with PendingBeacon, which seemed ergonomic
    enough, this one is also fine
  * … Any differences in functionality?
  * Fergal: achieves all the same things
  * Ming-Ying: +custom headers, from fetch() integration
  * Noam: Any use case for using GET with this? Of doing something with
    the response?
  * Fergal: This is for things that shouldn’t have a response. But if we
    are going to resolve the response when the fetch happened, there’s a
    question of what the response should be. Empty seems more consistent
  * Ming-Ying: Didn’t previously consider that approach; it looks like
    we’d get a response. Need to make it clear in the spec that using
    that won’t get you a promise
  * Fergal: Either never resolve or always resolve to empty. Unless
    there’s a good reason to do otherwise
  * Nic: the timeout parameter sounds like it’s not brought forward and
    has to be handled in userland?
  * Ming-Ying: Can be done with setTimeout
  * Nic: Looked like you could update the body of the object and that
    was a stream. Does that have a benefit over the older method? Does
    the stream allow us to do compression?
  * Ming-Ying: Should support the same API as the old PendingBeacon, a
    stream may not be appropriate
  * Alex: Also had a question about the readable stream - how can we
    enforce a 64KB limit if it’s a readable stream, unless we read the
    stream right away
  * Alex: Weird, usually it’s read as it’s sent
  * Alex:  May say that if it’s a readable stream, we have to read it
    right away
  * Ming-Ying: maybe it shouldn’t be a readable stream
  * Fergal: Limits are an interesting question. How do we handle this?
    Can easily reach 64KB, but don’t want people racing into it.
  * … Should we flush beacons early if we’re reaching the limit? This is
    tricky
  * Nic: Where should additional feedback go?
  * Fergal: We can create an issue
  * ... https://github.com/WICG/pending-beacon
  * Ming-Ying: comment on proposal
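
A sketch (in JavaScript) of the PendingRequest shape discussed above.
The constructor options, the fetch() integration, and how a pending
body is updated are all still open design questions, so apart from
sendNow(), pending, and sendAfterBeingBackgroundedTimeout (which were
mentioned in the discussion) every name below is an assumption, not
the proposal itself:

    const controller = new AbortController();

    // Create a request that the browser keeps and sends at page
    // discard / bfcache eviction, or after ~1 minute in the background.
    const beacon = new PendingRequest("/collect", {
      method: "POST",
      body: JSON.stringify({ event: "pageview" }),
      sendAfterBeingBackgroundedTimeout: 60_000,
      signal: controller.signal,
    });

    fetch(beacon); // hand it to fetch; assumed integration point

    // While it is still pending, the payload can be updated
    // (method name is hypothetical):
    if (beacon.pending) {
      beacon.setBody(JSON.stringify({ event: "pageview", lcp: 1234 }));
    }

    // To flush immediately (e.g. on an explicit "send now" signal):
    beacon.sendNow();

    // Or, to drop the beacon entirely instead:
    // controller.abort();

Per the discussion, the fetch response would either never resolve or
resolve to an empty response, so nothing here should depend on reading
it.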


      LCP Entropy

  * Ian: Looking at bits-per-pixel (bpp) for LCP consideration (see the
    illustrative calculation at the end of this section)
  * ... Ran an experiment on Chrome stable and found that the number of
    origins that had good LCP changed by less than 1%
  * … So want to start excluding LCP candidates that are less than
    0.2 bpp
  * Nic: Are you publishing this?
  * Ian: Working on it
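
An illustrative bits-per-pixel calculation (in JavaScript), assuming
bpp here means the encoded image size in bits divided by the displayed
pixel area; the exact definition used by the experiment was not covered
in the discussion:

    // Hypothetical helper: encodedBytes could come from a
    // ResourceTiming entry's encodedBodySize.
    function bitsPerPixel(encodedBytes, displayWidth, displayHeight) {
      return (encodedBytes * 8) / (displayWidth * displayHeight);
    }

    // e.g. a 10 KB image displayed at 800x600:
    bitsPerPixel(10_240, 800, 600); // ~0.17 bpp, below the proposed
                                    // 0.2 threshold, so it would be
                                    // excluded as an LCP candidate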

- Nic
https://nicj.net/
@NicJ

On 3/15/2023 12:52 PM, Nic Jansma wrote:
> Hi everyone!
>
> On the agenda 
> <https://docs.google.com/document/d/10dz_7QM5XCNsGeI63R864lF9gFqlqQD37B4q8Q46LMM/edit?pli=1#heading=h.osvewfb7hvdz> 
> for our next call (Mar 16th @ 7am PT / 10am ET) we will discuss:
>
>   * PendingBeacon design questions
>   * AsyncContext
>
> Please note the earlier time-slot to accommodate our speakers.
>
> If you have additional items, please add them to the agenda 
> <https://docs.google.com/document/d/10dz_7QM5XCNsGeI63R864lF9gFqlqQD37B4q8Q46LMM/edit?pli=1#heading=h.osvewfb7hvdz>.
>
> Join us <https://meet.google.com/agz-fbji-spp>!
> - Nic https://nicj.net/ @NicJ

Received on Friday, 17 March 2023 13:28:06 UTC