Re: WebPerfWG call - Dec 8th @ 8am PST

Minutes are now available:

     Linked to from our WebPerf WG Agenda document 
<https://docs.google.com/document/d/10dz_7QM5XCNsGeI63R864lF9gFqlqQD37B4q8Q46LMM/edit#>
     Published to the web-performance Github meetings page 
<https://w3c.github.io/web-performance/meetings/>
     ... and copied below:

Participants

  * Nic Jansma, Yoav Weiss, Philip Tellis, Mike Henniger, Noam Helfman,
    Tim Kadlec, Amiya Gupta, Carine Bournez, Sia Karamalegos, Aoyuan
    Zuo, Jeremy Spurlin, Katie Sylor-Miller, Sean Feng, William Liu,
    Abin Paul, Lucas Pardue, Patrick Meenan, Timo Tijhof, Rafael Lebre,
    Giacomo Zecchini, Steven Bougon, Barry Pollard, Benjamin De Kosnik


    Admin

  * Next meeting: January 5th, 2023 11am EST / 8am PST


    Minutes


      LCP mouseover heuristics
      <https://github.com/w3c/largest-contentful-paint/issues/108>

  * Yoav: A lot of ecommerce sites are using zoom widgets that load
    larger images than any image loaded before
  * … These are being counted for LCP
  * … At the same time, these widgets are driven by interaction, which
    we didn’t initially anticipate
  * … The interaction triggers an extra image load that is counted as LCP
  * … In Chromium, I’m experimenting with a heuristic that follows that
    path: if an image is loaded shortly after a mouseover is triggered,
    that image is discounted for LCP purposes [a rough sketch follows
    this section]
  * … The plan is to improve that implementation to use the Task
    Attribution we talked about before
  * … Detection is time-based right now
  * … Tracking the causality chain would be better
  * … Solicit feedback on that approach, or alternative approaches
  * … Algorithm isn’t specified yet but would be
  * Katie: Not sure where we ended up, we experimented with hover
    causing a video to autoplay, hadn’t thought about this before in
    terms of LCP
  * … Might be worth adding video as well as images to the heuristic
  * Yoav: Great point
  * … We’re also thinking about videos in terms of LCP, having the
    first frame count as LCP
  * Katie: Not sure if the experiment is live on the site or if it failed
  * … But initial implementation loaded static image first, then loaded
    background video and on-hover would switch between image and video
    (instead of poster)
  * Yoav: A sample of that would be awesome
  * Barry: You mentioned it’s onload, best practice would be to preload
  * … What if it’s not an image but a huge div?
  * Yoav: The current implementation looks at an image that’s added to
    the DOM (an HTML element or a background image) at the point where
    it’s added to the DOM, not when it’s loaded from the network.
    Preload is still counted.
  * Nic: It’s triggered from the mouseover. Does the mouseover need to
    be related to the LCP?
  * Yoav: Mouseover over current LCP candidate.
  * … Hoping to tighten it further with Task Attribution
  * Timo: We want to avoid a situation where performance is artificially
    reported as better just because the user moved their mouse.  We’d
    want a way to establish that the hover intentionally loaded
    something more
  * … I can see scenarios going either way, depending on how the site
    is built
  * Yoav: Hard to measure intention, but causality is something we can
    get in place
  * … Right now those user interactions are causing a later, larger
    LCP.  The time at which it happens is whenever the user decides to
    move their mouse
  * … There’s significant bi-modality in sites where this is happening
  * … Goal is to reduce noise, not increase it
  * Amiya: You could imagine a hover, you’re hovering something all the
    time.  You’re reading an article and an ad loads, causing a shift,
    and it’s not related
  * … Another question is around resources loaded later: if other
    unrelated resources load, are they also excluded?
  * Yoav: Right now it’s correlated just on time; later it would use
    Task Attribution
  * … One complexity with Task Attribution is that it would make a full
    LCP implementation dependent on TA, which is not yet specified (and
    only implemented in Chromium)
  * … Hoping to fix the non-specified part
  * … The spec could allow either time-based correlation or causation
  * … Implementations take up what they can from it
  * Michal: I want to clarify the proposal: if a mouseover is over “an”
    LCP candidate, is that only the latest LCP candidate or any element
    that could have been LCP?
  * Yoav: Current implementation is any LCP candidate on the page
  * Michal: Which would change depending on the order of how things load
  * Yoav: Potentially
  * Michal: I originally understood it backwards. If an LCP candidate is
    hovered over, and something gets rendered as a result, it would be
    discounted
  * … There is an LCP candidate, you hover, and LCP from that point is
    ignored
  * Yoav: Currently it’s any LCP candidate and I could trim down to the
    latest LCP candidate based on feedback
  * Michal: What are these hover widgets? Are they the current LCP, or
    are they a zoomed-out image that wasn’t the LCP?
  * … It makes sense that loads within a defined duration from the
    first hover shouldn’t be counted as LCP
  * Barry: If you look at Amazon for example, you can hover over the
    current product image
  * … Or if you hover over other images, they come into focus, but LCP
    stays the same
  * Yoav: With Task Attribution in place, hovering over any image that
    triggers another image could be considered
  * Sia: I think the main product image can do this on many platforms
  * Katie: I've seen main product images have zoom on hover on other
    ecommerce sites
  * Sia: Example of changing the LCP image based on thumbnail hover:
    https://www.homedepot.com/p/RYOBI-ONE-18V-Cordless-High-Pressure-Inflator-with-Digital-Gauge-Tool-Only-P737D/307627867
  * Yoav: Thanks for the feedback!
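
As a rough, non-normative sketch of the time-based heuristic described
above, here is how the correlation could look from page script. The
500 ms window, the choice to skip rather than discount a candidate, and
the reportLcpCandidate hook are assumptions for illustration; the actual
heuristic lives in the browser, not in page script.

    let lastMouseoverTime = -Infinity;
    document.addEventListener('mouseover', () => {
      lastMouseoverTime = performance.now();
    }, { capture: true, passive: true });

    new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        // loadTime is 0 for text candidates; fall back to the render time.
        const candidateTime = entry.loadTime || entry.startTime;
        const sinceHover = candidateTime - lastMouseoverTime;
        // Assumed 500 ms window between the mouseover and the image load.
        if (sinceHover >= 0 && sinceHover < 500) continue;
        reportLcpCandidate(entry); // hypothetical reporting hook
      }
    }).observe({ type: 'largest-contentful-paint', buffered: true });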


      Extending the NavigationTimingType · Issue #184 ·
      w3c/navigation-timing · GitHub
      <https://github.com/w3c/navigation-timing/issues/184>

  * Barry: Few use cases where navigation types aren’t covered
  * … Using web-vitals, some of these navigation types give different
    experiences and we want to measure differently
  * … Restore: can be detected via document.wasDiscarded. Should we add
    a “restore” navigation type rather than use the original type?
  * … Similarly BFCache restores, which can be distinguished from
    back_forward based on the persisted flag on the pageshow event [a
    detection sketch follows this section]
  * … Want to implore RUM providers to segregate data into these buckets
  * … Prerender navigation type, based on link rel=prerender.
    Technically, if we said they were prerender, that would not be part
    of the spec.
  * … Should we update NavType?
  * Yoav: I think we had this discussion for prerender a few months
    ago.  The conclusion we ended up with is that the fact a page was
    prerendered is orthogonal to the navigation type.  You could have a
    reload that was prerendered, or a history navigation that was
    prerendered.  Maybe less so for reload, but it’s orthogonal to
    navigationType, so it should be set on a different bit.
  * … Looks a lot like the wasDiscarded bit
  * Barry: A prerender is always a navigation, you can’t have a Reload
    Prerender
  * Yoav: You could have a prerendered page that is a history navigation?
  * ... Not BFCache, but a history navigation to a prerendered page
  * ... I can dig up that discussion and link to the issue
  * Nic: From a RUM perspective, we’re looking at data points, and will
    display this data as distinct types (e.g. BFCache is different). For
    everything that’s very different from other navigation types, it’d
    be good to have it as a separate navigation type
  * Barry: Restore is very similar to reload in many ways
  * Yoav: But it’s dissimilar to force-reload, where we don’t have a type
  * Barry: Add that to the issue
  * Amiya: in the prerender case, you have 2 states - page prerendered
    and activated. From a measurement perspective, the activation is
    when it matters.
  * … Is “prerender” enough or do you want to signify more states?
  * Barry: We tell devs to use activationStart. We have the prerender
    state and we don’t use it; I wonder why
  * Philip: Is it still set in the visibility state?
  * Yoav: I think it was removed at some point.
  * Philip: The move from prerendered to visible is when we consider
    the page to have been activated
  * Barry: I believe that changed
  * … Comment on the issue. I’ll follow up on the prerendered state
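
A sketch of how a RUM library could bucket the extra navigation types
discussed above, using signals that exist today. document.wasDiscarded
and activationStart are Chromium-only, and the bucket names
("prerender", "restore", "back-forward-cache") are illustrative rather
than proposed spec values.

    function extendedNavigationType() {
      const nav = performance.getEntriesByType('navigation')[0];
      // Prerendered-then-activated pages report a non-zero activationStart.
      if (nav && nav.activationStart > 0) return 'prerender';
      // Pages reloaded after the tab was discarded.
      if (document.wasDiscarded) return 'restore';
      return nav ? nav.type : 'navigate';
    }

    // BFCache restores don't create a new navigation entry, so they are
    // typically detected from the pageshow event instead.
    window.addEventListener('pageshow', (event) => {
      if (event.persisted) {
        reportNavigationType('back-forward-cache'); // hypothetical hook
      }
    });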


      Body size from TAO => CORS
      <https://github.com/whatwg/fetch/pull/1556>

  * Yoav: We’ve been talking about body sizes being more meaningful
    regarding the resource itself rather than timing
  * … When we expanded TAO semantics to also cover body size, it was a
    mistake
  * … Want to make that restriction be based on CORS rather than TAO [a
    sketch of the difference follows this section]
  * … Based on data from HTTP archive, if we made the switch today to
    only expose this data on CORS resources rather than TAO, the main
    resources where we’d take the hit and we’d have less data would be
    CSS resources and images
  * … For everything else we have more CORS than TAO
  * … Before this call I realized the sheet with all the data analysis
    is not public, but I can take an AI (action item) to make it public

  * Post-meeting note: Now it is! Sheet
    <https://docs.google.com/spreadsheets/d/1dMY6kf3BkTwN4yeOMmq4HDhRjXMEwJBh69ezK-fIcdE/edit?usp=sharing>,
    query
    <https://console.cloud.google.com/bigquery?sq=328831863138:b357f60f57d64e0a9a0260b872e54fc7>

  * … If we make this move from TAO to CORS, beneficial in long run
  * … Rolling it out could result in some data gaps
  * … One way to resolve that is to expose Body Size for both CORS and
    TAO for some amount of time
  * … And then remove TAO bits later
  * … The public resource proposal from TPAC would let a resource be
    declared public from the server side without changing the way it’s
    loaded
  * … Presumably a lot of images would be served over image CDNs,
    relatively easy to declare them as public alongside TAO declaration
  * … Then we’d be able to have a path to migrate away from TAO to CORS
    or CORS-safe responses
  * … That seems like a reasonable path
  * … Wondering about your thoughts
  * Nic: I have to think through it to understand the effect on the
    data.  The transition plan would mean two transition points rather
    than one: get more data, then get less.
  * … could cause more confusion and more communication cycles
  * … Once you’re able to share the data, that would help
  * Yoav: AI to get the data and query shared
  * Nic: Besides images and CSS, this would open more measurement, which
    is great!
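
For reference, a sketch of the distinction being discussed. Today a
cross-origin resource exposes encodedBodySize/decodedBodySize when its
response carries a matching Timing-Allow-Origin header; under the
proposal those fields would instead require the resource to be fetched
with CORS. The hostname and header values below are illustrative.

    // Today:  response header "Timing-Allow-Origin: *"   (TAO opt-in)
    // After:  <img src="https://cdn.example/hero.jpg" crossorigin="anonymous">
    //         plus "Access-Control-Allow-Origin: *"      (CORS-fetched)
    for (const entry of performance.getEntriesByType('resource')) {
      // Both size fields read as 0 when the required opt-in is missing.
      console.log(entry.name, entry.encodedBodySize, entry.decodedBodySize);
    }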


      Specify the behaviors of requestStart and responseStart for
      Prefetch
      <https://github.com/w3c/resource-timing/issues/360>

  * William: We have prefetches for resources ahead of navigation
  * … We detect when the user is about to navigate and load the
    response ahead of time
  * … Problem is we have some negative values because network requests
    happen X amount of time before navigation
  * … Should we preserve negative values, change them to zero, or treat
    them as an HTTP cache case, so we preserve the ordering of
    individual events?
  * … Two ways of looking at this, one is treating it similar to HTTP cache
  * … We’re not hitting network stack
  * … Else we preserve the negative values and keep all the metrics in there
  * … Could affect users that are looking at timestamps
  * Nic: Negative timestamps would affect the currently deployed RUM
    library we have. The pipeline would discard such timestamps as
    invalid. Other libraries are probably doing similar things, and
    would need to adjust
  * … otherwise, treating it as a cached case kinda makes sense. Would
    be great to know that the resource was a prefetch still
  * William: There’s an attribute on ResourceTiming that would reveal that
  * Nic: the most compatible thing then would be to not have negative
    timestamps and we’d capture this flag
  * Barry: What’s the benefit of going negative? Is there a use case?
  * William: That’s our question to you.
  * Barry: Don’t see major benefits. Prefetch shouldn’t be special
  * … Would there be gaps, if you prefetch and don’t use the resource
  * William: No real benefit to measure the gap, because it’d measure
    user behavior
  * Nic: Would exposing the negative timestamps expose private information?
  * William: It could be a privacy leak indeed
  * Katie: If we don’t expose negative values, would the information
    about connection and DNS timing be gone and invisible? Or would
    there be another way to access that information?
  * … If we go negative, is the information gone?
  * … Agree that negative values would blow up pipelines. They’d be bad,
    but if they are the only way to expose the information, maybe?
  * Nic: If something is prefetched, and we don’t do negative
    timestamps, I’d expect DNS and connection timestamps to all be the
    same number + deliveryType attribute. Is that what happens?
  * William: Not exactly the current behavior. The DNS and TCP values
    are zero, thus all set to fetchStart when exposed; requestStart and
    responseStart are negative values but exposed as zeros, as they
    refer to when the resource was used.
  * Yoav: you’d lose the info but does it matter?
  * William: We also lose information for the HTTP cache case. Does/did
    it matter?
  * Nic: We treat cache hits differently, and display them differently.
  * Timo: Redirect start time was also a similar issue. Service worker
    requests are also similar when responding synthetically
  * … Comes down to the intent of the developer measuring it: do you
    want to measure the user experience or what happened behind the scenes
  * … In Wikipedia we measure what’s presented to the user, but may want
    DNS timing as a separate entry
  * Yoav: Regarding DNS we had a discussion at TPAC around aggregating
    those times and reporting in aggregate
  * … Could be delivered as part of a different channel
  * William: Is there consensus to treat it as an HTTP cache hit? [a
    RUM-side sketch follows this section]
  * <Everyone nods>
  * Nic: Very similar to the HTTP cache case as it happens outside the
    time frame. It’s different because in the HTTP cache you can have
    one hit amortized across many resources. Here you have a single
    resource load that happened in the previous navigation related to
    this one.
  * … But in my mind it still makes sense to treat it as a cache hit
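
A rough sketch of the RUM-side handling that the cache-hit treatment
above implies: clamp the early phases to fetchStart rather than
surfacing negative values, and carry a flag for the prefetch. Using
deliveryType with a 'navigational-prefetch' value is an assumption
here; the attribute that ultimately reveals the prefetch was not pinned
down in the discussion.

    for (const entry of performance.getEntriesByType('resource')) {
      // Assumed flag; deliveryType currently reports '' or 'cache', and a
      // prefetch-specific value is not yet settled.
      const wasPrefetched = entry.deliveryType === 'navigational-prefetch';
      // Mirror the HTTP cache case: DNS/TCP/request phases collapse to
      // fetchStart instead of going negative.
      const requestStart = Math.max(entry.requestStart, entry.fetchStart);
      const responseStart = Math.max(entry.responseStart, entry.fetchStart);
      reportResourceTiming({ // hypothetical reporting hook
        name: entry.name,
        wasPrefetched,
        requestStart,
        responseStart,
      });
    }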


      https://github.com/WICG/pending-beacon/issues/56

  * Yoav: Ran into a request for Beacon API to enable Compression
  * … Putting a lot of hope into Pending Beacon, I think it makes sense
    to think about compression in that context
  * … A few approaches we could take here
  * … Can take in a readable stream
  * … One option: use the Compression Streams API and compress before
    it’s loaded up on the beacon [a sketch follows this section]
  * … Other option: Browser-side compression mechanism before beacon is
    sent, based on a request provided to the API
  * … The application commits to the server side being able to process
    that
  * … Since there’s no content negotiation for request compression
  * … Don’t want to handle compression on transport layer
  * … Wondering regarding thoughts on both those fronts
  * … POST-level compression may not be CORS-safe, which may trigger a
    preflight
  * Nic: Like the idea of the browser handling it. Would imagine some
    use cases to queue data, replace it, etc. Precompress in that case
    would not be efficient
  * … Browser magic is better.
  * Katie: Right now we send a lot of little beacons. With the ability
    to let data sit and wait, trusting the browser, the payload sizes
    would get larger
  * … If the browser compresses, that can increase the timeouts
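
A sketch of the first option above: compressing the payload in page
script with the Compression Streams API before it goes out. The
keepalive fetch stands in for a Pending Beacon send, which is still
being designed; the endpoint, the Content-Encoding header, and the
server's ability to decode gzip request bodies are assumptions, since
there is no content negotiation for request compression and the extra
header is not CORS-safelisted.

    async function sendCompressedBeacon(url, payload) {
      // Compress the serialized payload with the Compression Streams API.
      const compressed = new Blob([JSON.stringify(payload)])
        .stream()
        .pipeThrough(new CompressionStream('gzip'));
      const body = await new Response(compressed).arrayBuffer();
      // keepalive fetch as a stand-in for a future Pending Beacon send.
      return fetch(url, {
        method: 'POST',
        keepalive: true,
        headers: {
          'Content-Type': 'application/json',
          'Content-Encoding': 'gzip', // assumes the endpoint accepts this
        },
        body,
      });
    }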



- Nic
https://nicj.net/
@NicJ

On 12/7/2022 11:18 PM, Yoav Weiss wrote:
> Hey folks,
>
> Apologies for the late reminder, but we'd be gathering 
> <https://meet.google.com/agz-fbji-spp?authuser=0&hs=122> 
> tomorrow/today to talk webperf!
> On the agenda 
> <https://docs.google.com/document/d/10dz_7QM5XCNsGeI63R864lF9gFqlqQD37B4q8Q46LMM/edit?pli=1#heading=h.l93t69cwdo11> 
> we have discussions on mouseover heuristics for LCP, moving body size 
> to be CORS-protected rather than TAO-protected, isTrusted scrolling, 
> and subframe timing.
>
> Hope to see y'all there!!
>
> Cheers :)
> Yoav

Received on Monday, 19 December 2022 17:41:02 UTC