[Minutes] WebPerf WG TPAC F2F Meeting 06 Nov 2017

Hi,

The minutes from the 06 Nov 2017 WebPerf WG TPAC F2F Meeting are 
available at:
   https://www.w3.org/2017/11/06-webperf-minutes.html
and
   http://bit.ly/webperf-tpac17

also in plain text below:

--------------------

WebPerf WG TPAC F2F Meeting

06 Nov 2017

Agenda

Attendees

Present
Ryosuke, Alex, Ilya, Tim, Shubhie, Addy, Philip, Yoav, Charles, Nathan, 
Vladan, Qingqian, Todd, Nolan, Xiaoqian, Boris, Smaug, Jungkees, Ojan, 
Dominic, Benjamin, Guido, Harald, Fadi, Josh
Regrets
Chair
Todd, Ilya
Scribe
Ilya
Contents

Topics
Long Tasks (Shubhie)
Time to Interactive (Tim)
Lifecycle (Shubhie)
Paint API (Shubhie)
Device Info (Shubhie, Nate)
User Timing L3 (Tim, Nolan, Philip)
Server Timing (Charles)
Priority Hints (Yoav, Addy)
Content Policies (Josh)
setImmediate (Todd)
Summary of Action Items
Summary of Resolutions
<scribe> scribe: Ilya
Long Tasks (Shubhie)

Updates on Long Tasks 
https://docs.google.com/presentation/d/1OAQMgmnLo03O9BUr-mPNeA_Py0QQNuLt1n88cxCgfYs/edit#slide=id.p

Shubhie: shipped in M58, ~1.2% adoption in Chrome
... goal was to

Nic: using LT as input for TTI
... customers excited about LT data
... don’t have customers looking at the raw data, hoping to build some 
real-time dashboards to figure out what’s causing long-task pain
... trying to figure out how to present it to make the data actionable
... need to build systems to analyze the data
... script URL and JS line number is very interesting (in v2)
... right now you have to play games to figure out what’s causing pain

Shubhie: sounds like script URL is the missing bit

Nic: yes

Nate: ditto, much more useful once we have script URL and line no
... we’re using it to validate existing tooling

Todd: FB already instrumented the entire product, so there isn’t a lot 
that’s new
... whereas the API aims at cases where you can’t instrument everything 
on the page

Nate: to the extent that we can instrument things we can’t get already, 
it’s more interesting to FB

Nic: providing “total CPU” might also be useful

Shubhie: v2
... proposing to add taskType to help explain different types
... also scriptUrl to indicate culprit
... also extending TaskAttributingTiming
... implemented script compile and script execute, script function-call 
requires sampling thread
... script parse is important, captured in script-compile
... Long Tasks is script-oriented, because the premise was that scripts 
are the main culprit
... consumers also expressed interest in delayed events (input) and 
rendering
... also animation jank
... LT doesn’t address the full scope, we need additional APIs
... Delayed events > Event Timing API
... Delayed rendering > Slow Frames API
... we don’t need or want one uber API, but it would be nice to tie 
these together
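
For context, the shipped v1 entries are consumed via PerformanceObserver 
roughly as sketched below; this is a minimal sketch, and the 
taskType/scriptUrl fields discussed above are v2 proposals that are not 
assumed here.

  // Minimal sketch: observe long tasks as shipped in v1 and log the
  // attribution that is available today (container info only).
  const longTaskObserver = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      // A long task is any main-thread task that took more than 50ms.
      console.log('long task', entry.startTime, entry.duration);
      for (const attr of entry.attribution) {
        console.log('  container:', attr.containerType,
                    attr.containerSrc, attr.containerId, attr.containerName);
      }
    }
  });
  longTaskObserver.observe({entryTypes: ['longtask']});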

Ilya: feedback from other implementers?

Todd: LT is behind PO for us... we need to get that implemented first

Ryosuke: not clear on the concrete use cases
... animation jank is understandable, but delayed events are less clear

Yoav: the main use case is to identify when the main thread is busy, so 
that analytics providers can detect that

Tim: a long task that doesn’t block input or rendering may be a no-op 
but it’s a risk that we want to highlight to developers

Ryosuke: you need attribution to make this actionable

Ilya: what’s missing in current attribution?

Ryosuke: a sampling thread might be too costly?
... the use cases are not strong enough to justify the overhead?

Shubhie: we don’t need a full-on profiler, we can make this actionable 
without…

Ryosuke: has anyone used this feature to fix stuff?

Shubhie: we have SOASTA experimenting

Nate: we find problems all the time via our instrumentation

Shubhie: currently the API doesn’t do full-on profiling and sounds like 
Ryosuke is skeptical

Ben: we have a bug for it in FF, nobody assigned to it yet but no 
objections

Todd: asked around internal MSFT teams, everyone was interested in LT 
but so far everyone ended up instrumenting own code similar to FB

Yoav: we’re not exposing the data in LT, planning to expose it in a few 
months

Ryosuke: really want to see some examples of customers solving real 
problems with this API

Shubhie: we have Ads team experimenting with LT, need to check in on 
status

Ryosuke: also not yet clear if we’re better off with separate APIs or 
one to tackle event timing and other bits
... the difference here is how we aggregate and present the data, it can 
be hard to piece together this data when you have separate APIs
... separately, we might also want to expose processing times for 
resources

Yoav: this is probably separate from LT

Shubhie: yeah we talked about ability to link expensive resources back 
to RT

Todd: not just JS tasks?
... high-level feedback is that JS is not the only culprit, maybe in 
hindsight “Long Script”

Shubhie: right, initial intention was to tackle all tasks

Ryosuke: e.g. CSS parse

Nate: JS-in-JS profiling

Vladan: … experiment, we add a preamble to every JS
... worker turns on flag every ~10ms and pulls stacks to worker thread
... we found it works pretty well
... size overhead is ~30-50kb, which is acceptable
... loading overhead is ~10ms, also acceptable
... aggregation is on the server-side
... we found the stacks to be extremely useful
... very surprised that the overhead is so low
... we’d love to have a native API to capture stacks
... right now we’re limited to 10-deep on stacks, would like to have 
more
... also, no visibility on DOM operations
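
A rough, hypothetical illustration of the preamble approach Vladan 
describes: a worker flips a shared flag roughly every 10ms, and a 
preamble injected into each function records a stack only when the flag 
is set. Names are made up and this is not Facebook’s actual code; 
SharedArrayBuffer availability varies by browser.

  // Worker flips a shared flag ~every 10ms; the preamble injected at the
  // top of each instrumented function captures a stack when the flag is
  // set, so stacks are only taken while JS is actually running.
  const sab = new SharedArrayBuffer(4);
  const flag = new Int32Array(sab);
  const workerSrc = 'onmessage = (e) => {' +
    '  const f = new Int32Array(e.data);' +
    '  setInterval(() => Atomics.store(f, 0, 1), 10);' +
    '};';
  const samplerWorker = new Worker(URL.createObjectURL(
      new Blob([workerSrc], {type: 'text/javascript'})));
  samplerWorker.postMessage(sab);

  const samples = [];
  function __profilePreamble() {  // hypothetical name for the injected preamble
    if (Atomics.load(flag, 0) === 1) {
      Atomics.store(flag, 0, 0);
      samples.push({t: performance.now(), stack: new Error().stack});
    }
  }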

Nate: we made it work with relatively low overhead

Ryosuke: low overhead is relative, we reject patches today that regress 
perf by 5%

Vladan: we don’t need to profile every page

Tim: what is overhead when profiling is enabled?

Vladan: 30ms increase in “display done” (entire page painted)

Addy: which devices?

Vladan: we’re targeting desktop devices; have not looked at different UA 
yet

Shubhie: there is an interesting strategy here for LT, we could sample 
when you have a long task (e.g. when you have a 200ms task)

Ryosuke: except we have to disable a bunch of optimizations, e.g. 
inlining has to be disabled.

Vladan: what about error.stack work now in JSC?

Ryosuke: I think it’s the same problem and we have to undo some 
optimizations

Todd: our engine can inline and get JS stacks, not sure about other 
stacks

Nate: we plan/hope to open source some of this work in the future

Shubhie: any other feedback from customers?

Yoav: attributing and event timing jank

Nic: we haven’t investigated Event Timing yet
... for LT we’re still in early stages, don’t have customer feedback yet
... today they’re consuming it as input to TTI metrics
... they have access to raw data, but don’t think they’re looking at it 
yet
... hoping to have feedback in a few months

Time to Interactive (Tim)

Updates on TTI 
https://docs.google.com/presentation/d/1EnbHI5UlG8qgMq0gvTJOxICc4lH5A95Ud85kwze0ogc/edit#slide=id.p

Tim: goal for this discussion is to agree on general direction; not 
design the API... yet.
... sites optimize for visibility at the expense of interactivity
... e.g. Airbnb, Amazon
... Airbnb: time to “Airbnb interactive” is 7s before the page is 
actually interactive
... the Google polyfill might be pessimistic, but still a lot more 
accurate

Todd: we observed same pattern across MSFT sites

Nolan: ditto, we find sites firing TTI in the middle of a long JS task

Tim: current implementation...
... lower bound: right now using FMP, but we need a different one
... quiescent window: 5s containing no >50ms long tasks and at most 2 
in-flight requests
... network quiet window:
... when we first implemented we didn’t pay attention to it
... but we found that on lower-end devices with terrible connections
... the network is so slow that we’d claim we’re interactive and then 
jank comes later
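
A rough sketch of the quiescent-window search described above, using 
only long task entries; the network-quiet condition is omitted because 
there is no web-exposed signal for in-flight requests today, and the 
threshold values are assumptions rather than spec values.

  // Find the first quiet window after the lower bound and report TTI as
  // the end of the last long task before that window.
  function estimateTTI(lowerBound, longTasks, quietWindowMs = 5000) {
    let candidate = lowerBound;
    for (const task of longTasks) {     // entries sorted by startTime
      if (task.startTime - candidate >= quietWindowMs) break;  // quiet window found
      const taskEnd = task.startTime + task.duration;
      if (taskEnd > candidate) candidate = taskEnd;
    }
    return candidate;  // assumes quiet eventually follows the last task
  }

The long task entries would have to be collected with a 
PerformanceObserver (as in the Long Tasks section above), since they are 
not retrievable after the fact from the performance timeline.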

Ryosuke: how do you know which resources are important?

Tim: we don’t, we assume if 2+ are outstanding then at least one of them 
is important
... alternative lower bounds: we can’t use FMP; FCP or DCL is possible 
but then it looks like we fire TTI a little too early

Ben: based on own experience with perf reports, site custom metrics make 
it harder to grok where the issue is

Ryosuke: different sites have different requirements for what’s 
important

Tim: why a loading metric?
... we want something to be lab measurable, as well as RUM accessible
... encourages developers to break up their scripts

Yoav: you can’t really guarantee “once loads, feels great”

Tim: let’s talk about arbitrary thresholds
... some folks found 5s unusable and switched to 500ms
... also, hard to agree on when page is “loaded”
... as one idea, we could make all the parameters configurable
... e.g. when to start looking, what the lower bound should be
... we have a polyfill on Github, there are two issues with it
... 1) can’t observe network requests
... 2) FCP is not quite good enough
... ideally we need something that enables developers to observe network 
requests

Ben: what about cases where you’re spamming with postMessage?

Tim: you could have 50ms of postMessage events, but browsers should 
prioritize user input

Ryosuke: we don’t do that, sadly, would like to

Ben: not sure if Chromium does it either, we should think about event 
storms

Ojan: the scale of what we see on the web today
... it’s very common to have 5s+ of script during page load
... we’re at a point that we need to put out fires

Ben: sure, we might make it explicit that we don’t address this

Todd: also want to make sure that we spec behaviors that work for all 
implementations

Ryosuke: also need to think about gaming - e.g. shuffle the tasks

Tim: it is gameable, if you’re willing to punt all of your work past the 
window
... the math changes if we change from 5s to 500ms

Todd: the heuristic was chosen to incentivize loading behaviors?
... there are lots of things about this score that feel a bit messy; what 
other definitions have we considered?

Tim: we tried many approaches to the loading window of interactivity
... we looked at changing the impact of loading tasks as time progresses

Todd: is time to interactivity the best way to measure interactivity?
... e.g. users expect all sites to be interactive within X
... we could compute fraction of non-interactive time

Tim: but non-interactive time at different points has a different impact

Ben: if you have a non-load metric, it feels like you could define a 
loading version

Todd: time to interactive is measuring the 5s window
... but the concept of interactivity time is more generalized

Ben: right, developers want to measure interactivity more broadly

Ryosuke: on some sites, not being interactive is OK because you just 
want to scroll

Tim: we heard this feedback, but at some point the user will still want 
to tap

Ryosuke: biasing toward visible+scrollable can be a good tradeoff for 
some sites

Ojan: the common pattern of visible content before interactive is often 
amplified on 3G or slow devices

Qingqian: different sites have different definitions of TTI

Nate: we have our own definition of TTI; letting developers distinguish 
what we call done vs. ongoing interactivity would be useful, e.g. we try 
to chunk up loads and keep things interactive
... our current TTI is our ~display done: if it’s visually complete we 
want it to be interactive
... BigPipe ensures that we don’t flush any DOM until script is there
... after we say “done” we still load code for a while
... we would fail the current definition of TTI

Ryosuke: something like an interactivity score (% of time) could be very 
useful

Tim: estimated event queuing time: what is the estimated amount of time 
an event would be waiting, could be built on top of long tasks

Ryosuke: right, this is getting into queuing theory

Ojan: another piece of anecdata
... we looked at user annoyance with bad performance
... we had ads with 10s loops, and we couldn’t detect statistically 
meaningful annoyance
... but we followed up with users, “oh, it’s just how the web works”
... it’s not that perf doesn’t matter, it’s the perception
... the house is on fire, we need to do something about this
... the degree that we see this during load is much more pronounced
... I feel that this is an urgent problem that we should prioritize for 
developers

Qingqian: can we measure delay time?

Tim: this is possible, I built a polyfill and it was gnarly to get 
right, lots of things to instrument

Todd: ok, what feedback are we looking for here?

Nic: some feedback from our experience
... we built a polyfill in Boomerang (open source) project
... very similar to what Tim described
... however, a few differences in definition because as a RUM script we 
have to be x-UA
... we have dozens of customers using the polyfill
... all interested in TTI metric itself; some have been asking us for 
this for a while
... we can’t rely on LT and we’re polling rAF and setTimeout to 
approximate LT
... this adds overhead but seems to be reasonable so far
... one big problem is how long we’re willing to wait to get this data
... the goal of our analytics is to beacon out the data as soon as 
possible
... people close browsers, go to homescreen, etc.
... we need to change thresholds, and we allow customization
... many customers use 500ms or 1000ms as the interactive window
... if we’re building this as an API it would be nice to allow 
customization
... also, we’re not paying attention to the network
... we don’t have complete visibility into the network activity (not a 
new challenge)
... having an API for monitoring activity (RT only notifies you at the 
end) would be nice
... also, not entirely convinced that network activity would have 
meaningful impact on TTI
... you could have a dozen images being loaded and that shouldn’t have 
impact on interactivity
... our algorithm has a lower bound: we’re using FCP or DCL as lower 
bound
... we also allow sites to customize lower bound — e.g. hero images or 
framework ready
... this allows sites to signal when they believe they should be 
interactive (e.g. framework loaded and all event handlers registered)
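
A minimal illustration of the polling approach Nic mentions for 
approximating long tasks without the Long Tasks API; this is a sketch, 
not Boomerang’s implementation (which also uses requestAnimationFrame), 
and the interval and threshold values are arbitrary.

  // Schedule a periodic timer and treat any late firing as evidence the
  // main thread was busy (an approximate long task).
  const CHECK_INTERVAL = 100;   // ms, arbitrary
  let expected = performance.now() + CHECK_INTERVAL;
  (function poll() {
    setTimeout(() => {
      const now = performance.now();
      const lateness = now - expected;
      if (lateness > 50) {
        console.log('approximate long task, ~' + Math.round(lateness) + 'ms');
      }
      expected = now + CHECK_INTERVAL;
      poll();
    }, CHECK_INTERVAL);
  })();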

Yoav: many of the polyfill challenges wouldn’t be an issue for browser 
implementation
... the timeout (e.g. lowering it to 500ms) is one issue

Philip: have we considered remeasuring after the window?

Yoav: we looked at this, but we need a “dead man’s beacon” that could be 
queued up when the page is unloading

Todd: most sites want metrics that are in the browser, getting them to 
instrument is really hard

Ojan: as a general direction, does this look right?

Ryosuke: we are interested in a good loading metric

Nic: another factor to consider is if the user has actually interacted, 
processing input can trigger its own waterfall of work

Yoav: choose your own adventure doesn’t make the metrics transferrable 
between sites

Ryosuke: right, you can tweak the metrics to make TTI look good

Todd: the idea of pushing interactivity earlier in the page load is a 
solid goal
... I’m not yet clear about the current heuristics
... I’m still wondering if interactive time is a more generalizable 
concept
... we can still define TTI on top of it

Ben: ditto, feels very fragile today, would like to see how this 
correlates to other bits in the platform

Tim: if we demonstrate TTI polyfill correlation with business metrics, 
is that what we need?

Ben: no, I believe there is a business case, but I want to make sure 
that it’s generalizable across browser engines (fewer heuristics) and 
different sites

Ojan: confused by Chrome scheduler
... the fundamental problem is that long JS tasks can’t be broken up

Ben: right now it’s very heuristic driven, I agree the house is on fire 
and we should fix this

Todd: if we define some periods of interactivity, we could then define 
an interoperable definition that works across different engines.

Shubhie: exposing requests as they start

Yoav: I’m wondering if we ought to tackle this via Resource Timing

Tim: our proposal was very similar, with one entry at the start and at 
the end

Nate: FetchObserver is less complicated

Todd: interested folks to drive this?
... Nic, Tim, Yoav

Lifecycle (Shubhie)

Updated proposal 
https://github.com/spanicker/web-lifecycle/blob/master/README.md

Shubhie: two high level issues
... memory for high tab usage
... second is responsiveness on mobile
... lack of incentive for developers: why should you release resources, 
etc.
... tl;dr: limited resources causing bad user experiences
... we don’t have good APIs to let developers know when to run the right 
tasks
... e.g. system-initiated termination, apps in background can be stopped 
& terminated
... we want to deliver callbacks and let developers know to hook in to 
do right work at right times
... we still need to enable legitimate background work, need to provide 
some opt-outs until we have the right APIs in place
... we made some recent updates to accommodate use cases around pausing 
frames, the idea is to be able to stop misbehaving frames — we’ll talk 
more about this later
... STOPPED: typically hidden frames will be stopped
... DISCARDED: typically frames will be moved to discarded.
... lots of confusion about nomenclature in this space
... system exit (intervention): tab discarding, no guaranteed callback
... user exit: user closes tab, browser should guarantee that one 
callback will fire and finish
... unexpected exit: crashes, no guarantees

Dominic: is passive possible?

Ojan: yes, on most mobile

Alex: with unload handlers we often have cases where the page executes 
costly scripts
... could we consider killing such activity after some period of time

Todd: some UAs already impose such things, as part of this work we could 
try to formalize some of the definitions

Shubhie: the suggestion is that we shoot for at least one event, but we 
can’t guarantee that all of the events fire

Todd: worth taking as an open issue.. we should document the best 
practice for how to tackle this today.

Ben: a few browsers implement BF cache, would we generate onstop?

Shubhie: initially we said yes, but current thinking is “no” because 
lots of complications

Ben: would love to see this working with BF cache

Ojan: pagehide/pageshow didn’t work well with this model
... our goal is to enable developers to save state if the user comes back
... work in BF cache doesn’t directly map to this because browser saves 
much of that

Ryosuke: we have page cache but we can clear it at any point
... proposed transition would still be useful

Dominic: do you want different code in pagehide/show vs onstop?
... to me that should be deciding factor

[Web Assembly WG folks joined]

Shubhie: not sure how this maps to bf cache, need to dig into use cases 
more

Ben: in gecko if you have an ongoing transaction we will discard the tab
... when you hit the back button, the more we can restore state the 
better

Ryosuke: for us the biggest problem is that we prune the cache under 
memory pressure

Shubhie: reasons against pageshow/hide
... doesn’t fire when you go to background, instead fires on load
... frame could be visible and we may want to stop it (not mapping to 
Page visibility)

Guido: how much time do we give to these callbacks (e.g. onstop)?

Shubhie: we need to have restrictions and capabilities
... 5s is too long, need to collect more data
... maybe we don’t allow network or other APIs
... capability: waitUntil to do async work w/ IndexedDB writes, etc.
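
A purely hypothetical sketch of the callback shape under discussion: the 
'stop' event and its waitUntil() exist only in the proposal, and the 
helper below is made up. The visibilitychange handler is the closest 
hook available today.

  function saveCriticalState() {     // hypothetical helper
    return Promise.resolve();        // stand-in for an IndexedDB write
  }

  // Proposed shape (not a shipped API): bounded async work on stop.
  window.addEventListener('stop', (event) => {
    event.waitUntil(saveCriticalState());
  });

  // Closest portable hook today:
  document.addEventListener('visibilitychange', () => {
    if (document.visibilityState === 'hidden') saveCriticalState();
  });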

Guido: what would you do when this expires?

Dominic: write to DB, then do X
... browser work we don’t cut off (e.g. db write), but if allotted time 
has expired maybe we don’t run the other handlers

Todd: two options: you could be killed, or you could be suspended

Ojan: not as concerned about this, we could let the task finish and not 
run the next task
... in practice, we could probably leave this (discarding vs stopping) 
to the UA

JungkeeS: In SW we do something similar; we have waitUntil and we still 
have a hard limit

Ben: what else do we restrict?
... if you’re in the middle of waitUntil and user refocuses the tab, how 
do you handle?
... do you wait for the onstop to complete vs..

Qingqian: what if tabs are scripting each other?
... windows sharing event loops

Ben: that’s a great question: how do we handle cross-global accesses?
... these things show up often in bug reports

Shubhie: for stopping, our initial implementation is to fire same event 
across all
... for pause frame things get more complicated because we’re 
intentionally getting them out of sync

Todd: browsers that suspend tabs already have solutions for some of 
these, but not necessarily specced
... are SW’s referenced at all here?

Shubhie: we want to keep workers in sync
... for sharedworker we’re disconnected

Ben: going back to BF cache model, you get disconnected

Nate: do SWs have a way to observe this? Should they?
... SW might want to persist state

Shubhie: the issue here is we’re back to running more script under 
pressure

Ben: some developers have tried to build instrumentation on top of SW to 
figure out which clients they’re controlling, except today there is no 
way to detect when it goes away
... maybe there is something here that could be useful
... we have existing spec issue because of BF cache, it would be nice to 
spec this in a way that allows us to tackle both bf cache use case and 
what we’re talking about here

Boris: where should developers be saving state?

Shubhie: in pagevisibility == hidden

Boris: so the use cases for stopped are?

Shubhie: if the app wanted to continue to do background work, it can do 
the handoff

Boris: is this something they’d need to duplicate with unload?

Shubhie: no

Todd: on desktop where you have large N tabs, we want to pause
... today developers may rely on PV visibility

Boris: also, hidden can have input focus in some cases
... might want to capture in the diagram

Ojan: we’ll update the diagram, it doesn’t actually change any of the 
callbacks

Todd: let’s review the issues
... ensuring ordering of events
... bf cache questions
... fixing up the state model: hidden and passive may not be separate 
states
... requirements on restrictions of onstop: what can be executed, etc; 
hard problem
... cross-global interactions
... behavior if it’s brought back to life while executing onstop
... worker management, in particular sharedworker management

Fadi: how many of these issues are new?

Ben: we tackled some of these in bf cache, but not well specified

Ryosuke: we see a lot of malicious actors that try to exploit callbacks

Paint API (Shubhie)

Shubhie: shipped in ~M61 → ~4% of pages
... some big libraries adopted it, AMP is using it on one of the primary 
metrics
... Google apps are picking it up now as well
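
For reference, a minimal sketch of how the shipped paint entries are 
consumed:

  // Observe first-paint and first-contentful-paint.
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      // entry.name is 'first-paint' or 'first-contentful-paint'
      console.log(entry.name, entry.startTime);
    }
  }).observe({entryTypes: ['paint']});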

Ilya: Mozilla has implementation behind a flag

Alex: not implemented in Safari

Ryosuke: one reason is that it’s still not clear what first paint means

Todd: old page is painting, need to clarify when the navigation
... we need to define precisely what will trigger a new event
... e.g. fragment navigations don’t trigger

<scribe> ACTION: Ryosuke to file some bugs to clarify

Ben: could hook it up to environment settings object

Ryosuke: HTML spec has many concepts of navigate, we should pin it down
... what about offscreen content
... what about svg outside of viewbox
... what about canvas?
... do you take into account visibility:hidden?

Tim: can probably reword to canvas that’s not had any paint operations, 
or some such

Harald: we have an implementation that you can enable via a pref
... not according to the spec, it doesn’t account for background
... we’re not far off, and interested in improving paint-based timings

Ben: what about tests?
... do you delay resources and test that paint is delayed?

Tim: yes, that’s roughly what we have in web platform tests today.

Todd: if you ctrl-click a link, background edge cases...

Philip: was the load slow or just in the background?
... we should report only if visible?

Ryosuke: back-forward cache, what happens there?

Ben: if we base it on environment settings object then it shouldn’t

Device Info (Shubhie, Nate)

Shubhie: we shipped device memory, both the JS and CH APIs
... the use case for the CH version is to communicate low-memory devices
... we had some concerns on fingerprinting from Boris
... we addressed that by limiting the range and granularity

Fadi: we limit to 8GB, but renderers typically don’t use more than 4GB
... and the 256MB lower bound is the minimum footprint for a modern browser
... limited value set but should cover the key use cases
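
A minimal sketch of the two surfaces as shipped; the low-memory branch 
and its helper are illustrative assumptions, not part of the spec.

  // Client-hint surface (HTTP), per the Device Memory spec:
  //   response header:           Accept-CH: Device-Memory
  //   subsequent requests send:  Device-Memory: 0.5
  // JS surface: coarse memory in GiB, within the limited value set.
  function serveLiteExperience() { /* hypothetical low-memory path */ }

  if (navigator.deviceMemory && navigator.deviceMemory <= 1) {
    serveLiteExperience();
  }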

Ryosuke: for iOS devices the limits are always the same
... on desktop we may have different limits

Fadi: the intent here is to communicate device capability not exact 
amount of memory you can use

Ben: where is the difference between 4 vs 8GB

Ojan: if you’re building a game you can imagine customizing…

Todd: FB shared data before that indicated that memory is best indicator 
of device class

Nathan: you can get debug info via a WebGL context and get GPU info
... but that’s really janky and we’d like to have a better, 
lower-overhead API
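
For context, the janky workaround referred to here is roughly the 
following sketch, using the WEBGL_debug_renderer_info extension:

  // Spin up a throwaway WebGL context just to read GPU strings.
  function getGPUInfo() {
    const gl = document.createElement('canvas').getContext('webgl');
    if (!gl) return null;
    const ext = gl.getExtension('WEBGL_debug_renderer_info');
    if (!ext) return null;
    return {
      vendor: gl.getParameter(ext.UNMASKED_VENDOR_WEBGL),
      renderer: gl.getParameter(ext.UNMASKED_RENDERER_WEBGL),
    };
  }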

Alex: WebGPU group may be already working on something like this?

Nathan: CPU is the other missing bit, clock speed of the current core... 
ideally.

Todd/Ryosuke: which core?

Nathan: current one…

Yoav: separate specs create some issues with cacheability
... but coarse type masks data, so not sure

Addy: why did we punt on device class?

Nate: class really depends on the use case

Todd: anyone using device memory?

Nate: yes, we’re looking at this data to identify critical path 
differences
... plan is to serve different content
... we also need client hints

Todd: users on desktop?

Nate: yep, that’s what we’re using this for
... critical path looks different for different devices
... we can already measure CPU and GPU info
... the users who would benefit most are the ones we hurt the most by 
collecting this data

Todd: the number of sites that spin up WebGL just to capture this data is 
very high

Yoav: analytics providers can also use this for splitting tests/perf data

Ryosuke: I’ve found number of cores to be a strong indicator of 
performance

Ben: I’m still concerned about tail end risk of exposing this data
... I argued for hardwareConcurrency to help figure out number of 
workers to run

Nate: ditto for GL contexts, 3D video, expensive JS, etc.

Shubhie: we heard similar concerns, to bring it back to where the value 
is
... are there particular signals that are more valuable than others?

Nate: GPU info is useful for different use cases
... it’s hard to give a prioritized shortlist
... if we’re only adding one more thing, GPU info as client hint 
probably

Harald: WebGL content is interesting because unlike images the hint may 
not be that useful

Ryosuke: I can see the WebGL signals being exposed... so many people are 
already using it.

Harald: we have some protections in Firefox that prevent WebGL queries in 
sensitive contexts

Todd: FB examples are great, but it’d be great to see more sites 
benefitting from these APIs
... I’d love to see examples of sites where my users would get a better 
experience

User Timing L3 (Tim, Nolan, Philip)

User Timing L3 
https://docs.google.com/presentation/d/1d64Y4rtLCxobGgljVySU2CJpMPK5ksaiZuv3ka1dCVA/edit#slide=id.p

Tim: two enhancements we care about
... allow measurement between two timestamps
... enable reporting arbitrary metadata
... TTI polyfill for airbnb is an example, because it requires 
retroactive timestamps
... the idea is to…
... allow mark to accept an optional timestamp

Nolan: currently mark/measure return void, but devs often want to get 
the mark back immediately

Ryosuke: there is compat risk with extending measure to accept a 
timestamp
... is there any reason not to add a new method?
... e.g. add one method for names vs. timestamps

Philip: it could simplify some things, but the ergonomics are mixed: 
multiple methods to do a similar thing

Todd: we could gather telemetry to figure out if we can get lucky on 
this one, or not

Tim: one way to add arbitrary metadata
... proposing adding a dictionary with detail, similar to CustomEvent
... alternatively we could pass in a dictionary as the second parameter
... dictionary?

Ben: prefer dictionary

Ryosuke: ditto

Tim: we’d still need to do compat investigation
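
One possible shape for the two enhancements, as discussed above; option 
names are not final, the compat questions are still open, and the values 
below are purely illustrative.

  // Mark with a retroactive timestamp and arbitrary detail, and get the
  // entry back instead of void.
  const ttiMark = performance.mark('tti', {
    startTime: 2500,                    // ms since time origin (retroactive)
    detail: {source: 'tti-polyfill'}    // arbitrary metadata
  });

  // Measure between two timestamps (or mark names) with detail.
  performance.measure('to-tti', {
    start: 0,
    end: ttiMark.startTime,
    detail: {device: 'low-end'}
  });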

Server Timing (Charles)

Charles: in the latest iteration we’re naming parameters
... this is to allow the syntax to be extensible
... we have an implementation underway in CR
... the DevTools implementation would need to be updated

Yoav: the feedback from the TAG was that we should think beyond duration
... hence updated to extensible syntax

Charles: duration and description are two optional params we recognize
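
For reference, a minimal sketch of the named-parameter syntax and of 
reading the entries from script (they surface on Resource/Navigation 
Timing entries):

  // Example response header with named, extensible parameters:
  //   Server-Timing: db;dur=53;desc="DB queries", cache;desc="HIT"
  for (const nav of performance.getEntriesByType('navigation')) {
    for (const st of nav.serverTiming || []) {
      console.log(st.name, st.duration, st.description);
    }
  }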

Ryosuke: use cases beyond devtools?

Yoav: analytics providers could gather the data

Harald: adoption?

Charles: there’s an uptick; we will gather more adoption data

Alex: should we consider shorter names?

Todd: processing model, unclear on when it should run
... we should get better hooks

Ryosuke: we should probably drop the trailer bits

Priority Hints (Yoav, Addy)

Proposal 
https://docs.google.com/presentation/d/1uqBberwDm9qZUEU8KGiH_7g8zhXQSA9NxTwKwWTcqY0/edit#slide=id.p

Yoav: there is no way to communicate importance of resources
... browsers have heuristics based on document structure, type of 
resource, etc
... we want to enable developers to communicate priority via markup and 
script

Ben: could also be a signal into TTI heuristics? E.g. ignore low 
priority

Yoav: examples: hero images, non-critical fonts, critical async script
... the last example is what lots of folks use preload for today
... we want to enable early discovery of resources without them 
contending for resources against high priority resources
... ideally this should work via markup and fetch initiated requests
... e.g. in SW today it’s not possible to alter priority, you can only 
forward the priority
... proposal is to add an attribute to communicate importance
... with the gotcha that we want to prevent footguns, e.g. someone 
downgrading a critical script to low priority
... strawman: importance parameter
... <img src=foo importance=high>
... <iframe src=foo importance=unimportant>
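
The script-initiated form from the same strawman might look like the 
sketch below; this is proposal syntax, not a shipped API, and unknown 
fetch() options are simply ignored today.

  // Strawman: hint that this fetch can yield to more important work.
  fetch('/api/non-critical.json', {importance: 'low'})
    .then((response) => response.json())
    .then((data) => console.log('low-priority data arrived', data));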

Ben: default vs low, there is more complexity here to explore?

Yoav: we’re not talking about explicit priority, the goal is to 
communicate relative priority

Ben: ideally what we expose here should map to Fetch spec

Ryosuke: skeptical that we need 5 buckets? Maybe low, normal, high?
... why would you put something on the page that’s “unimportant”

Nate: for us, we need two buckets: stuff that we need now vs later

Ben: we should have “omg” as a priority value; it’s short too :)
... erring on the side of fewer values is a good place to start
... to inform this, we should look at the Fetch event? What do current 
browsers expose?
... ideally this should be exposed on SW
... I can raise this at SW group tomorrow
... current priority value looks like a passthrough value

Ryosuke: exposing granular Fetch priorities can get complicated quickly

Todd: not as concerned about footguns
... locking in the definitions can help get interop right
... otherwise developers will build for one browser

Ben: can we look at DOM generated priority across browsers to inform 
this?
... one step could be to sum up different heuristics across browsers?

Todd: I think we have four groups?

Harald: we have 5 groups (highest, high, normal, low, lowest)

Yoav: out of scope, lazy loading of images
... also, script execution order or dependencies

Todd: are there some hero sites where this would make a difference?

Addy: google photos, twitter lite, and a few others

Content Policies (Josh)

Proposal 
https://docs.google.com/presentation/d/12W0EwMsyaSQjbWu-RjT-1URSmdLSkWzzvKzi2CvK7uo/edit#slide=id.p

Josh: high-level problem.. Why is my phone warm?
... sometimes 1p/3p frames are using a lot of network or CPU
... what can we do?
... we have Feature Policy / sandbox to limit features on a frame
... what about resource consumption?
... UA should be able to enforce this on behalf of the user
... also content may want to enforce policies on itself and 3Ps
... Transfer size policy
... the threshold on the frame is X; fire an event if the threshold is 
exceeded
... very simple, we may not be able to unload, but knowing that the limit 
is exceeded is still useful
... yes, this will leak some data about embedding resource
... we can limit number of events and pad the estimates to mitigate some 
of this
... you could imagine specifying this via attributes or headers
... [demo: main page loads iframe, which loads data and triggers limit]
... what do we do about it, other than fire an alert?
... e.g. we could pause the document
... there is a related use case of off-screen frames, possibly
... what do we pause? Maybe… loading, rendering, script
... for script we let the task finish but pause other tasks until 
resumed
... pause loading means
... current requests continue, new requests are paused
... not sure what to do for WS, WebRTC, etc
... pause rendering:
... the frame stops its rendering pipeline
... input events from the frame are discarded
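
A purely illustrative sketch of how the transfer size policy could 
surface; the attribute and event names below are invented for 
illustration and do not exist in any spec draft or browser.

  // Hypothetical markup: <iframe src="ad.html" transfersize="300000"></iframe>
  const frame = document.querySelector('iframe');   // e.g. a third-party frame
  if (frame) {
    frame.addEventListener('transfersizeexceeded', (event) => {  // invented name
      // Possible reactions discussed: report it, or pause the frame.
      console.log('frame exceeded its transfer budget', event);
    });
  }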

Tim: some events are expected to be paired, should think through gotchas 
there

Josh: [demo]
... all network requests are queued, input events discarded

Ben: when paused, do we still show the same elements? Might be confusing 
to the user

Ojan: if browser is pausing we’d like to provide some visual affordance 
to the user
... if script generated then we can defer to the application

Yoav: security wise, what are the implications?

Josh: the goal is to make it no worse than loading and timing

Alex: cache API does something very similar

Josh: other take on this… throttles?
... instead of pause we could progressively throttle

Alex: this is interesting, we already do some of this (throttling and 
pausing)

Ojan: one learning is that we need to expose when limits are enforced
... there are contractual obligations at play in some cases

Yoav: an opt-out option could be exposed to let “sensitive content” refuse 
loading

Shubhie: what are the benefits to splitting different pausing 
primitives?

Josh: could be convinced either way, one could imagine that frame might 
still be useful if you pause loading

Ben: this is separate from sandbox?

Ojan: the complete vision is we have FP to toggle features, and Transfer 
Size is for controlling bytes
... we can explain sandbox on top of FP
... the difference is sandbox is the whitelist version: you opt in to 
sandbox and have to specify what you want re-enabled, which makes it 
very hard.

Josh: I’m seeing nodding that this is an interesting place to explore

[nodding]

Ben: we do need to be careful about data leaks here
... also the web compat case and what we would break with pausing
... e.g. cross global communication, corner cases around holding IDB 
locks for hours+

Todd: neither for nor against
... TransferSize policy is completely separate
... e.g. could I use this as a user to set up a policy?

Ojan: I was picturing a default UA policy, less user policy
... also, one could imagine enforcing a policy on top-level frame (e.g. 
page is only allowed to use this much data)

Alex: exposed via onload is by itself not super compelling, we might 
want to pad onload as well
... if we can address that, interested in exploring this

Ryosuke: throttling might provide a better story in that respect

setImmediate (Todd)

Todd: sI is supposed to be like setTimeout(0) except without being 
throttled
... the danger is that sI becomes the new sT, but we already introduced 
microtasks
... which give you even more power, so we don’t believe sI would be a 
footgun
... should we take the time and effort to update sI?
... in Edge we see 25% adoption based on telemetry; it’s actually being 
called
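
For context, a sketch contrasting the scheduling primitives mentioned 
here; setImmediate is only available in Edge/IE (and Node), so it is 
feature-detected.

  // Microtask: runs before the browser returns to the task queue.
  Promise.resolve().then(() => console.log('microtask'));

  // setTimeout(0): next task, but subject to clamping/throttling.
  setTimeout(() => console.log('setTimeout(0)'), 0);

  // setImmediate: next task without clamping, where supported.
  if (typeof setImmediate === 'function') {
    setImmediate(() => console.log('setImmediate'));
  }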

Ojan: where does the event go?

Todd: if you think of the queue as FIFO, it goes at the end, but at high 
priority

Ben: we’ve resisted implementing this for a long time because of 
setTimeout(0) and throttling
... we now have better mitigation
... we’re still getting feedback from developers that it would be useful 
(and feedback to the contrary)
... I’m interested in exploring it if we can agree on definitions

Ojan: I think there’s a common misconception that sI runs as the next 
task, which is not true...

Qingqian: Promise polyfill uses setImmediate

Ben: Dominic has a queue microtask proposal, I think they may be 
orthogonal
... just want to highlight that there are dissenting opinions

Ojan: we should push this discussion to webplat group

Todd: sounds like there is interest in exploring this further

Summary of Action Items

[NEW] ACTION: Ryosuke to file some bugs to clarify

Summary of Resolutions

[End of minutes]
