Minutes (was: Re: [1/5/18] webperf group call @ 10AM PST)

available at
https://www.w3.org/2018/01/05-webperf-minutes.html

also in plain text:

WebPerf Group Call
05 Jan 2018
Agenda

Attendees
Present
Douglas, Charles, Philip, igrigorik, Yoav, Philippe, Tim, Todd, Patrick, 
xiaoqian
Regrets
Chair
igrigorik
Scribe
igrigorik
Contents
Topics
NEL
Time to Interactive - tdresser@
Merging specs
Meltdown and Spectre
next call
Summary of Action Items
Summary of Resolutions
<scribe> scribe: igrigorik

NEL
Doug: update on NEL
... one of the updates is success reporting and sampling rates
... the content of the spec is more or less aligned with what we had 
internally
... not a huge number of updates; the biggest difference is success 
reports
... that might seem a little strange, but the reason is that we found 
we need to know the rate of errors for different populations
... to have that you need a valid denominator
... in theory you can get success rates today, but that requires a 
fairly complex join
... more importantly, there is a use case where a third party is 
receiving the NEL reports; doing the join there becomes even more 
complex
... the proposal is to have success reports as well, with an "OK" 
error code
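
For illustration, a sampled success report delivered via the Reporting 
API might look roughly like this (field names assumed from the draft 
spec, not quoted on the call):

  {
    "type": "network-error",
    "age": 0,
    "url": "https://example.com/",
    "body": {
      "type": "ok",
      "status_code": 200,
      "protocol": "http/1.1",
      "sampling_fraction": 0.001
    }
  }

The "ok" body type is what marks a success report; recording the 
sampling fraction lets the collector scale sampled counts back up.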

Yoav: does that mean we trigger a report for every navigation?

Doug: no, we propose to have sampling rates, with independent rates 
for success and failure reports
... the default for failures could be 1.0
... operationally, this is how we do it today: 100% for failures and 
0.1% for successes
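
A sketch of the corresponding policy header (field spellings assumed 
from the draft; the fractions are the operational values Doug quoted):

  NEL: {"report_to": "network-errors",
        "max_age": 2592000,
        "failure_fraction": 1.0,
        "success_fraction": 0.001}

Since each report can record its sampling fraction, the collector can 
recover totals by scaling: e.g. dividing the observed success reports 
by 0.001 estimates the true success count, which supplies the 
denominator for error rates.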

Todd: did you look at client-side aggregation?

Doug: we looked at that; per-report data gives you more fidelity in 
what comes back
... we've updated the spec with the success rate logic

Yoav: CSRF... how do you tell valid reports apart from fake ones?

Doug: for our current implementation this hasn't been a problem 
because we control both the client and the server; for the public 
spec I share your concern, we might want to add a nonce or some other 
similar mechanism

Patrick: is this based on Reporting, and if not, why not?

Doug: the NEL spec is only responsible for defining errors and 
reports, and delegates delivery to Reporting
... the separate header defines the sampling rate, but it's tied to 
the Reporting API via the report-to group
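
A sketch of that split, per the Reporting API draft (the group name 
and endpoint URL are placeholders): Report-To registers the endpoint 
group, and the NEL header points at it:

  Report-To: {"group": "network-errors",
              "max_age": 2592000,
              "endpoints": [{"url": "https://report.example.com/upload"}]}
  NEL: {"report_to": "network-errors", "max_age": 2592000}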

Patrick: does it make sense to upstream this to the Reporting API?

Doug: yeah, it might make sense.

What types of errors can/should we report?

Doug: the current list should strike a fairly good balance
... we have coverage of the key segments of the connection
... e.g. TCP handshake failures; we found that SYN-ACK visibility is 
helpful
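
For a flavor of that coverage, some illustrative error codes (see the 
draft for the exact list):

  dns.name_not_resolved   the hostname could not be resolved
  tcp.timed_out           no SYN-ACK before the handshake timed out
  tcp.refused             the connection was refused by the server
  http.protocol.error     the response violated the HTTP protocol
  ok                      the request saw no network error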

Todd: we need to get the networking and security teams to review

Doug: our security team did review Domain Reliability, which 
contained much more detailed lists; with NEL this would be opt-out.

Todd: does the spec call out that the UA can/should allow the user to 
opt out?

Doug: yeah, either globally or on a per-origin basis
... also, it's HTTPS-only

Implementation status?

Doug: our goal is to land most of the code in M65; it may slip into 
M66... we had Domain Reliability, which made the implementation easier

Yoav: is there a test suite / a way to test this with the web 
platform test suite?

Doug: great question, we need to look into that

<scribe> ACTION: IG to open up GH issue about NEL WPT test

Todd: testing will be very important
... we should pull in the Apple team for review; they might have 
feedback based on their network stack (this might be a tricky spec 
for them)

Yoav: TAG review?

Patrick: tag me and/or Todd for reviews

Doug: the NEL spec itself is in good shape; on the Reporting API 
side, we might have some updates to investigate
... it would be nice to have better control over how much traffic to 
distribute amongst the upload URLs

Time to Interactive - tdresser@
Tim: at TPAC one of the points raised was whether different browsers 
prioritize input
... we ran some tests, and it looks like browsers do prioritize input

Todd: the key concern is that some sites use APIs that block input; 
based on my own tests with things like setTimeout... yes, browsers do 
prioritize
... my concern today is the 50ms threshold; I keep seeing pages that 
felt great but have long tasks pushing out their TTIs

Tim: we're continuing to experiment with the definition; nothing 
concrete yet.
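
For context, the 50ms threshold is the Long Tasks API's definition of 
a long task; a minimal sketch of observing them in a page (the 
logging is illustrative, not part of any TTI definition):

  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      // Entries are only dispatched for tasks longer than 50ms, the
      // threshold the TTI heuristics build on.
      console.log("long task: " + entry.duration.toFixed(1) + "ms at " +
                  entry.startTime.toFixed(1) + "ms");
    }
  });
  observer.observe({ entryTypes: ["longtask"] });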

In terms of moving forward.. I think we had consensus on figuring out 
our story for delayed interactivity.

Todd: on the Edge side, I'm probably not the owner; I can follow up 
with folks internally.

<scribe> ACTION: Tim to follow up with Todd & others

Merging specs
Ilya/Todd: we don't have Mozilla on the call; let's defer until we 
have them in the room

Meltdown and Spectre

Tim: for Chrome, we're still trying to figure out what we'll do 
long-term

Todd: can't comment on the long-term strategy for Edge yet

<tdresser> Existing pull request: 
https://github.com/w3c/hr-time/pull/55/files

Todd: we should update the spec to flag that 5us is not enough, but 
we don't know what the recommendations are
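
For context, 5us refers to the clamped resolution of 
performance.now(); a quick (hypothetical) way to observe a UA's 
effective granularity:

  // Collect the distinct non-zero deltas between successive
  // performance.now() readings; with a 5us clamp they cluster at
  // multiples of 0.005ms.
  const deltas = new Set();
  let prev = performance.now();
  for (let i = 0; i < 1000000; i++) {
    const now = performance.now();
    if (now !== prev) deltas.add((now - prev).toFixed(3));
    prev = now;
  }
  console.log(Array.from(deltas).slice(0, 5));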

next call
next call, tentatively: Jan 18th, exact time TBD; will follow up with 
Marcos / Moz folks to figure it out

scribe: the next call is issue triage; Todd will assemble the agenda

Summary of Action Items
[NEW] ACTION: IG to open up GH issue about NEL WPT test
[NEW] ACTION: Tim to follow up with Todd & others

Summary of Resolutions
[End of minutes]

On 2018-01-05 00:42, Ilya Grigorik wrote:
> Happy 2018 everyone! Our first WG call is tomorrow (Friday @ 10AM
> PST).
> 
> Hangout:
> https://hangouts.google.com/hangouts/_/chromium.org/webperf-wg [1]
> 
> Tentative agenda:
> https://docs.google.com/document/d/10dz_7QM5XCNsGeI63R864lF9gFqlqQD37B4q8Q46LMM/edit#heading=h.38orgzczc6zb
> [2]
> 
> As a reminder, in this call we'll focus on new and in-progress specs
> and proposals:
> 
>  * Update on Network Error Logging status + Chrome implementation
>  * Update on TTI and 2018 roadmap
>  * Check-in on fallout from Spectre and Meltdown attacks
>  * Check-in on spec merge proposal from Mozilla
> 
> ig
> 
> Links:
> ------
> [1] https://hangouts.google.com/hangouts/_/chromium.org/webperf-wg
> [2]
> https://docs.google.com/document/d/10dz_7QM5XCNsGeI63R864lF9gFqlqQD37B4q8Q46LMM/edit#heading=h.38orgzczc6zb

Received on Monday, 8 January 2018 07:09:07 UTC