[minutes] W3C Workshop on Performance 2012-11-08

W3C Workshop on Performance 11/08/2012

The W3C Web Performance working group has recently completed its second chartered period and has been gathering data to decide which areas to focus on in its third chartered period. The working group set up the W3C Workshop on Performance to hear from performance experts and web developers about performance problem areas and use cases. The following day, the working group members met for a face-to-face meeting to determine which performance ideas would be included in the new charter.



The working group would like to thank all of the web developers and performance experts who took part in this workshop - the discussions were extremely valuable! The meeting minutes here capture some of those discussions.



IRC log: http://www.w3.org/2012/11/08-webperf-irc



Meeting Minutes: http://www.w3.org/2012/11/08-webperf-minutes.html



Attendees

Alexandru Chiculita (Adobe), Ethan Malasky (Adobe), Jared Wyles (Adobe), Larry McLister (Adobe), Michelangelo De Simone (Webkit), Peter Flynn (Adobe), Zoltan Horvath (Adobe), Mike McCall (Akamai), Alois Reitbauer (Compuware), Glenn Adams (Skynav), Arvind Jain (Google), Ben Greenstein (Google), Dominic Hamon (Google), Ilya Grigorik (Google), James Simonsen (Google), Patrick Meenan (Google), Robert Hundt (Google), Ganesh Rao (Intel), Chihiro Ono (KDDI), Tomoaki Konno (KDDI), Anant Rao (Google), Kunal Cholera (LinkedIn), Ritesh Maheshwari (Google), Seth McLaughlin (LinkedIn), Viktor Stanchev, Aaron Heady (Microsoft), Derek Liddell (Microsoft), Gautam Vaidya (Microsoft), Jason Weber (Microsoft), Jatinder Mann (Microsoft), Mick Hakobyan, Andrea Trasatti (Nokia), Tomomi Imura (Nokia), Filip Salomonsson (Pingdom), Giridhar Mandyam (Qualcomm), Yosuke Funahashi (Tomo-digi), Eric Gavaletz (UNC), Karen Myers (W3C), Philippe Le Hégaret (W3C), Matt Jaquish, Paul Bakaus (Zynga)



Chairs
Jason Weber

Arvind Jain



Scribe

Jatinder Mann, Karen Myers, Paul Bakaus, Giridhar Mandyam, Alois Reitbauer, Andrea Trasatti, Mike McCall, Ilya Grigorik, Robert Hundt, James Simonsen, Ritesh Maheshwari



Agenda

1. Introduction to the Workshop

2. Comparing In-Browser Methods of Measuring Resource Load Times

3. HTTP Extension to provide Timing Data for Performance Measurements

4. Extending HTTP and HTML to enable Automatic Collection of Performance Data

5. Discussion: Expanding and Improving on Performance Timing Interfaces

6. HTTP Client Hints for Content Adaptation without increased Client Latency

7. Browser Enhancements to Help Improve Page Load Performance using Delta Delivery

8. Improving Performance Diagnostics Capabilities in Browsers

9. Improving Web Performance on Mobile Web Browsers

10. Improving Mobile Power Consumption with HTML5 Connectivity Methods

11. Memory Management and JavaScript Garbage Collection

12. Preserving Frame Rate on Television Web Browsing

13. Use Case of Smart Network Utilization for Better User Experience

14. Open Discussion



--------------------------------------------------------------------------------
Introduction to the Workshop
Presenter:          Philippe Le Hégaret (W3C)
Minutes:            http://www.w3.org/2012/11/08-webperf-minutes.html#item01
Presentation:     http://www.w3.org/2012/Talks/1107-perf-intro/

Philippe gave an introduction to the workshop attendees on the topics to be discussed through the day.

Comparing In-Browser Methods of Measuring Resource Load Times
Presenter:          Eric Gavaletz (UNC)
Subject:             Performance Metrics
Minutes:            http://www.w3.org/2012/11/08-webperf-minutes.html#item02
Presentation:     https://docs.google.com/presentation/d/1T-9cA2xyjrZIxcPEhY9kyEwjcnB0kPXpfaceBmTGWGw/edit

Eric presented a study, conducted with his colleagues at the University of North Carolina, comparing in-browser methods of measuring resource load times. The study measured how long it takes to load a resource using the DOM, XHR, and Navigation Timing APIs, and compared those results against the ground truth. They found that though these interfaces did a reasonable job of measuring timing information, measurements differed across browsers due to internal implementation differences.
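
For context, a minimal sketch (URLs illustrative) of two of the measurement approaches compared in the study: bracketing an XHR with script-side timestamps versus reading Navigation Timing attributes for the page itself:

    // Approach 1: bracket an XHR with script-side timestamps
    var start = Date.now();
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/app.js", true); // illustrative resource
    xhr.onload = function () {
      // includes queuing and script-dispatch overhead, unlike browser-internal timing
      console.log("XHR load time: " + (Date.now() - start) + "ms");
    };
    xhr.send();

    // Approach 2: Navigation Timing for the page's own load
    // (read after the load event; loadEventEnd is 0 until then)
    window.addEventListener("load", function () {
      setTimeout(function () {
        var t = window.performance.timing;
        console.log("Page load: " + (t.loadEventEnd - t.navigationStart) + "ms");
      }, 0);
    });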

Discussion Takeaways:

-          Browsers differ internally in when they consider a resource finished loading, which can produce differences in timing information when comparing this data cross-browser.

-          The Resource Timing API, available now in IE10 and soon in Chrome, is a better API for measuring resource loading information.

HTTP Extension to provide Timing Data for Performance Measurements
Presenter:          Mike McCall (Akamai)
Subject:             Performance Metrics
Minutes:            http://www.w3.org/2012/11/08-webperf-minutes.html#item03
Presentation:     http://www.w3.org/2012/11/webperf-slides-mccall.pptx


In this presentation, Mike shared a proposal for extending HTTP so that browsers would automatically send timing information (Navigation, Resource and User Timing) to a web server without the page having to explicitly call the JavaScript APIs. The proposal has three steps: (A) the UA sends a request header, Accept-Measurement, at the initiation of an HTTP session; (B) the server negotiates with the UA to determine which measurements should be sent, as well as a TTL for the data collection; (C) once all measurements have been collected or the TTL has expired, the measurements are beaconed back in an HTTP POST with a Timing-Measurements header. A hypothetical exchange is sketched below.
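
A hypothetical exchange under this proposal (the negotiation header name and all values are illustrative assumptions; only Accept-Measurement and Timing-Measurements are named in the proposal):

    GET /index.html HTTP/1.1
    Accept-Measurement: navigation, resource, user

    HTTP/1.1 200 OK
    Measurement-Policy: navigation, resource; ttl=300   (illustrative negotiation header)

    ... measurements collected, or TTL expires; then the UA beacons: ...

    POST /index.html HTTP/1.1
    Timing-Measurements: {"navigationStart":1352358000000,"loadEventEnd":1352358001850}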

Discussion Takeaways:

-          Pro: No client-side script is needed to gather the Timing data.

-          Pro: Hosting services can provide the ability to gather Timing analytics for the sites they host, purely from the server side without any change to client code. One can imagine a portal on the hosting service that enables timing information gathering.

-          Con: This proposal does "bloat" the size of the HTTP headers and moves filtering logic from the client side to the server side.

-          Con: Current analytics scripts are unlikely to go away even with this feature, reducing some of the performance benefits.

-          Con: The proposal does not currently address the case where the server processing the Timing data is not the same server as the one serving the content.

Extending HTTP and HTML to enable Automatic Collection of Performance Data
Presenter:          Philippe Le Hégaret (W3C, for Radware)
Subject:             Performance Metrics
Minutes:            http://www.w3.org/2012/11/08-webperf-minutes.html#item04
Presentation:     http://www.w3.org/2012/10/Automatic%20Web%20performance%20measurement.docx


Philippe presented a proposal, on behalf of Radware, for extending the HTTP and HTML standards to enable automatic collection of web page timing data, a concept similar to the previous presentation by Mike McCall. Radware suggests a few options for how to gather this data: (A) a new HTTP header, Performance_Reporting_Target: <URL for reporting>\r\n, or (B) a new Boolean element attribute called perfcollect. The data would be sent back to the server in an HTTP POST and would include entries from the PerformanceResourceTiming object. A sketch of option (B) follows.
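
A hedged sketch of option (B); the reporting endpoint and payload shape are assumptions for illustration:

    <!-- Option B: opt the document into automatic collection -->
    <html perfcollect>
      ...
    </html>

    The UA would later POST the collected entries, for example:

    POST /perf-report HTTP/1.1                (illustrative reporting URL)
    Content-Type: application/json

    [{"name":"http://example.com/app.js","startTime":120.5,"duration":87.2}]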

Discussion Takeaways:

-          This proposal has similar pros and cons as the previous proposal by Mike McCall.

-          Con: Parsing the entire document for Element attributes may have a performance impact.

-          Con: Setting Element attributes may not be easy for elements that are added dynamically.

-          Con: Hiding a POST within a GET is not a good idea.

Discussion: Expanding and Improving on Performance Timing Interfaces
Presenter:          Jatinder Mann (Microsoft)
Subject:             Performance Metrics
Minutes:            http://www.w3.org/2012/11/08-webperf-minutes.html#item05
Presentation:     http://www.w3.org/2012/11/webperf-list.html


This session consisted of a discussion of the ideas raised in the survey results. Three areas came up most often in the survey: expanding Navigation Timing, new performance metrics, and error logging interfaces.

Discussion Takeaways:

-          Though there was interest in a networkType (e.g., radio, wired), radioType (e.g., EDGE, 3G, LTE, Wi-Fi), and networkSpeed (running average for the last N web requests), these seemed more appropriate for the Network Information API (http://dvcs.w3.org/hg/dap/raw-file/tip/network-api/Overview.html), which already attempts to answer some of these questions. Additionally, radio type can change frequently for mobile devices and may not be as useful for post-processing.

-          There was interest in expanding the timing APIs to include chunked transfer encoding (CTE) timing.

-          Though web developers like the concept of a firstPaint attribute (time to paint the page above the fold), this is a very difficult question for a browser to answer. It is not easy to specify in the standard at what point the page has painted above the fold (e.g., pages will paint a number of times before they are fully rendered above the fold).

-          There were asks for an API that returns the true frame rate of painting to the screen. Today we can get the rate of script callbacks to estimate frame rate, but this is not the same thing as the rate of painting. The true rate of painting may be another data point indicating when application performance needs improvement, but it may not be as useful to developers as the script callback rate, which can be calculated today (see the sketch below). Also, some browsers always paint on the display's paint beat, meaning they would always report 60 FPS.
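
As a reference for the script-callback approach mentioned above, a minimal sketch that estimates frame rate from requestAnimationFrame callbacks (a vendor prefix may be required in some browsers of this era):

    var frames = 0, last = Date.now();
    function tick() {
      frames++;
      var now = Date.now();
      if (now - last >= 1000) {
        // this is the script callback rate, not the true paint rate
        console.log("~" + frames + " callbacks/sec");
        frames = 0;
        last = now;
      }
      window.requestAnimationFrame(tick);
    }
    window.requestAnimationFrame(tick);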

HTTP Client Hints for Content Adaptation without increased Client Latency
Presenter:          Ilya Grigorik (Google)
Subject:             Page Load Performance
Minutes:            http://www.w3.org/2012/11/08-webperf-minutes.html#item06
Presentation:     https://docs.google.com/document/d/1xCtGvPbvVLacg45MWdAlLBnuWa7sJM1cEk1lI6nv--c/edit


Ilya presented a proposal to allow user agents to provide HTTP client hints to the server for content adaptation. The problem today is that many different devices, with different capabilities and preferences, access the web. Web developers either load resources that may not be used by the UA or use "JavaScript loaders" to detect the UA and load the appropriate resources. This proposal allows the user agent to give the server hints about its capabilities within the HTTP headers; the server can then serve exactly the resources appropriate for the UA, reducing time spent on the wire. A hypothetical request is sketched below.
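
A hypothetical request/response pair illustrating the idea (the exact header name and syntax were still under discussion in the draft, so these are assumptions):

    GET /photo.jpg HTTP/1.1
    Host: example.com
    CH: dpr=2.0, dw=320        (illustrative hints: device pixel ratio, device width)

    HTTP/1.1 200 OK
    Content-Type: image/jpeg   (server selects a 640px-wide, 2x-density variant)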

Discussion Takeaways:

-          Pro: As only the appropriate resources need to be downloaded, time spent in the networking subsystem should decrease.

-          Pro: UA detection, which is a poor model, can be significantly reduced.

-          Con: Potentially bloats the HTTP headers and requires more server-side computation.

-          Con: Client hints would make fingerprinting much easier.

-          Con: Proposal needs to address dynamic switching where UA changes its capabilities (e.g., landscape mode to portrait mode).

Browser Enhancements to Help Improve Page Load Performance using Delta Delivery
Presenter:          Robert Hundt (Google)
Subject:             Page Load Performance
Minutes:            http://www.w3.org/2012/11/08-webperf-minutes.html#item07
Presentation:     http://www.w3.org/2012/11/webperf-slides-hundt.pdf


Robert presented a proposal for improving page load performance using delta delivery. Today, Gmail takes only a few seconds to load on average; in the higher percentiles, however, it can take minutes. This typically correlates with geographies where network bandwidth is low. They found that the slower initial page loads are dominated by time spent downloading JavaScript and CSS. The proposal is to send only the difference (delta) between the version of a resource the client already has (in cache or local storage) and the latest version, with deltas encoded in the efficient VCDIFF format. Experimentation has shown this would improve download time by 11% in the median case and by up to 50% in the 99th-percentile cases (in places like India). To do delta delivery, changes will need to be made to the HTTP protocol (the client will need to indicate its currently cached version of the content, and servers will need to know to send only a delta), cryptography APIs will need to be exposed, and pre-loading of "all" cached JavaScript will need to be done. A hypothetical exchange in this spirit is sketched below.
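
For flavor, a hypothetical exchange loosely modeled on RFC 3229 delta encoding, which captures the same idea (the headers shown come from that RFC; the proposal's exact mechanism may differ):

    GET /static/app.js HTTP/1.1
    If-None-Match: "v41"       (version the client has cached)
    A-IM: vcdiff               (client can apply VCDIFF deltas)

    HTTP/1.1 226 IM Used
    IM: vcdiff
    Delta-Base: "v41"
    ETag: "v42"

    <VCDIFF delta transforming v41 into v42>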


Discussion Takeaways:

-          Attendees seemed to really like the idea; however, significant changes would need to be made in many places to enable this feature.

-          Pro: Delta delivery can significantly improve resource download times in places with poor bandwidth.

-          Con: This feature would require changes to the HTTP standard. Changing HTTP is something that should be considered in the HTTP 2.0 (or next version) discussions.

-          Con: Antivirus vendors may not be able to determine if the delta is malicious, as they won't have access to the entire script.

-          Con: How UAs use their in-memory caches will need adjustment to take delta delivery into consideration.

Improving Performance Diagnostics Capabilities in Browsers
Presenter:          Alois Reitbauer (Compuware)
Subject:             Error Logging
Minutes:            http://www.w3.org/2012/11/08-webperf-minutes.html#item08
Presentation:     https://docs.google.com/document/d/1Yw_qNMCnGsWQRsdcrifzh_kzFey-ThQ3yp-DmxuJxvg/edit

Alois presented a proposal for browsers to surface more diagnostic information to web developers through standardized APIs. Today, developers have access to some diagnostic information in the individual developer tools of specific browsers; however, this makes it hard to collect common metrics, as analysts have to use multiple tools and approaches. Such an API could be used to analyze performance across browsers, monitor a web application client-side in production, resolve user complaints, and understand the impact of third-party code on a web application, among other uses. The proposal is to provide more information via JavaScript APIs on JavaScript execution hot spots, memory, layout and rendering hot spots, and other areas.
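
No concrete interface was settled on; a purely hypothetical shape for such an API, for illustration only, might be:

    // Hypothetical API shape -- not part of any spec or proposal text
    var diag = window.performance.getDiagnostics(); // assumed entry point
    diag.scriptHotSpots;  // e.g., functions ranked by self time
    diag.layoutHotSpots;  // e.g., elements triggering frequent reflows
    diag.memory;          // e.g., JS heap used vs. allocated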

Discussion Takeaways:

-          The goal of this API is to provide more browser diagnostic information to web developers. However, there were concerns: capturing this data may have a performance impact (it would need to be turned on by a flag rather than always on); it can be a lot of data to put on the wire (profiling tools like the Windows Performance Toolkit save MBs of data per second); and it is unclear what developers would do with this information (e.g., if developers need to worry about which of their code paths are being JITed, script engines aren't doing their jobs). Understanding which data points web developers can act on, and providing a more scoped API, may be the solution here.

Improving Web Performance on Mobile Web Browsers
Presenter:          Ben Greenstein (Google)
Subject:             Mobile Performance
Minutes:            http://www.w3.org/2012/11/08-webperf-minutes.html#item09
Presentation:     http://www.w3.org/2012/11/webperf-slides-greenstein.pdf


In this session, Ben gave details on the state of web performance in mobile web browsers. Today, desktop browsing is relatively fast, whereas mobile web performance is poor; pages rarely load in under a second (the average mobile page load is 9 seconds). He found that mobile web performance is highly variable: available bandwidth has a wide distribution, small changes in location affect bandwidth, time of day affects bandwidth, and performance varies by carrier. He also found that gzip is off for 20% of the Alexa-1000 sites, 57% of resources don't have cache-control headers, and in many cases resources are much larger than they need to be (simple server-side fixes are sketched below). The ask in this presentation was to provide better tools to measure page loads, inform origins of expected performance so different content can be sent, and help developers diagnose problems.
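
Two of the server-side fixes implied by those findings are single response headers; for example (values illustrative):

    HTTP/1.1 200 OK
    Content-Encoding: gzip                     (compress text resources)
    Cache-Control: public, max-age=31536000    (allow clients to cache static resources)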

Improving Mobile Power Consumption with HTML5 Connectivity Methods
Presenter:          Giridhar Mandyam (Qualcomm)
Subject:             Mobile Performance
Minutes:            http://www.w3.org/2012/11/08-webperf-minutes.html#item10
Presentation:     http://www.w3.org/2012/11/webperf-slides-mandyam.pdf


In this session, Giridhar presented on HTML5 connectivity APIs, like Web Sockets and WebRTC, and their potentially significant impact on mobile power consumption. The session gave best practices for connecting with Web Sockets and WebRTC while managing power consumption (one such pattern is sketched below). The asks of this presentation were that this working group provide better best practices on using HTML5 APIs in a power-efficient way, ensure performant implementations of the new W3C battery API, indicate to web developers whether cellular QoS is being leveraged in a persistent connection session, and expose explicit metrics regarding the state of the connection.
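
One pattern in the spirit of those best practices is batching WebSocket sends so the cellular radio can drop to idle between bursts; a minimal sketch (endpoint and interval are illustrative):

    var ws = new WebSocket("wss://example.com/feed"); // illustrative endpoint
    var queue = [];

    function queueMessage(msg) {
      queue.push(msg); // buffer instead of waking the radio per message
    }

    // Flush in periodic bursts
    setInterval(function () {
      if (queue.length > 0 && ws.readyState === WebSocket.OPEN) {
        ws.send(JSON.stringify(queue));
        queue = [];
      }
    }, 30000);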

Discussion Takeaways:

-          Most web developers today do not seem concerned with battery life implications of their application. More education is needed.

-          Today, Internet Explorer coalesces timers to improve power consumption and implements setImmediate to improve CPU efficiency (see the sketch below); the ask is that other browser vendors consider similar techniques to improve power and CPU efficiency.
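
For reference, a sketch of the setImmediate pattern (supported in IE10; a setTimeout fallback is shown for browsers without it):

    function defer(fn) {
      if (window.setImmediate) {
        // run after current callbacks finish, without the timer clamp
        window.setImmediate(fn);
      } else {
        window.setTimeout(fn, 0); // clamped to several ms in most browsers
      }
    }
    defer(function () {
      console.log("next chunk of work");
    });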

Memory Management and JavaScript Garbage Collection
Presenter:          Paul Bakaus (Zynga)
Subject:             Memory Management
Minutes:            http://www.w3.org/2012/11/08-webperf-minutes.html#item11
Presentation:     https://docs.google.com/presentation/d/1a1NfQmRtuQYtBgfPVVHQBwWzDgmPaHASsyMFWLwWMbI/edit#slide=id.p


In this session, Paul gave a talk on the importance of runtime memory management in gaming scenarios. Fast loading helps bring players to a game, but runtime performance keeps them playing. Today, web developers have no insight into much of the browser's internal memory management. For example, a developer knows when a resource is loaded into memory but has no way to unload it; there is no information on whether textures are still alive or have been released on the GPU; and garbage collection can occur at an inopportune time (a common workaround is sketched below). The ask is for JavaScript APIs that allow triggering GC manually, report GC timing, disable GC, and expose more browser memory information.
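
Absent such APIs, a common workaround in games today is object pooling, which reuses objects rather than allocating per frame so less garbage is created for the collector to pause on; a minimal sketch:

    var pool = [];

    function acquire() {
      // reuse a pooled object if available, else allocate a new one
      return pool.length > 0 ? pool.pop() : { x: 0, y: 0, active: false };
    }

    function release(obj) {
      obj.active = false;
      pool.push(obj); // returned to the pool instead of becoming garbage
    }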

Discussion Takeaways:

-          Most web developers will find manual garbage collection scary and may not choose an optimal path.

-          Potentially, the application can give hints to the browser, and the browser can use those hints to make better GC decisions.

-          Most memory information is not really useful to developers, as it may include total system memory (and developers don't know what other applications are running at the time), and different machines have different characteristics. We may want to better understand the problem we are trying to solve and then work towards solutions.

Preserving Frame Rate on Television Web Browsing
Presenter:          Yosuke Funahashi (Tomo-Digi)
Subject:             Responsiveness
Minutes:            http://www.w3.org/2012/11/08-webperf-minutes.html#item12
Presentation:     http://www.w3.org/2012/11/webperf-slides-funahashi.pdf


In this session, Yosuke gave a presentation on the importance of preserving frame rate during television web browsing. Traditionally, television viewing has not resulted in dropped frames for viewers. However, with web browser runtimes now running on televisions, users are experiencing scenarios where frames are dropped. The ask is that the Web Perf working group, the Web and TV interest group, and the Web and Broadcasting business group work together to consider ways to ensure frames are not dropped during television web browsing.

Use Case of Smart Network Utilization for Better User Experience
Presenter:          Chihiro Ono (KDDI)
Subject:             Responsiveness
Minutes:            http://www.w3.org/2012/11/08-webperf-minutes.html#item13
Presentation:     http://www.w3.org/2012/11/webperf-slides-ono.pdf


In this session, Chihiro discussed ways in which browsers and servers can use network information (e.g., Wi-Fi, 3G, or LAN) to provide content best suited to the user's environment. For example, on a LAN, high-quality video/audio/images can be sent, whereas on a 3G network, lower-quality video/audio/images can be sent to improve performance and user experience. The suggestions in this session were to provide APIs that give more detailed information on network usage and to allow control of network interfaces for fine-grained network selection.

Open Discussion
Presenter:          Jatinder Mann (Microsoft)
Minutes:            http://www.w3.org/2012/11/08-webperf-minutes.html#item14


In the last session of the day, we opened up the floor for discussion on any of the topics presented today or topics that had not been brought up.

Discussions:

-          Error Logging. Aaron Heady from Bing proposed an error logging API analogous to the performance timing APIs. Today, site operators do not have real end-user availability data for their sites. Synthetic (Gomez) tests don't replicate all possible end user agents, and they can never capture the cases where user requests to a site fail completely, e.g., TCP connect errors. As a result, operators cannot accurately monitor or debug their sites in a truly global and real-time manner. Aaron proposed standard browser-based availability telemetry collection and retrieval that (A) collects error and availability data in the browser, (B) stores that data locally across requests/restarts so it can be accessed later, and (C) provides JavaScript access to the stored data so subsequent same-origin requests can poll the data and send it back to the origin at their discretion (an approximation using existing APIs is sketched below). This concept was well received by the attendees.
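
Parts (A)-(C) can be roughly approximated with existing APIs, which gives a feel for the proposed flow, though script-level collection cannot capture the complete-failure cases (e.g., TCP connect errors) that motivated the proposal; the storage key and endpoint are illustrative:

    // (A) Collect error data in the browser
    window.onerror = function (message, url, line) {
      var log = JSON.parse(localStorage.getItem("errorLog") || "[]");
      log.push({ message: message, url: url, line: line, time: Date.now() });
      // (B) Store locally across requests/restarts
      localStorage.setItem("errorLog", JSON.stringify(log));
    };

    // (C) On a later same-origin page load, send the stored data back
    var pending = localStorage.getItem("errorLog");
    if (pending) {
      var xhr = new XMLHttpRequest();
      xhr.open("POST", "/error-beacon", true); // illustrative endpoint
      xhr.send(pending);
      localStorage.removeItem("errorLog");
    }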


-          Beacon API. At last year's TPAC meeting, the working group discussed the scenario where scripts block the current page from unloading, by running in a loop, in order to send analytics data to a web server. This behavior creates the perception of poor performance, as the navigation appears delayed. We discussed a beacon API that would send data to a server without waiting for a response; a fire-and-forget model in which the browser promises that the data will be sent (see the sketch below). This concept was well received by the attendees.
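
The pattern being replaced, and a hypothetical beacon call, might look like the following (the endpoint is illustrative, and the beacon name is an assumption; no API was specified at the workshop):

    // Today: a synchronous XHR in unload blocks the navigation
    window.addEventListener("unload", function () {
      var data = JSON.stringify({ event: "pageUnload", time: Date.now() });
      var xhr = new XMLHttpRequest();
      xhr.open("POST", "/analytics", false); // synchronous, so it survives unload
      xhr.send(data);
    });

    // Proposed fire-and-forget model (hypothetical name):
    // navigator.sendBeacon("/analytics", data);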


-          Memory Leaks. There was discussion of providing more information on heap memory and other memory data points. However, looking at the reason for wanting this information, the ask is really for web developers to understand when they are leaking objects. A memory leak API or developer tooling would help developers minimize object leaks.


-          Other. There were additional discussions on "above the fold" measurements, profiling tools, and other topics. Please see the minutes for more information.


Thanks,
Jatinder Mann, for the W3C Web Performance WG
