- From: Zhiheng Wang <zhihengw@google.com>
- Date: Wed, 3 Feb 2010 13:56:30 -0800
- To: Lenny Rachitsky <lenny.rachitsky@webmetrics.com>
- Cc: Olli@pettay.fi, public-webapps@w3.org
- Message-ID: <802863261002031356r143f4c1bj80e6fd7c2ded802c@mail.gmail.com>
Somehow Lenny's comments got lost from the list.

On Tue, Feb 2, 2010 at 10:57 AM, Lenny Rachitsky <lenny.rachitsky@webmetrics.com> wrote:

> I’d like to jump in here and address this point:
>
> > “While I agree that timing information is important, I don't think it's going to be so commonly used that we need to add convenience features for it. Adding a few event listeners at the top of the document does not seem like a big burden.”
>
> I work for a company that sells a web performance monitoring service to Fortune 1000 companies. To give a quick bit of background on the monitoring space, there are two basic ways to provide website owners with reliable performance metrics for their web sites/applications. The first is active/synthetic monitoring, where you test the site with an automated browser from various locations around the world, simulating a real user. The second approach is called passive or real user monitoring, which captures actual visits to your site and records the performance of those users. This second approach is accomplished either with a network tap appliance sitting in the customer’s datacenter that captures all of the traffic that comes to the site, or with the “event listener” javascript trick, which times the client-side page performance and sends it back to a central server.
>
> Each of these approaches has pros and cons. The synthetic approach doesn’t tell you what actual users are seeing, but it is consistent and easy to set up and manage. The appliance approach is expensive and misses components that don’t get served out of the one datacenter, but it sees real users’ performance. The client-side javascript timing approach gives you very limited visibility, but it is easy to set up and universally available. The limited nature of this latter javascript approach is the crux of why this “Web Timing” draft is so valuable. Website owners today have no way to accurately track the true performance of actual visitors to their website. With the proposed interface additions, companies would finally be able not only to see how long the page truly takes to load (including the pre-javascript execution time), but also to know how much DNS and connect time affect actual visitors’ performance, how much of an impact each image/object makes (an increasing source of performance issues), and ideally how much JS parsing and SSL handshakes add to the load time. This would give website owners tremendously valuable data that is currently impossible to reliably track.
>
> --
> Lenny Rachitsky
> Neustar, Inc. / Software Architect/R&D
> 9444 Waples St., San Diego CA 92121
> Office: +1.877.524.8299x434 / lenny.rachitsky@webmetrics.com / www.neustar.biz
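To make the “event listener” trick concrete, here is a minimal sketch of the approach Lenny describes; the `/beacon` collection endpoint and the variable names are hypothetical, and the comments spell out the visibility limit under discussion:

    <script type="text/javascript">
    // As early as possible in <head>: record a start timestamp.
    var t0 = new Date().getTime();

    window.addEventListener("load", function () {
      // Time from when this script first ran to the load event.
      var loadMs = new Date().getTime() - t0;

      // Report the measurement to a central server via an image beacon.
      // "/beacon" is a hypothetical collection endpoint.
      var img = new Image();
      img.src = "/beacon?load_ms=" + loadMs;
    }, false);

    // Limitation: everything before this script executed -- redirects,
    // DNS lookup, TCP connect, server think time -- is invisible to it.
    </script>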
> On 2/2/10 10:36 AM, "Zhiheng Wang" <zhihengw@google.com> wrote:
>
> > Hi, Olli,
> >
> > On Fri, Jan 29, 2010 at 6:15 AM, Olli Pettay <Olli.Pettay@helsinki.fi> wrote:
> >
> > > On 1/27/10 9:39 AM, Zhiheng Wang wrote:
> > >
> > > > Folks,
> > > >
> > > > Thanks to the extensive feedback from various developers, the WebTiming spec has undergone some major revisions. Timing info has now been extended to page elements, and a couple more interesting timing data points have been added. The draft is up on http://dev.w3.org/2006/webapi/WebTiming/
> > > >
> > > > Feedback and comments are highly appreciated.
> > > >
> > > > cheers,
> > > > Zhiheng
> > >
> > > Like Jonas mentioned, this kind of information could be exposed using progress events.
> > >
> > > What is missing in the draft, and actually in the emails I've seen about this, is the actual use case for the web. Debugging web apps can happen outside the web app itself, with tools like Firebug, which investigate what the browser does at different times. Why would a web app itself need all this information? To optimize something, like using a different server if some server is slow? But for that, (extended) progress events would be good. And if the browser exposes all the information that the draft suggests, it would make sense to dispatch some event when new information is available.
> >
> > Good point, and I do need to spend more time on the intro and use cases throughout the spec. In short, the target of this spec is web site owners who want to benchmark their user experience in the field. Debugging tools are indeed very powerful during development, but things can look quite different once the page is out in the wild: e.g., there is no telling from the dev environment what DNS and TCP connection times real users will see; UGC only adds more complications to the overall latency of the page; and there are questions like “what is the right TTL for my DNS record if I want to maintain a certain cache hit rate?”, etc.
> >
> > > There are also undefined things like the paint event, which is referred to by lastPaintEvent and paintEventCount. And again, what is the use case for paintEventCount, etc.?
> >
> > Something like Mozilla's MozAfterPaint? I do need to work on more use cases.
> >
> > > The name of the attribute is very strange: "readonly attribute DOMTiming document;"
> >
> > agreed... how about something like "root_times"?
> >
> > > What is the reason for the timing array in the window object? Why do we need to know anything about previous pages? Or what is the timing attribute about?
> >
> > Something went missing in this revision, my bad. The intention is to keep previous pages' timing info only if those pages are all in a redirection chain. From the user's perspective, the waiting begins with the fetching of the first page in the redirection chain.
> >
> > thanks,
> > Zhiheng
> >
> > > -Olli
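To ground the last exchange, here is a hypothetical sketch of how a site owner might consume the proposed timing data in the field. The names used (`window.timing`, `navigationStart`, `domainLookupStart`/`End`, `connectStart`/`End`, `loadEventStart`) are illustrative assumptions drawn from this discussion, not the draft's exact IDL:

    window.addEventListener("load", function () {
      // "window.timing" and its fields are assumptions for illustration,
      // based on this thread rather than the draft's exact IDL.
      var t = window.timing;
      if (!t) return; // unsupported browser: fall back to the script trick

      var dnsMs     = t.domainLookupEnd - t.domainLookupStart;
      var connectMs = t.connectEnd - t.connectStart;

      // navigationStart is assumed to mark the start of the first fetch
      // in a redirection chain, so totalMs is the wait the user felt.
      var totalMs   = t.loadEventStart - t.navigationStart;

      var img = new Image();
      img.src = "/beacon?dns=" + dnsMs + "&conn=" + connectMs +
                "&total=" + totalMs;
    }, false);

Unlike the script-based trick, such numbers would cover the whole fetch, including any redirection chain, which is exactly the data Lenny notes is currently impossible to track reliably.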
Received on Wednesday, 3 February 2010 21:57:03 UTC