Re: [WebTiming] HTMLElement timing

Understood. I used to run the engineering department here at Webmetrics, so I
understand the cost/benefit decisions that have to be made with any new
functionality. Coming from the web performance industry, though, anything
that could help website owners understand and track their performance better
is exciting to me, especially given the potential of this proposed
functionality. All of the existing techniques only scratch at the ideal this
interface would allow: the ability to finally track the full and accurate
performance of the end user. It would also help the various in-browser
performance tools report consistent results, which is something we've heard
customers complain about (especially if this is implemented across
browsers).
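
For anyone on the list unfamiliar with the script-based technique we keep
referring to, it boils down to something like this (a simplified sketch;
vendors' actual scripts vary, and stats.example.com is a made-up host):

```javascript
// Simplified sketch of the "event listener" timing trick (details vary
// by vendor). A timestamp is taken as early as possible in the <head>,
// so it inevitably misses DNS, TCP connect, and the time spent fetching
// the HTML itself -- the blind spot the Web Timing draft would remove.
var start = new Date().getTime();

function elapsedSince(startMs) {
  return new Date().getTime() - startMs;
}

// In a browser, measure again once the load event fires and beacon the
// result back to a collection server via an image request.
if (typeof window !== "undefined") {
  window.addEventListener("load", function () {
    new Image().src =
      "https://stats.example.com/beacon?t=" + elapsedSince(start);
  }, false);
}
```

Everything before that first line of script (DNS lookup, connection setup,
server response, HTML download) is invisible to this approach, which is
exactly the data the proposed interface would expose.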

Clearly slowing down the user experience is bad. I have nearly zero
knowledge of browser internals, but one thought: allow the website owner to
activate these metrics with a flag, leaving it to them to decide whether
it's worth the added processing time to capture this data.
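
To be concrete about what I mean (and to be clear, this opt-in mechanism is
pure invention on my part, not anything in the draft), the page could declare
that it wants timing collected, and the browser would only do the extra
bookkeeping when asked. Modeled on plain objects so the logic is easy to
follow:

```javascript
// Purely hypothetical opt-in check -- the "collect-timing" name and the
// meta-tag mechanism are illustrative only. metaTags is an array of
// { name, content } objects standing in for the page's <meta> tags.
function timingRequested(metaTags) {
  for (var i = 0; i < metaTags.length; i++) {
    if (metaTags[i].name === "collect-timing" &&
        metaTags[i].content === "enabled") {
      return true;
    }
  }
  return false;
}
```

In a real page this might look like a meta tag such as
`<meta name="collect-timing" content="enabled">`, or equally well an HTTP
header; the point is only that sites that don't opt in pay no cost.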

P.S. I apologize for the multiple submissions...I kept banging my head
against the wall trying to post to the list and for some reason they all
queued up and spammed the list. Technology fail.

On 2/4/10 9:17 AM, "Jonas Sicking" <> wrote:

> On Mon, Feb 1, 2010 at 5:00 PM, Lenny Rachitsky
> <> wrote:
>> > I'd like to jump in here and address this point:
>> >
>> > "While I agree that timing information is important, I don't think it's
>> > going to be so commonly used that we need to add convenience features
>> > for it. Adding a few event listeners at the top of the document does
>> > not seem like a big burden."
>> >
>> > I work for a company that sells a web performance monitoring service to
>> > Fortune 1000 companies. To give a quick bit of background on the monitoring
>> > space, there are two basic ways to provide website owners with reliable
>> > performance metrics for their web sites/applications. The first is
>> > active/synthetic monitoring, where you test the site with an automated
>> > browser from various locations around the world, simulating a real user. The
>> > second approach is called passive or real user monitoring, which captures
>> > actual visits to your site and records the performance of those users. This
>> > second approach is accomplished with either a network tap appliance sitting
>> > in the customer's datacenter that captures all of the traffic that comes to
>> > the site, or with the "event listener" javascript trick, which times the
>> > client-side page performance and sends it back to a central server.
>> >
>> > Each of these approaches has pros and cons. The synthetic approach doesn't
>> > tell you what actual users are seeing, but it is consistent and easy to
>> > set up and manage. The appliance approach is expensive and misses
>> > components that don't get served out of the one datacenter, but it sees
>> > real users' performance. The client-side javascript timing approach gives
>> > you very limited visibility, but is easy to set up and universally
>> > available. The limited nature of this latter javascript approach is the
>> > crux of why this "Web Timing" draft is so valuable. Website owners today
>> > have no way to accurately track the true performance of actual visitors to
>> > their website. With the proposed interface additions, companies would
>> > finally be able not only to see how long the page truly takes to load
>> > (including the pre-javascript execution time), but also to know how much
>> > DNS and connect time affect actual visitors' performance, how much of an
>> > impact each image/object makes (an increasing source of performance
>> > issues), and ideally how much JS parsing and SSL handshakes add to the
>> > load time. This would give website owners tremendously valuable data that
>> > is currently impossible to reliably track.
> Hi Lenny,
> I agree that exposing performance metrics to the web page is a good
> idea. I just disagree with the list of elements for which metrics are
> being collected. Every element that we put on the list incurs a
> significant cost to browser implementors, time that could be spent on
> other, potentially more important, features. Just because something
> could be useful doesn't mean that it's worth its cost.
> Additionally, the more metrics that are collected, the more browser
> performance is spent measuring them. So there is a cost to
> everyone else, both authors and users, too.
> / Jonas

Lenny Rachitsky 
Neustar, Inc. / Software Architect/R&D
9444 Waples St., San Diego CA 92121
Office: +1.877.524.8299x434  / /    

Received on Thursday, 4 February 2010 20:56:02 UTC