Re: [minutes] W3C Web Performance F2F Meeting 11/14/2013 - 11/15/2013

From: Jonas Sicking <jonas@sicking.cc>
Date: Fri, 22 Nov 2013 18:38:15 -0800
Message-ID: <CA+c2ei8c9Jip6yrqgdx2FtFf_0MHj8ikHm5T8kXYiav=7gvvNQ@mail.gmail.com>
To: Jatinder Mann <jmann@microsoft.com>
Cc: "public-web-perf@w3.org" <public-web-perf@w3.org>

On Fri, Nov 22, 2013 at 5:19 PM, Jatinder Mann <jmann@microsoft.com> wrote:
> The WG discussed whether we should apply a limit to the number of beacons
> that can be sent or the size of the beacons sent. While we had initially
> considered a limit of 10 KB, after some discussion, we saw real-world
> examples of much larger data being sent. We eventually decided to not limit
> the size of the beacons, as limits set now may not feel rational years from
> now. We opened ACTION-114 - Update beacon spec to have no limits, no retry
> logic, no batching, post is the only method to track this.

I'm worried that with this policy it will become very unattractive for
pages to send "larger" requests. Essentially it means that there is no
way of knowing whether your request was lost in the void or actually
made it to the server. This means that such pages will have to do a
lot of UA detection and archeology to figure out whether a given
request will work. Such UA detection always punishes new browsers and
browsers with low market share.

At the very least we need to have some way for a page to detect that a
given request was considered too large for the UA.

Initially we were thinking that the API could return true/false to
indicate whether the request was of acceptable size. Unfortunately
this would require the UA to make that decision synchronously, which
is problematic in multi-process browsers that want to enforce global
limits on how much data can be queued.
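To make the trade-off concrete, here is a minimal sketch of that synchronous true/false design: a UA-side queue with a global byte budget that accepts or rejects each beacon immediately. The names (`BeaconQueue`, `MAX_QUEUED_BYTES`) and the budget value are invented for illustration and are not part of any spec.

```javascript
// Assumed global limit on queued beacon data (illustrative value).
const MAX_QUEUED_BYTES = 64 * 1024;

class BeaconQueue {
  constructor(limit) {
    this.limit = limit;
    this.queuedBytes = 0;
  }

  // Returns true if the beacon was accepted, false if queuing it would
  // exceed the global budget -- the synchronous decision discussed above.
  sendBeacon(url, data) {
    const size = Buffer.byteLength(String(data));
    if (this.queuedBytes + size > this.limit) {
      return false; // the page learns immediately that the request was dropped
    }
    this.queuedBytes += size;
    // ...hand (url, data) off to the network layer asynchronously...
    return true;
  }
}

const queue = new BeaconQueue(MAX_QUEUED_BYTES);
console.log(queue.sendBeacon("/analytics", "x".repeat(1024)));       // true: accepted
console.log(queue.sendBeacon("/analytics", "x".repeat(128 * 1024))); // false: too large
```

The cost is exactly the one described above: in a multi-process browser, `queuedBytes` would be cross-process state, so answering synchronously either blocks the page or forces per-process limits.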

But returning a promise would not work either, since a primary use
case is queuing beacons during onunload, which means you won't have an
opportunity to see the promise resolve.

So either we need to use the synchronous solution and simply accept
the performance hit.

Or we expose some API that can inspect the success/error results of
previous requests. This would also let the page see, the next time the
user returns to the site, whether earlier requests failed due to
network problems.
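A rough sketch of what such a results-inspection API might look like, with the UA recording each beacon's outcome and the page querying those outcomes on a later visit. The storage and function names (`recordBeaconResult`, `getBeaconResults`) are invented for illustration; a real design would need persistence and an identifier scheme, which is part of the complexity noted below.

```javascript
// UA-side log of beacon outcomes: beacon id -> { status, error }.
const beaconLog = new Map();

// The UA records an outcome as each queued request completes (or is rejected).
function recordBeaconResult(id, status, error = null) {
  beaconLog.set(id, { status, error });
}

// On a later visit, the page asks which earlier beacons failed and why.
function getBeaconResults() {
  return Array.from(beaconLog, ([id, result]) => ({ id, ...result }));
}

// Simulated outcomes from a previous page view:
recordBeaconResult("pageview-1", "sent");
recordBeaconResult("pageview-2", "failed", "too-large");
recordBeaconResult("pageview-3", "failed", "network");

const failed = getBeaconResults().filter(r => r.status === "failed");
console.log(failed.map(r => r.id)); // the beacons that were lost
```

This lets the page distinguish "rejected as too large" from "lost to the network", at the price of a persistent per-origin log and a second API surface.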

But of course that also makes the API significantly more complex.

/ Jonas
Received on Saturday, 23 November 2013 02:39:14 UTC
