
Re: [beacon] no limits, no retry, no batching, post only

From: Jonas Sicking <jonas@sicking.cc>
Date: Mon, 9 Dec 2013 17:55:29 -0800
Message-ID: <CA+c2ei9EhrD5k=AibN-ZGgkbUmK6Q+fYHMyOE8dGt7QZQ0pLuA@mail.gmail.com>
To: Arvind Jain <arvind@google.com>
Cc: Ilya Grigorik <igrigorik@google.com>, Jatinder Mann <jmann@microsoft.com>, "public-web-perf@w3.org" <public-web-perf@w3.org>
On Sun, Dec 8, 2013 at 4:04 PM, Arvind Jain <arvind@google.com> wrote:
> I'm continuing to make changes to the current document. I think we should
> finalize the behavior and then we can debate whether to retrofit to XHR.


> We still need to nail down the size/queue limits. There is no mention of
> queue limits in the document yet.

The more I think about it, the more I think that having strict limits
in the spec won't help us that much. Since we need
max-queued-per-origin limits regardless, we still need a way to signal
to the page whether it bumped into those limits.

That said, I think it would be good if those of us doing the initial
implementations of these specs could have an implementor agreement on
what limits to use for now. And maybe even put such a limit as a
non-normative note in the spec, mostly as documentation for web
developers.

So we still need a way to signal to the author whether a request was
queued. The two ways I can think of are:

1. Have a way to query the status of old requests. I.e. you can see
whether old requests succeeded or failed, and the failure reason (size
too large, network problems, etc.).
2. Synchronously return true/false at the time when the request is queued.

Option 1 is more fully featured, but more work to implement.

Option 2 has the downside that returning a synchronous true/false is
more work if you want to avoid synchronous communication between the
child rendering process and the parent management process.
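As a rough sketch of option 2: the queuing call itself returns true/false, so the caller knows immediately whether the data was accepted. The function name, origin tracking, and the 64 KB limit below are all illustrative assumptions, not anything from the draft:

```typescript
// Hypothetical sketch of option 2: the queueing call synchronously
// returns whether the request was accepted. All names and the limit
// are assumptions for illustration.
const MAX_QUEUED_PER_ORIGIN = 64 * 1024; // assumed example limit

const queuedBytes = new Map<string, number>();

function queueBeacon(origin: string, payloadSize: number): boolean {
  const current = queuedBytes.get(origin) ?? 0;
  if (current + payloadSize > MAX_QUEUED_PER_ORIGIN) {
    return false; // signal synchronously that nothing was queued
  }
  queuedBytes.set(origin, current + payloadSize);
  return true;
}

// The caller can fall back to some other mechanism when queuing fails:
const ok = queueBeacon("https://example.com", 1024);
console.log(ok); // true, well under the example limit
```

The boolean-return shape is also what `navigator.sendBeacon()` eventually shipped with: it returns true if the user agent queued the data for transfer.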

One implementation strategy would be to asynchronously signal to all
child processes how much data is currently queued for a given origin.
Then the child can just compare against that whenever a new request is
queued. This does introduce a small race if two or more child
processes attempt to queue requests at the same time. However, the
only harm is that slightly more data than allowed would be queued,
which isn't really a big deal.
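That strategy could be sketched roughly like this (class name, methods, and the example limit are all assumptions): each child keeps a possibly-stale snapshot of the per-origin total, broadcast asynchronously by the parent, and checks new requests against it synchronously.

```typescript
// Sketch of the multi-process strategy: the parent broadcasts
// per-origin queued totals asynchronously; each child checks new
// requests against its possibly-stale local snapshot. All names here
// are illustrative assumptions.
const MAX_QUEUED_PER_ORIGIN = 64 * 1024; // assumed example limit

class ChildBeaconQueue {
  // Last totals received from the parent; may lag behind reality.
  private snapshot = new Map<string, number>();
  // Bytes this child queued since the last parent update.
  private pendingLocal = new Map<string, number>();

  // Called when the parent's asynchronous broadcast arrives.
  onParentUpdate(origin: string, totalQueued: number): void {
    this.snapshot.set(origin, totalQueued);
    this.pendingLocal.set(origin, 0); // totals now include our adds
  }

  // Synchronous check against the stale snapshot plus our own local
  // adds. Two children racing can each pass this check, so slightly
  // more than the limit may be queued globally, the acceptable race
  // noted above.
  tryQueue(origin: string, size: number): boolean {
    const known =
      (this.snapshot.get(origin) ?? 0) +
      (this.pendingLocal.get(origin) ?? 0);
    if (known + size > MAX_QUEUED_PER_ORIGIN) return false;
    this.pendingLocal.set(
      origin,
      (this.pendingLocal.get(origin) ?? 0) + size
    );
    return true;
  }
}
```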

There's no way to queue unlimited data anyway, at least not without
opening an infinite number of child processes, which I suspect would
be a bigger problem if that were possible :)

So option 2 gets my vote.

/ Jonas
Received on Tuesday, 10 December 2013 01:56:27 UTC
