Re: Proposal to measure end-user latency

To my mind, it would be valuable to have the "client view" directly on the
server, with the ability to correlate client and server response times at a
single point.
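
Purely as an illustration (the "X-Client-RT" header name is invented here,
it is not a proposal), the server could then log both views of latency at
one point, something like this Node/TypeScript sketch:

import { createServer } from "node:http";

createServer((req, res) => {
  const start = process.hrtime.bigint();
  // The client's own measurement of an earlier exchange, reported back to
  // us on this request (hypothetical header, for illustration only).
  const clientRt = req.headers["x-client-rt"] ?? "n/a";

  res.end("ok");

  const serverMs = Number(process.hrtime.bigint() - start) / 1e6;
  // Both views end up side by side in a single server-side log line.
  console.log(`${req.url} server=${serverMs.toFixed(2)}ms client=${clientRt}ms`);
}).listen(8080);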

An application-layer solution (like Google Analytics or any other) doesn't
address real-time IT monitoring. You can always develop a custom
application. Still, if the protocol offered this kind of service directly,
it could be useful (again, to my mind), simpler, and would probably need
less bandwidth than a "home made" solution.
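
For illustration, here is the kind of "home made" application-layer
reporting I mean, using the W3C Navigation Timing API in the browser and a
hypothetical /client-timing collector on the server (the endpoint name is
just an assumption for the sketch):

window.addEventListener("load", () => {
  const t = performance.timing; // W3C Navigation Timing
  const report = {
    url: location.href,
    // Network + server time as the client sees it:
    responseTime: t.responseEnd - t.requestStart,
    // Full page load as experienced by the end user:
    pageLoad: t.loadEventStart - t.navigationStart,
  };
  // One extra request per page view: the bandwidth a protocol-level
  // mechanism could save by piggybacking on existing traffic.
  navigator.sendBeacon("/client-timing", JSON.stringify(report));
});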

A lot of IT departments have performance problems and end-user complaints
while, at the same time, the server statistics look pretty good.

At the very least, having "client view" statistics lets everyone talk about
the same measurement. That alone is an improvement.

Another example: Apache only starts measuring response time once it has
accepted the request and has a "ready" worker. How much time did the request
spend in the listen queue? This kind of measurement could help you monitor
the flow better.
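
As a rough sketch (assuming Apache is logging %D, the service time in
microseconds, and a Node 18+ or browser client), timing the same request
from the client side shows the part the server never sees, including the
wait in the listen queue:

async function timeRequest(url: string): Promise<number> {
  const start = performance.now();
  const res = await fetch(url);
  await res.arrayBuffer(); // read the full body
  return performance.now() - start;
}

// Compare this number with %D in the access log for the same request:
// the difference is network time plus the time spent queued before a
// worker picked the request up.
timeRequest("http://example.test/").then((ms) =>
  console.log(`client-side total: ${ms.toFixed(1)} ms`)
);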

On 03/09/13 18:41, "Martin Thomson" <martin.thomson@gmail.com> wrote:

>On 3 September 2013 08:32, Sébastien  BARNOUD
><sebastien.barnoud@prologism.fr> wrote:
>> Today, this kind of measurement is achieved at the application layer and
>> sent to dedicated sites.
>
>Is this somehow inadequate in some way?
