- From: Mark Nottingham <mnot@mnot.net>
- Date: Fri, 18 Jan 2013 17:52:59 +1100
- To: Martin Thomson <martin.thomson@gmail.com>
- Cc: James M Snell <jasnell@gmail.com>, "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>
OK, I've started to record CPU time in my refactor branch:
* TOTAL: 1012 req messages
           size   time | ratio   min   max   std
http1   830,970   0.05 |  1.00  1.00  1.00  0.00
simple  320,883   0.05 |  0.39  0.07  0.92  0.24
spdy3    85,492   0.06 |  0.10  0.03  0.66  0.08

* TOTAL: 1012 res messages
           size   time | ratio   min   max   std
http1   424,075   0.04 |  1.00  1.00  1.00  0.00
simple  176,216   0.12 |  0.42  0.11  0.95  0.12
spdy3    80,706   0.07 |  0.19  0.04  0.68  0.09
https://github.com/http2/compression-test/tree/stream-sep
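(For a rough idea of the shape of that measurement -- this is only a sketch with stand-in codec functions, not the actual compression-test code, and it reads the ratio columns as per-message compressed size relative to http1:)

    # Sketch only: hypothetical stand-in codecs, not the real
    # compression-test processors.
    import statistics
    import time
    import zlib

    def http1_codec(msg):
        return msg                      # baseline: header block left as-is

    def spdy3_codec(msg):
        return zlib.compress(msg)       # stand-in for the SPDY3 gzip codec

    CODECS = {'http1': http1_codec, 'spdy3': spdy3_codec}

    def measure(messages):
        """Print total size, CPU time, and per-message size ratios vs http1."""
        baseline = [len(http1_codec(m)) for m in messages]
        for name, codec in CODECS.items():
            start = time.process_time()             # CPU time, not wall-clock
            sizes = [len(codec(m)) for m in messages]
            cpu = time.process_time() - start
            ratios = [s / b for s, b in zip(sizes, baseline)]
            print('%-7s %9d %6.2f | %5.2f %5.2f %5.2f %5.2f' % (
                name, sum(sizes), cpu,
                sum(ratios) / len(ratios), min(ratios), max(ratios),
                statistics.pstdev(ratios)))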
On 17/01/2013, at 12:21 PM, Martin Thomson <martin.thomson@gmail.com> wrote:
> On 16 January 2013 17:11, Mark Nottingham <mnot@mnot.net> wrote:
>> Getting there, although you may need a small truck to haul the grain of salt that will accompany it...
>
> Even if it just means running the sample set n times using 'time', it
> would be nice to get ballpark figures. Even if the errors are
> enormous. For instance, a python implementation of delta probably
> isn't a fair comparison against an optimized gzip implementation, but it
> might still let us know whether deltav3 is better than deltav2.
>
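(A minimal sketch of that n-runs idea in Python, using timeit rather than the shell 'time' utility; run_codec below is a hypothetical callable that does one full pass over the sample set, not part of the actual harness:)

    # Repeat the whole pass a few times and keep the best run, which
    # discounts scheduler noise; run_codec is a hypothetical stand-in.
    import timeit

    def ballpark_seconds(run_codec, n=5):
        return min(timeit.repeat(run_codec, repeat=n, number=1))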
--
Mark Nottingham http://www.mnot.net/
Received on Friday, 18 January 2013 06:53:27 UTC