On Sun, Jan 6, 2013 at 1:55 AM, Roberto Peon <grmocg@gmail.com> wrote:
> Do you have some suggestions, Martin?
> The obvious thing in my mind is to get submissions from site owners, but
> that takes interest on their part first. :/
>
HTTP Archive is now scanning ~300K top domains (at least according to
Alexa). While it's still biased toward "top sites", I think that's a
pretty good sample to work with. I believe we should be able to get the
HAR files from it.
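
If it helps, here's a minimal Python sketch of what pulling header data
out of those HAR files could look like. The filename is a placeholder,
and the size estimate is rough; the structure (log.entries[] with
request/response.headers) is just the standard HAR JSON layout:

    import json

    # Load a HAR capture (placeholder filename; any HAR export works).
    with open("example.har", encoding="utf-8") as f:
        har = json.load(f)

    # HAR keeps each request/response pair under log.entries.
    for entry in har["log"]["entries"]:
        req = entry["request"]["headers"]   # list of {"name", "value"}
        res = entry["response"]["headers"]
        # Rough uncompressed size: "name: value\r\n" per header line.
        req_bytes = sum(len(h["name"]) + len(h["value"]) + 4 for h in req)
        res_bytes = sum(len(h["name"]) + len(h["value"]) + 4 for h in res)
        print(entry["request"]["url"], req_bytes, res_bytes)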
ig
>
> On Sun, Jan 6, 2013 at 12:53 AM, "Martin J. Dürst" <duerst@it.aoyama.ac.jp> wrote:
>
>> On 2013/01/06 14:57, Mark Nottingham wrote:
>>
>>> Quick follow-up:
>>>
>>> I posted more about this here:
>>> http://www.mnot.net/blog/2013/01/04/http2_header_compression
>>>
>>> In particular, we have graphs for all of the HAR samples I took earlier:
>>> http://http2.github.com/http_samples/mnot/
>>>
>>
>> These look very interesting. Just two points for the moment:
>>
>> - Drawing connected curves seems misleading, because we are not
>> measuring/showing a continuous quantity that varies over time, but
>> discrete requests and responses (see the sketch after these two points).
>>
>> - The data sample includes big guys only. Some criticism of SPDY has
>> been that it is geared towards the big guys. Is there a way to get more
>> of an impression of what headers look like in the long tail of websites?
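>>
>> On the first point, a minimal matplotlib sketch of the alternative
>> (the per-response header sizes here are made up, not real data) would
>> be to draw discrete markers rather than a connected curve:
>>
>>     import matplotlib.pyplot as plt
>>
>>     # Hypothetical per-response header sizes, in bytes (not real data).
>>     sizes = [612, 430, 455, 298, 512, 377, 401]
>>
>>     # Each response is a discrete sample, so draw markers only;
>>     # a connecting line would imply a continuous quantity.
>>     plt.plot(range(len(sizes)), sizes, "o")
>>     plt.xlabel("response #")
>>     plt.ylabel("header size (bytes)")
>>     plt.savefig("header_sizes.png")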
>>
>> Regards, Martin.