Re: Submitted new I-D: Cache Digests for HTTP/2

From: Stefan Eissing <stefan.eissing@greenbytes.de>
Date: Wed, 13 Jan 2016 10:14:59 +0100
Cc: HTTP Working Group <ietf-http-wg@w3.org>
Message-Id: <13DD5C05-4C1C-4FA3-B62F-9DADFD88A8EA@greenbytes.de>
To: Kazuho Oku <kazuhooku@gmail.com>

> Am 12.01.2016 um 22:33 schrieb Kazuho Oku <kazuhooku@gmail.com>:
> 
> 2016-01-13 1:31 GMT+09:00 Stefan Eissing <stefan.eissing@greenbytes.de>:
>> Kazuho,
>> 
>> while pondering my implementation: ch. 2.1 step 8 is different from the algorithm described in http://giovanni.bajo.it/post/47119962313/golomb-coded-sets-smaller-than-bloom-filters
>> 
>> Your draft calculates (as I read it, I could be wrong)
>> A.  for i in 0..len-2; do D:="hash-values[i+1] - hash-values[i] - 1"; ....; done
>> 
>> while the latter does
>> B.  for i in 0..len-1; do D:="hash-values[i] - ((i > 0) ? hash-values[i-1] : 0)"; ....; done
>> 
>> I am not sure how, on decompression, to recover the first hash value from A. In B it is simply the first delta. Could you give me a hint?
> 
> The algorithm intended here is as follows.  Please let me check the spec.
> 
> uint64_t next_min = 0;
> for (i = 0; i < hash_values.length; ++i) {
>  D = hash_values[i] - next_min;
>  ...
>  next_min = hash_values[i] + 1;
> }
> 
> The corresponding code in my implementation is
> https://github.com/kazuho/golombset/blob/e7cae04/golombset.h#L154-L158

Thanks. That is how I understood the original article (apart from the +1, which is clever). 

I'll continue with my implementation and, based on that, may propose an alternative description of the algorithm, if that is fine with you.

Cheers,

  Stefan
Received on Wednesday, 13 January 2016 09:15:29 UTC