From: Pavel Rappo <pavel.rappo@gmail.com>
Date: Thu, 28 Aug 2014 16:25:30 +0100
To: ietf-http-wg@w3.org
Hi everyone,
I have several questions regarding Integer Representation, if you don't mind.
(1) First of all, the section "6.1. Integer Representation" contains a
link to Wikipedia's VLQ article. That article states: "...The VLQ
octets are arranged most significant first in a stream..." (big endian
from the byte-stream point of view). On the other hand, from what I
understand, the encoding/decoding algorithms described in the HPACK
draft work exactly the opposite way -- they assume little endian:
decode I from the next N bits
if I < 2^N - 1, return I
else
    M = 0
    repeat
        B = next octet
        I = I + (B & 127) * 2^M    (*)
        M = M + 7
    while B & 128 == 128
    return I
In other words, the marked line reads I = I + (B & 127) * 2^M instead
of I = I * 2^M + (B & 127).
Either there's an error in the draft or I'm not getting it right. So
which one is it?
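To double-check my reading of the loop, here is a small Python
transcription of the continuation-octet part as quoted above (the
function name is mine; I pass the already-decoded N-bit prefix value
in as a parameter):

def decode_continuation(i, octets):
    # i is whatever "decode I from the next N bits" produced;
    # octets are the continuation octets that follow it.
    m = 0
    for b in octets:
        i = i + (b & 127) * 2 ** m   # 7-bit groups, least significant first
        m = m + 7
        if b & 128 != 128:           # high bit clear: last octet
            break
    return i

# For example, the continuation octets 154, 10 contribute
#   (154 & 127) * 2^0 + (10 & 127) * 2^7 = 26 + 1280,
# i.e. the first continuation octet carries the LEAST significant
# 7-bit group -- the opposite of the MSB-first ordering the VLQ
# article describes.
print(decode_continuation(0, [154, 10]))   # -> 1306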
(2) The second question: does the decoding algorithm "forget" to add
(2^N - 1) to the result at the end? That is, I would have expected the
loop to finish with:
...
    while B & 128 == 128
    return I + 2^N - 1
whereas the encoding algorithm "remembers" to do this:
if I < 2^N - 1, encode I on N bits
else
    encode (2^N - 1) on N bits    (*)
    I = I - (2^N - 1)
    while I >= 128
        encode (I % 128 + 128) on 8 bits
        I = I / 128
    encode I on 8 bits
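For reference, here is my Python transcription of that encoding side
as well (again, the function name is mine, and I'm ignoring whatever
other bits share the prefix octet):

def encode_int(i, n):
    # Follows the quoted pseudocode; returns a list of octet values,
    # with only the N prefix bits modelled in the first element.
    if i < 2 ** n - 1:
        return [i]
    octets = [2 ** n - 1]     # "encode (2^N - 1) on N bits" ...
    i = i - (2 ** n - 1)      # ... and the subtraction it "remembers"
    while i >= 128:
        octets.append(i % 128 + 128)
        i = i // 128
    octets.append(i)
    return octets

print(encode_int(1337, 5))   # -> [31, 154, 10]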
(3) And the third question is a minor one. When the decoding algorithm
mentions "I" on the marked line below, which "I" exactly is that?
decode I from the next N bits
if I < 2^N - 1, return I (*)
...
If I'm not mistaken, we don't know its value at that point; the whole
purpose of this code snippet is to compute the "I".
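To put it in code, my guess at what the first line means (this is an
assumption on my part; the draft doesn't spell it out) is something
like:

def prefix_value(first_octet, n):
    # My assumption: "decode I from the next N bits" takes the
    # low-order N bits of the octet that carries the prefix.
    return first_octet & (2 ** n - 1)

# Is the "I" being compared (and returned) this prefix value, or the
# final decoded integer, which we don't know yet at that point?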
-Pavel