- From: Roderick Sheeter <rsheeter@google.com>
- Date: Thu, 4 Sep 2014 13:39:17 -0700
- To: "Levantovsky, Vladimir" <Vladimir.Levantovsky@monotype.com>
- Cc: David Kuettel <kuettel@google.com>, John Hudson <tiro@tiro.com>, WebFonts WG <public-webfonts-wg@w3.org>
- Message-ID: <CABscrrFKW_iodHRAXjzJemtDSRV3-Nyzuis8EkE_nEGxtK93hg@mail.gmail.com>
Agreed for MUST accept them all. I was picturing a test that caused encoding of the same value repeatedly (maybe via a crafted test font?) and verified that the encoding was always the same. I'm probably thinking more of how one would write unit tests than conformance tests, though, so perhaps that isn't realistic and we'd have to use the uncapitalized should/must.

Cheers, Rod S.

On Thu, Sep 4, 2014 at 12:38 PM, Levantovsky, Vladimir <Vladimir.Levantovsky@monotype.com> wrote:

> Hi Rod,
>
> Please see my comments inline.
>
> *From:* Roderick Sheeter [mailto:rsheeter@google.com]
> *Sent:* Thursday, September 04, 2014 11:18 AM
> *To:* David Kuettel
> *Cc:* John Hudson; WebFonts WG
> *Subject:* Re: Minutes, 3 Sept 2014 webfonts call
>
> With regard to 255UInt16, the current implementation appears to produce
> consistent results and to use the minimum number of bytes. For example, we
> would produce [254, 0] for 506 (from the spec: "For example, the value 506
> can be encoded as [255, 253], [254, 0], and [253, 1, 250]").
>
> It's not entirely obvious to me that we need to dictate which is the
> correct way to encode a given value. If someone figures out a clever hack
> where one encoding or another compresses better, they should be able to
> use it. I do think we should require consistent encoding and suggest use
> of a shorter encoding. Perhaps we might change:
>
> An encoder may produce any of these, and a decoder must accept them all,
> although encoders should choose shorter encodings, and should be
> consistent in choice of encoding for the same value, as this will tend to
> compress better.
>
> To something like:
>
> An encoder may produce any of these, and a decoder must accept them all.
> An encoder SHOULD choose shorter encodings and MUST be consistent in
> encoding for the same value, as this will tend to compress better.
>
> [VL] Do you see a way in which these encoder-related statements could be
> tested?
> Capitalized SHOULD and MUST in particular each need to be covered by at
> least one conformance test, and I am not sure that these assertions are
> testable. On the other hand, the mandate for a decoder to accept any and
> all possible encodings is testable, so this should be "a decoder MUST
> accept them all", IMO.
>
> Thank you,
> Vlad
>
> On Wed, Sep 3, 2014 at 2:20 PM, Roderick Sheeter <rsheeter@google.com> wrote:
>
> For the minimal benefit in following a recommendation (vs. a requirement)
> of the OT Spec, SHOULD seems perhaps more appropriate than MUST.
>
> On Wed, Sep 3, 2014 at 1:36 PM, David Kuettel <kuettel@google.com> wrote:
>
> On Wed, Sep 3, 2014 at 10:51 AM, John Hudson <tiro@tiro.com> wrote:
>
> Apologies for missing today's call.
>
> sergeym: maybe we should require a decoder to put tables in the order
> that is recommended by the OT spec, regardless of what order it was in
> the original input file.
>
> This is what I was going to suggest too. I wonder if it is a SHOULD or
> MUST situation, though? As I understand it, the performance benefit is
> now considered minimal, although I'd love to see some comparative data.
>
> This is one change where I don't think we would be able to easily measure
> the performance benefit, although Sergey might have some thoughts on how
> to. Agreed that the benefit would likely be minimal, esp. in the common
> case. For consistency / best-practices reasons, however, the proposal
> seems good.
>
> JH
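[Editor's note: the 255UInt16 encoding under discussion can be sketched as below. This is an illustrative Python sketch of the decoding rules quoted from the WOFF2 draft (prefix bytes 253/254/255, lowest-unambiguous value 253); the function names are hypothetical, not from any reference implementation, and the encoder demonstrates the "shortest, consistent encoding" behavior Rod describes.]

```python
def read_255ushort(data, pos=0):
    """Decode one 255UInt16 from bytes; return (value, next_pos)."""
    code = data[pos]
    if code == 253:                       # word code: full 16-bit big-endian value
        return (data[pos + 1] << 8) | data[pos + 2], pos + 3
    if code == 255:                       # one-more-byte code 1: next byte + 253
        return data[pos + 1] + 253, pos + 2
    if code == 254:                       # one-more-byte code 2: next byte + 506
        return data[pos + 1] + 506, pos + 2
    return code, pos + 1                  # values 0..252 encode themselves

def write_255ushort(value):
    """Encode one value, always choosing the shortest (and thus consistent) form."""
    if value < 253:
        return bytes([value])
    if value < 506:                       # 253..505 -> [255, value - 253]
        return bytes([255, value - 253])
    if value < 759:                       # 506..758 -> [254, value - 506]
        return bytes([254, value - 506])
    return bytes([253, value >> 8, value & 0xFF])

# All three encodings of 506 from the spec example decode identically...
for enc in ([255, 253], [254, 0], [253, 1, 250]):
    assert read_255ushort(bytes(enc))[0] == 506
# ...but this encoder always emits the same two-byte form:
assert write_255ushort(506) == bytes([254, 0])
```

A decoder conformance test ("MUST accept them all") follows directly from the loop above: feed each alternative encoding of the same value and require identical decoded output. The encoder-side "MUST be consistent" assertion is harder to exercise from the outside, which is the point Vlad raises.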
Received on Thursday, 4 September 2014 20:39:44 UTC