- From: Jonathan Kew <jonathan@jfkew.plus.com>
- Date: Wed, 1 Jul 2009 23:44:35 +0100
- To: "Tab Atkins Jr." <jackalmage@gmail.com>
- Cc: www-font@w3.org
On 1 Jul 2009, at 21:56, Tab Atkins Jr. wrote:

> I'm also fine with this sort of proposal (in addition to raw TTF/OTF
> support), though I'd prefer compression be mixed into the proposal as
> well. The proposal already includes subsetting support for
> bandwidth-conservation reasons, and good compression has been shown to
> have a significant effect on several fonts. We've already gotten some
> interesting looks at the efficacy of different compression techniques,
> and it shouldn't be difficult to assemble a representative sample of
> fonts and do some direct testing among the different proposals (which
> Vladimir listed).

Testing isn't difficult, although deciding what constitutes a "representative sample of fonts" might be trickier. Using gzip to compress various fonts, I've seen size reductions varying from around 36% up to 64%, with most fonts somewhere close to the 50% mark. We also know that LZMA and MTX both tend to do better.

> I don't see any good reason to make compression optional in a new
> webfont format. Using existing uncompressed TTF/OTF has benefits (no
> effort at all, and already works in most major UAs), but if we're
> going to the effort of defining a new standard webfont format as well,
> we might as well make it truly worth it.

Yes, this is my view as well. I believe useful compression can be included without significantly greater effort for implementers (slightly greater, yes, but not enough to present a hindrance to adoption), and of course with no additional burden at all for users, and it will provide long-term benefits in real-world usage.

And of course, if compression is a standard part of the format (rather than optional), there is no longer any need for table-name obfuscation.

> It's been shown that
> standard gzip is significantly less efficient than certain other
> methods,

Agreed.
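(The size-reduction figures above are easy to reproduce: compress a font with zlib and compare sizes. A minimal sketch in Python, using synthetic bytes only for illustration, since the measured ratio depends entirely on which real fonts you test:)

```python
import zlib

def gzip_savings(data: bytes, level: int = 9) -> float:
    """Return the fractional size reduction from zlib/deflate compression,
    i.e. 0.50 means the compressed copy is half the original size."""
    compressed = zlib.compress(data, level)
    return 1 - len(compressed) / len(data)

# Illustrative stand-in data; a real test would read .ttf/.otf files
# from disk and report the distribution of savings across the sample.
sample = b"glyf" * 1000 + bytes(range(256)) * 16
print(f"size reduction: {gzip_savings(sample):.0%}")
```

(Running the same function over a directory of actual fonts is what yields the 36%–64% spread mentioned above.)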
Personally, I think the stability and ease of implementation that gzip offers outweigh this; if vendors can support the format with a few dozen (ok, maybe a few hundred) lines of code wrapped around calls to zlib, this is a considerably lower barrier to adoption than adding a dependency on lzmalib (whose license would, I think, prevent some browsers using it) or the LZMA SDK (a lower-level and more complex interface, as I understand it), or including a considerable amount of new code to support MTX.

So my personal preference would be for gzip/zlib-based .zot, but I'd be happy to implement any of these that we can agree on.

> so it would be worthwhile in my opinion to standardize on one
> of these and make it part of the proposal.
>
> However, I just saw your most recent email, and agree that including
> compression would be more difficult than not doing so, but I still
> strongly feel that the benefits are worthwhile here.

Having just written up the draft ZOT format this morning, I went ahead and wrote a compression tool this evening (of course, virtually all the real work is done by freely-available libraries). As a Perl script, the .ttf/.otf-to-.zot compressor is less than 50 lines of code. A decompressor would be very similar. I don't think that's an excessive level of difficulty, for the benefits such a format would give us.

Anyone interested is welcome to a copy, of course.

JK
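(To give a sense of the scale involved: the message does not reproduce the ZOT draft itself, so the container layout below — the magic bytes and the length field — is purely a hypothetical stand-in, not the actual format. It shows only why a zlib-wrapper compressor and decompressor each fit in a few dozen lines, here sketched in Python rather than the author's Perl:)

```python
import zlib

# Hypothetical 4-byte magic; the real ZOT draft may use something else.
MAGIC = b"ZOT1"

def compress_font(font_bytes: bytes) -> bytes:
    """Wrap raw TTF/OTF bytes in a zlib-compressed container (sketch)."""
    payload = zlib.compress(font_bytes, 9)
    # Magic, then the original length (4 bytes, big-endian), then the
    # deflate stream -- enough for a decompressor to sanity-check output.
    return MAGIC + len(font_bytes).to_bytes(4, "big") + payload

def decompress_font(blob: bytes) -> bytes:
    """Recover the original font bytes from the container above."""
    if blob[:4] != MAGIC:
        raise ValueError("not a recognized container")
    orig_len = int.from_bytes(blob[4:8], "big")
    font = zlib.decompress(blob[8:])
    if len(font) != orig_len:
        raise ValueError("length mismatch after decompression")
    return font
```

(The point stands regardless of the exact header: everything hard is delegated to zlib, so a browser's decoder is a thin wrapper rather than a new codec.)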
Received on Wednesday, 1 July 2009 22:45:21 UTC