Re: New work on fonts at W3C

On Jun 23, 2009, at 3:25 PM, "Levantovsky, Vladimir" <Vladimir.Levantovsky@MonotypeImaging.com> wrote:

>
> Applying Unicode-range that encompasses a complete character set for a
> certain language is one thing, but using a selective, non-consecutive
> set of Unicode code-points is something completely different. I wonder
> if anyone actually tried doing this. It might work for languages where
> each character has a unique one-to-one mapping to a single glyph, but
> even then things like kerning will get broken.

I don't know, but I thought it would be a fairly common use case, such
as using the numbers from a different font or substituting a single
missing character from another font. It seems like something that
would need to work right for Unicode ranges to fulfil their promise.
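For example, taking just the digits from a second face might look like the following (the family and file names here are hypothetical, just to illustrate the idea):

```css
/* Hypothetical names; the second rule applies only to the
   code points listed in its unicode-range descriptor. */
@font-face {
  font-family: "BodyText";
  src: url(main-face.ttf);
}
@font-face {
  font-family: "BodyText";
  src: url(fancy-digits.ttf);
  unicode-range: U+0030-0039; /* digits 0-9 only */
}
```

Whether things like kerning survive across the boundary between the two faces is exactly the open question Vladimir raises.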


>
>>> I am also stunned that you seem to be suggesting that web authors
>>> should
>>> go to such a great length and make extra efforts in handling fonts
>>
>> I really wonder if you are being genuine when you say things like
>> that. I see absolutely no reason whatsoever why the tool to split a
>> font in two, rename it, and add a little license info should be any
>> more difficult or less automated to use than a tool like WEFT for
>> creating EOT. It might even be a little easier, since it would also
>> generate the @font-face code block and maybe a sample font-family
> rule.
>>
>
> Brad, I am brutally honest with you here.
> I said what I said because I know how difficult this can be. For
> example, in Arabic each character [...] It gets a lot more complex  
> for South Asian languages, and even
> Latin-based languages may use fonts that support these advanced
> features.

Now see, this is why it looks to me as though you are trying to avoid
the question intentionally, in order to maintain the myth that my way,
or Daggett's way, would be so much more difficult for site authors than
EOT. I asked you about that specifically, restating the fault in that
logic (it has come up in several posts now, and not just from me),
and instead of addressing that point you re-answered an earlier one.
It really seems like a deliberate choice not to give a rational
response to a point you have yet to answer or explain: this insistence
that one tool would be more trouble to use than another, without any
rationale for why that would be, which on the face of it seems an
absurd position. So I'm sorry, but I have to question whether you are
really here to discuss the issues, or just to push an agenda.

> Back to my original point - why would you even want to make web  
> authors
> jump through all these hoops when a very simple conversion step  
> (such as
> compression) can be applied to make the font file not directly
> installable as a system font.

Please explain why you think there would be more hoops for a tool that  
does even what John Daggett described than there would be for using  
WEFT or similar.
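After all, the "very simple conversion step" of compression is itself just a one-off tool invocation, no more and no less automated than renaming or splitting would be. A minimal sketch (file handling omitted; this is gzip standing in for whatever compression scheme is meant):

```python
import gzip

def compress_font(raw: bytes) -> bytes:
    """The server-side 'simple conversion step': compress the raw
    font bytes so the file is not directly installable as-is."""
    return gzip.compress(raw)

def decompress_font(blob: bytes) -> bytes:
    """What the browser would do before handing the font to the
    OS font engine: reverse the conversion."""
    return gzip.decompress(blob)
```

An obfuscating tool would have exactly the same shape: bytes in, bytes out, run once when the font is put on the server.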

>
>>
>>> when
>>> targeted font compression seem to present much simpler solution -
>>> compress a font that is hosted on a server, and let browser
>> decompress
>>> it before it passes it on to the OS font engine. To me it seems as
>> the
>>> most straightforward and effortless solution, isn't it?
>>
>> Aside from the separate issue of compression, how is that easier than
>> using a tool to "obfuscate a font that is hosted on a server, and let
>> browser de-obfuscate it before it passes it on to the OS font  
>> engine"?
>>
>
> I didn't say "obfuscate", I said "compress".

I said 'obfuscate', and I deliberately echoed your sentence structure,
in order to show that what John Daggett and I are proposing
(obfuscation through font names and/or splitting a font in two) is no
more of a burden on Web authors than what you are proposing with EOT
or other wrapper formats.

> If you are referring to lightweight obfuscation proposed by Ascender -

I'm not. In this thread I have been talking about the merits of a tool
that could do what John Daggett described (which I would call
obfuscation plus communication of license restrictions), and possibly
going even further: rendering the font unusable without its paired
companion and a block of CSS describing the Unicode range to use.

Now it's possible that the part about splitting the font into two
halves may ultimately prove unworkable, but it is way off base to say
that running such a tool would be a more manual, less automated
process for the Web author. There is simply no evidence of that, nor
any reason to think it would be true.

> implementing this would be a piece of cake - you just look up a table
> directory in a font and substitute one specific tag by another  
> specific
> tag of the same length.

And what Daggett suggested would be a piece of cake once someone whips
together a tool to do it (someone here said a child could create such
a tool), and it has the added advantage of working immediately in
WebKit and Firefox 3.1 without having to add support for new formats,
wrappers, or EOT.
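And yes, the tag substitution you describe really is only a few lines. A minimal sketch in Python (assuming a standard sfnt table directory; the tag values in the comments are hypothetical, since the proposal doesn't specify them):

```python
import struct

def swap_table_tag(font_bytes: bytes, old_tag: bytes, new_tag: bytes) -> bytes:
    """Replace one 4-byte table-directory tag with another of the same
    length -- the 'substitute one specific tag by another' step.
    E.g. old_tag=b'name', new_tag=b'XXXX' (hypothetical choice)."""
    assert len(old_tag) == 4 and len(new_tag) == 4
    data = bytearray(font_bytes)
    # sfnt header: uint32 version, then uint16 numTables at offset 4.
    num_tables, = struct.unpack_from(">H", data, 4)
    # Table records start at offset 12; each is 16 bytes, tag first.
    for i in range(num_tables):
        rec = 12 + 16 * i
        if bytes(data[rec:rec + 4]) == old_tag:
            data[rec:rec + 4] = new_tag
    return bytes(data)
```

The browser-side step is the same call with the tags reversed, which is precisely why I don't see how this is any simpler for the author than a renaming/splitting tool.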


> The browser would then do the same (only in
> opposite direction). You do not need to analyze the content of the  
> font
> and do all other complex things you suggested in your previous post.

I presume that here you are referring to my comments about CORS, even
though in my later posts I really only meant that any lookup on the
font would occur once, when you were adding the font to the server and
transferring licensing info to the part of the server that generated
the CORS headers.

But that is quite separate from the discussion about the merits of  
what John Daggett suggested, or about what I suggested about splitting  
the font into two.



Received on Wednesday, 24 June 2009 01:25:04 UTC