
Re: [csswg-drafts] [css-fonts] Proposal to extend CSS font-optical-sizing (#4430)

From: Rob McKaughan via GitHub <sysbot+gh@w3.org>
Date: Thu, 24 Oct 2019 20:06:15 +0000
To: public-css-archive@w3.org
Message-ID: <issue_comment.created-546080946-1571947574-sysbot+gh@w3.org>
The Windows rendering system, at its most fundamental level, assumes 96 pixels per inch. More accurately: it has the concept of 96 Device Independent Pixels (DIPs) per physical inch.

When Windows boots, it queries the EDID from each display to get its pixel counts and physical size, and computes the physical ppi for the device (96ppi is the floor, so projectors and giant displays are treated as 96ppi). It then computes a scale level between the physical ppi and 96 DIPs. If your monitor is 96ppi, you'll be running at a scale of 100%; if 144ppi, then 150%. The scale factor is not continuous (e.g. you can't do 107.256%) but is a step function over a set of fixed levels (e.g. 100%, 125%, 150%, ...), because applications contain a lot of bitmap UI assets that can't support arbitrary scaling. The user can change this scale factor at any time in the system display settings.
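The scale-selection step described above can be sketched roughly as follows. This is a simplified illustration, not the actual Windows algorithm: the set of scale levels and the nearest-level snapping rule are my assumptions.

```typescript
// Illustrative subset of the fixed scale levels Windows exposes.
const SCALE_LEVELS = [100, 125, 150, 175, 200, 225, 250, 300];

// Physical ppi from EDID-reported pixel count and physical size,
// floored at 96ppi as described above.
function physicalPpi(pixelWidth: number, physicalWidthInches: number): number {
  return Math.max(96, pixelWidth / physicalWidthInches);
}

// Snap the raw ratio (ppi / 96) to the nearest fixed scale level.
function snapScale(ppi: number): number {
  const raw = (ppi / 96) * 100;
  return SCALE_LEVELS.reduce((best, level) =>
    Math.abs(level - raw) < Math.abs(best - raw) ? level : best
  );
}
```

For example, a 144ppi panel yields a raw ratio of 150%, which lands exactly on the 150% step; an in-between panel snaps to the closest available level rather than scaling continuously.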

The nutshell of this is: Windows makes a best effort to have 96 DIPs span one physical inch, but it's not always precisely that. The rest of the rendering stack is built on that assumption.

So, for Windows we can assume the following:

1. There are 96DIPs in an inch (best effort).
2. 72 points in the API = 96DIPs
3. The OS and applications either use API points or DIPs. (I believe browsers generally use DIPs as that's what DirectX is built around). 
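Points 1 and 2 above amount to a fixed 96:72 ratio between DIPs and API points, which can be captured in two one-line conversions (a sketch; the function names are mine, not a Windows API):

```typescript
// 72 API points = 96 DIPs (point 2 above), so 1 point = 4/3 DIP.
function pointsToDips(points: number): number {
  return points * (96 / 72);
}

// The inverse: 1 DIP = 3/4 point.
function dipsToPoints(dips: number): number {
  return dips * (72 / 96);
}
```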

So, since 72 CSS points = 96 CSS pixels, and (on Windows) 1 CSS pixel = 1 DIP and 96 DIPs = 72 API points ~= 72 typographic points (best effort), it is reasonable for browsers on Windows to use CSS points (or, equivalently, 3/4 of the CSS pixel value) for opsz.

But, actually, all this fuss to get close to 1/72 of an inch on a physical ruler misses a key point:

As I pointed out back in #807, opsz should vary based on the _document type size_, not the rendered type size. There are a lot of legitimate reasons for the rendered size to be completely different from the opsz (e.g. severely vision-impaired users may have text rendered at 3 inches high on screen, yet they absolutely need all the legibility features built into opsz=12pt). But opsz should remain consistent within a document.

So, it makes more sense to use the units of the document. For HTML, that's CSS pixels and points (which are already defined in a 96:72 ratio). How those pixels and points map to typographic points, DIPs, or whatever unit system a given OS uses is, I believe, not relevant, as it's outside the context of the document. Within the context of an HTML document, there are only CSS pixels and CSS points. I recommend that opsz be set to CSS points as, within the context of the doc, that's the most relevant measure.
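Under that recommendation, a browser would derive opsz directly from the document's CSS font-size, ignoring device scaling and zoom entirely. A hypothetical sketch (the function name is mine, not a proposed API):

```typescript
// CSS defines 96 CSS px = 72 CSS pt, so opsz in points is px * 3/4.
// Note this depends only on the document's declared size, not on the
// OS scale factor or browser zoom, which keeps it platform-independent.
function opszFromCssPx(fontSizeCssPx: number): number {
  return fontSizeCssPx * (72 / 96);
}
```

For example, default 16px body text would get opsz=12 whether it is rendered on a 100% or 300% scaled display, or magnified for accessibility.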

That document-centric view may also help avoid platform-dependent mapping issues. If browsers relied on underlying OS rules (e.g. 1 CSS pixel = 1 pixel = 1 typographic point; or 96 CSS pixels = 96 DIPs = 72 typographic points), then rendering would either end up platform-dependent, which I don't think anyone wants, or force all browsers to adopt the rules of a single platform, which I believe is the current state (if I'm not misinterpreting). The document-centric view keeps rendering consistent within the document, supports accessibility and other scenarios, and keeps everything independent of underlying platform assumptions.

/@gr3gh



-- 
GitHub Notification of comment by robmck-ms
Please view or discuss this issue at https://github.com/w3c/csswg-drafts/issues/4430#issuecomment-546080946 using your GitHub account
Received on Thursday, 24 October 2019 20:06:17 UTC
