- From: Leo Barnes <lbarnes@apple.com>
- Date: Fri, 22 Nov 2024 09:35:43 +0100
- To: "Chris Blume (ProgramMax)" <programmax@gmail.com>
- Cc: jbowler@acm.org, "Portable Network Graphics (PNG) Working Group" <public-png@w3.org>
- Message-id: <DB3B049D-6D00-435C-852D-B8D0DFE024BB@apple.com>
> On 22 Nov 2024, at 01:46, Chris Blume (ProgramMax) <programmax@gmail.com> wrote:
>
> There might be a fundamental problem that we're dancing around.
> None of the details matter if we can't iron out the fundamental problem.
> But if I get real nitpicky as I read the docs, the problem might actually be fine. And standards work is all about reading in a nitpicky way, so this might be fine.
>
> I mentioned earlier that the charter forces us to not break existing editors & viewers <https://www.w3.org/Graphics/PNG/png-2023.html>. There are plenty of things we could (and really want to) fix but can't because of this.
> However, the charter specifically says "...editors, or viewers that conform to [the previous spec]."
> So technically, if an implementation does not conform to the spec, we're allowed to break it. (Said another way, we just can't break old specs.)
>
> libpng's cHRM implementation calls png_get_fixed_point() <https://github.com/pnggroup/libpng/blob/c1cc0f3f4c3d4abd11ca68c59446a29ff6f95003/pngrutil.c#L1281>, which returns PNG_FIXED_ERROR if the high bit is set <https://github.com/pnggroup/libpng/blob/c1cc0f3f4c3d4abd11ca68c59446a29ff6f95003/pngrutil.c#L59>. It does not return the value.
> So even if we change cHRM to allow signed values, existing libpng implementations will break.
> That is the fundamental issue.
>
> (Same for lodepng and spng, but they break in a different way.)
>
> HOWEVER, here is where the nitpicky reading might help us.
> The current spec wording says the range is limited to [31 bits] <https://w3c.github.io/png/#dfn-png-four-byte-unsigned-integer>. That *could* mean the high bit is zero, which keeps the range of the 32-bit value within 31 bits. Or it could mean that only 31 bits are used. Given this note in that section, I think there's a reasonably strong indication that the former is implied:
>
>> The restriction is imposed in order to accommodate languages that have difficulty with unsigned four-byte values.
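The high-bit rejection described above can be sketched roughly as follows. This is not libpng's actual code; the function name and the `FIXED_ERROR` sentinel are illustrative stand-ins for the behaviour of `png_get_fixed_point()`:

```c
#include <stdint.h>

/* Illustrative sentinel, standing in for libpng's PNG_FIXED_ERROR. */
#define FIXED_ERROR (-1)

/* Read a PNG four-byte big-endian value; reject it if the high bit
 * is set, as current libpng does for cHRM fields. Since any accepted
 * value fits in 31 bits, -1 is unambiguous as an error sentinel. */
static int32_t read_png_fixed(const unsigned char buf[4])
{
    uint32_t v = ((uint32_t)buf[0] << 24) | ((uint32_t)buf[1] << 16) |
                 ((uint32_t)buf[2] << 8)  |  (uint32_t)buf[3];
    if (v & 0x80000000u)
        return FIXED_ERROR;   /* high bit set: value rejected, never returned */
    return (int32_t)v;
}
```

So a stored chromaticity of 0.71300 (71300 in PNG's 5-decimal fixed point, bytes 00 01 16 84) decodes fine, but any chunk whose leading byte has the top bit set is refused outright rather than interpreted.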
If the highest bit was non-zero, you would hit the issue that the restriction is trying to work around.

> If we interpret it as the latter, the high bit's value is unspecified by the standard, and a conforming implementation would ignore it (whether a zero or a one).
> libpng, lodepng, and spng all assume (or enforce) that the high bit is zero. That would mean they are non-conforming if we use the second interpretation. Which would mean we can break them and allow this change!
>
> But, I anticipate that second interpretation would be a difficult pitch.
> libpng, lodepng, and spng all assume the high bit is zero, AND a very valid interpretation is that the 32-bit number simply has a range restriction (which is probably the more likely interpretation).
> At a minimum, we would need to face a strong argument that there is an established interpretation.

MPEG often deals with changes like this. If I treat the PNG spec like I would treat the HEIF spec, for example, I would argue it like this:

Given the existing wording in the PNG spec, I would consider the highest bit in cHRM to be a reserved bit. Its value shall be zero (which is enforced by current implementations) and parsers shall reject files where it's not zero. But it's perfectly acceptable for future versions of the spec to assign a new meaning to the reserved bit being one. Existing implementations will reject these files as not conforming to the current version of the spec, but that is fine; this is a new feature, after all. Writers that want to make use of it need to take care that older implementations may reject their files, but this is true of all new features.

I again consider CICP to be a good example. It's full of reserved values that we expect will be defined in a future spec. Parsers that don't understand them should ideally not try to display the file, since it could look extremely wrong if the CICP is ignored.
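For contrast, the change under discussion amounts to decoding those same four bytes as a signed two's-complement value so that cHRM can carry negatives. A minimal sketch (the function name is ours; the portable conversion avoids relying on implementation-defined casts for out-of-range values):

```c
#include <stdint.h>

/* Decode a PNG four-byte big-endian field as signed two's complement,
 * which is what allowing negative cHRM fixed-point values would mean. */
static int32_t read_png_signed_fixed(const unsigned char buf[4])
{
    uint32_t v = ((uint32_t)buf[0] << 24) | ((uint32_t)buf[1] << 16) |
                 ((uint32_t)buf[2] << 8)  |  (uint32_t)buf[3];
    if (v <= (uint32_t)INT32_MAX)
        return (int32_t)v;                 /* high bit clear: unchanged meaning */
    return -(int32_t)(UINT32_MAX - v) - 1; /* high bit set: negative value */
}
```

Note that every value accepted today (high bit zero) decodes to exactly the same number under this scheme; only the currently-rejected high-bit-set encodings gain a meaning.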
To me, not breaking backwards compatibility comes down to the following:

1. Existing files that were compliant according to the old version of the spec shall continue to be considered compliant in the new version of the spec.
2. Existing parsers that were compliant according to the old version of the spec shall continue to be considered compliant in the new version of the spec, for files that are compliant to the old version of the spec. Files only compliant to the new version of the spec don't really count.

Or to really summarize: you want existing files to continue working the same as they always have, and new files only using old features to work in existing implementations.

Cheers,
//Leo

> I'll try to talk to some higher-ups at W3C about the flexibility of "break". I've been meaning to, anyway.
> (I know this sounds silly, but it actually isn't that clear. New CSS specs come out with additions that an old browser cannot display. This might be okay in CSS-land because it silently ignores things it doesn't know. But then what about new ECMAScript specs? An old browser's JS engine would stop execution when it encountered 'await' without understanding it. So maybe "break" just means old content continues to work on old programs? Maybe there is flexibility?)
>
> On Thu, Nov 21, 2024 at 2:22 PM John Bowler <john.cunningham.bowler@gmail.com <mailto:john.cunningham.bowler@gmail.com>> wrote:
>> > PNG decoders use unsigned 32-bit and just assume the high bit will not be set. They would still see a large value.
>>
>> _And they will continue to see it_. Such PNG cHRM chunks already exist; any black-hat worth their state pension has already produced lots of them. What is more, the fuzzers have too; I know because the people who operate the libpng fuzzer have detected issues in libpng this way. Anyone with a binary editor can do this (search for "cHRM", set the following byte to 128...)
>> Notice that I'm not talking about "ACES"; I'm talking about the chromaticities ACES uses. I'm using these chromaticities as **examples** because they are in use in the real world, and one of them, ACES AP1 (for ACEScg), is used in PNG files. I had to fix a bug in libpng 1.6 a couple of months ago which erroneously rejected it; AP1 produces a perfectly valid cHRM, but it has a negative "z" for the "red" end point. ACES AP0 is used in ACES 2065-1, which is a candidate for cICP, but AP0 is just a cHRM chunk!
>>
>> That said, what this means is that all existing implementations that handle valid PNGs have to handle negative numbers in at least "z". Skia, for example, does seem to invert the cHRM to retrieve the CIEXYZ (from CIExyY with Y=1) using floating point, and libpng provides an API to do so which is implemented in signed 32-bit fixed point (decimal, 5dp). The negative values mean any conformant PNG decoder which handles cHRM must already handle negative values even if the (x,y) alone are all positive; if it doesn't, that's a CVE.
>>
>> Notice that a "PNG decoder" is the whole system which decodes a PNG stream into the canonical data. Since there seem to be a lot of TV guys here, I'd like to point out that the decoder includes the TV. The same applies to print operations; PNG is a perfectly acceptable format in the data an app sends to a print device. The same applies to 3D ray tracers; PNG is frequently used to encode the multiple bitmaps used for "texture mapping", including colour bitmaps for reflection and transmission (sometimes more than one for each case!). Such software frequently uses PNG as an output format, and sometimes this is not tone-mapped; ACEScg is then one encoding (including the ACES AP1 profile) which might be used and which will generate an end point (red) with a negative Z.
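The negative "z" mentioned above falls straight out of the arithmetic: for CIE xyY with Y = 1, z = 1 - x - y. A quick check in PNG's 5-decimal fixed point (the helper name is ours; the AP1 chromaticities are from the ACES specification, red (0.713, 0.293), green (0.165, 0.830)):

```c
/* z = 1 - x - y, computed in PNG fixed point (real value * 100000). */
static long z_fixed(long x, long y)
{
    return 100000L - x - y;
}
```

The AP1 red primary gives z = 1 - 0.713 - 0.293 = -0.006, i.e. -600 in fixed point: a negative value produced by a cHRM chunk whose stored (x,y) fields are themselves perfectly ordinary positive numbers.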
>> It is very easy to incorporate broken PNG files into documents; look at the "corrupted" files page in the PNG test suite: http://www.schaik.com/pngsuite/ Nothing changes if a meaning is defined for a broken file until an app writer changes their code, and that suggested definition is more likely to fix something than break it!
>>
>> It is also worth noting that the ICC profile specification (current [https://www.color.org/specification/ICC.1-2022-05.pdf] and all previous versions) has always allowed negative values for the **illuminant**, even though it is mandated to be D50 (maybe they changed that recently)! In general the ICC allows XYZ values to be negative; see the definition of XYZNumber in 4.14 and the prior definition of s15Fixed16Number in 4.6. Chromaticity values are limited to the range [0,2) but are used for real colours which describe real-world devices (I think) - similar to mDCv, which has a range of [0,1.3107]. PNG fixed point has a range of approximately [-21474.8, +21474.8] and ICC s15Fixed16Number has a range of [-32768, 32768), so slightly larger than PNG fixed point.
>>
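The two fixed-point formats compared above differ only in their scale factor: ICC s15Fixed16Number uses 16 binary fraction bits (divide by 65536), while PNG uses 5 decimal places (divide by 100000), which is where the respective ranges [-32768, 32768) and roughly +/-21474.8 come from. A minimal sketch, with helper names of our own choosing:

```c
#include <stdint.h>

/* ICC s15Fixed16Number: signed 32-bit, 16 fractional bits. */
static double s15f16_to_double(int32_t v)
{
    return v / 65536.0;     /* full range: [-32768, 32768) */
}

/* PNG 5-decimal fixed point, as used by cHRM and gAMA. */
static double png_fixed_to_double(int32_t v)
{
    return v / 100000.0;    /* full signed range: about +/-21474.83647 */
}
```

For example, 0x00010000 is exactly 1.0 as an s15Fixed16Number, and a PNG fixed-point value of -600 is the -0.006 "z" discussed earlier, comfortably inside both ranges.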
Received on Friday, 22 November 2024 08:36:24 UTC