[community-group] How should tools process higher fidelity values than they can handle internally? (#157)

c1rrus has just created a new issue for https://github.com/design-tokens/community-group:

== How should tools process higher fidelity values than they can handle internally? ==
### Background

The discussions in #137 have raised an interesting question: What is the expected behaviour of tools that only support "traditional" 24bit sRGB colors, when they encounter color tokens whose values have wider gamuts or higher bit depths than the tool can handle internally?

I think we will encounter variations of the same question for other types too. For example, how should a tool that only understands pixel dimensions deal with values expressed in `rem`? Or, how should a tool that only supports setting a single font family when styling text deal with a token that provides an array of font values? I suspect this kind of question could arise for new types that get added in future versions of the spec too.

I therefore think it would be a good idea for our spec to define some generalised rules around what the expected behaviour should be for tools whenever they encounter tokens that have higher fidelity values than they are able to process or produce internally.

### Requirements

I believe the overarching goal of our format is **interoperability**:

* Any tool that can _read_ tokens files must be able to successfully read _any_ valid tokens file and interpret all _relevant_ tokens as intended.
* Any tool that can _write_ tokens files must only write valid tokens files, so that they can be read by any other tool.

I intentionally say "_relevant_" tokens, as I believe it's perfectly acceptable for a tool to only operate on a subset of token types. For example, if we imagine a color palette generating tool like [Leonardo](https://leonardocolor.io/) added the ability to read tokens files, then I'd expect it to only surface color tokens to its users and just ignore any other kinds of tokens that might be in the file.

Therefore our spec needs to specify just enough for that to become possible. Any tool vendor should be able to read our spec and write code that can successfully read or write valid tokens files. Any human author should be able to read our spec and write or edit valid tokens files which will then work in any tool.

When we get down to the level of token values, I believe this means:

* Any tool that can read tokens files MUST be able to accept any valid value of relevant tokens and map that value to whatever its internal representation is

The question I'd like us to discuss in this issue is: What should tools do when their internal representation of token values has a lower fidelity than what is permitted in tokens files?

I don't believe "tool makers should improve their internal representation" is a viable option though. In my view, **interoperability is worthless without widespread adoption**. However, there are lots of _existing_ tools out there that could benefit from being able to read/write tokens files (e.g. UI design tools like Figma, Xd, Sketch, etc.; DS documentation tools like zeroheight, InVision DSM, Supernova, etc.; color palette generators like Leonardo, ColorBox, etc.; and so on). There's a good chance they each have very different ways of representing values like colors, dimensions, fonts, etc. internally. It wouldn't be reasonable for our spec to necessitate changing how their internals work, and we can't assume, even if they wanted to do so, that it would be quick or easy to achieve.

At the same time, I don't want our spec to become a lowest common denominator. That would reduce its usefulness to everyone. It might also lead to a proliferation of `$extensions` as a result of teams and tools working around limitations of the format. While I think having _some_ `$extensions` in use in the wild is healthy and could highlight areas future versions of the spec should focus on, having too many might lead to a situation where our standard format splinters into several incompatible de-facto standards, each supported by different subsets of tools. That would hurt interoperability and, IMHO, suck!

### Use-cases

Very broadly, I think tools that do stuff with tokens files can be divided into 3 categories:

* Tools that only write tokens files
* Tools that only read tokens files
* Tools that read and write tokens files

For the purpose of this issue, I think it's worth considering each case individually.

#### Write-only tools

If a tool internally only supports lower fidelity values than what can be expressed in the format, I don't see a problem. As long as every value those tools can produce can be accurately expressed in the DTCG format, I don't think it matters that there are other values that could be expressed in the format.

Furthermore, if our format mandates a particular syntax for the value, but the tool chooses to display or prompt for that value using an alternate syntax, that's not a problem. Converting between equivalent syntaxes is easy to implement, so I _do_ believe it's acceptable to expect tool makers to convert values where needed when writing them.

**Color example**
A UI design tool internally only supports "traditional" 24bit RGB colors in the sRGB color space. The user defines a color token in that tool - e.g. via a color picker, or by typing in RGB values - and then wants to export that to a `.tokens` file.

_If_ our spec also supported other color spaces and/or color depths (note: the current 2nd editors draft does not), that tool could still save out the exact color the user chose.
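For illustration, a sketch of that export step might look like the following (TypeScript; it assumes the hex string value syntax and `$type`/`$value` properties from the current editors' draft, and names like `InternalColor` and `exportColorToken` are made up for this example):

```ts
// Minimal sketch: exporting a tool's internal 24-bit sRGB color as a
// color token. The names here are hypothetical, not part of the spec.
interface InternalColor {
  r: number; // 0–255
  g: number; // 0–255
  b: number; // 0–255
}

function toHex(channel: number): string {
  return channel.toString(16).padStart(2, "0");
}

function exportColorToken(name: string, color: InternalColor) {
  return {
    [name]: {
      $type: "color",
      $value: `#${toHex(color.r)}${toHex(color.g)}${toHex(color.b)}`,
    },
  };
}

// e.g. exportColorToken("brand-primary", { r: 0, g: 102, b: 204 })
// → { "brand-primary": { "$type": "color", "$value": "#0066cc" } }
```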

**Dimension example**
A modular scale generator only supports generating (viewport) pixel values. The user sets a base size and multiplier, and the tool generates a set of spacing values for them. The user wants to save out that spacing scale to a `.tokens` file.

The format supports `px` values, so those values can be accurately saved out. The fact that the format also supports `rem` values is irrelevant in this use-case.
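As a rough sketch of that write-only flow (TypeScript; the `spacing-*` token names and the group structure are just illustrative):

```ts
// Minimal sketch: a modular-scale generator that only knows about px
// values exporting its output as dimension tokens.
function exportSpacingScale(base: number, multiplier: number, steps: number) {
  const group: Record<string, { $type: string; $value: string }> = {};
  for (let i = 0; i < steps; i++) {
    const value = Math.round(base * Math.pow(multiplier, i));
    group[`spacing-${i + 1}`] = { $type: "dimension", $value: `${value}px` };
  }
  return { spacing: group };
}

// e.g. exportSpacingScale(4, 2, 4)
// → spacing-1: 4px, spacing-2: 8px, spacing-3: 16px, spacing-4: 32px
```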


#### Read-only tools

If a tool can only read tokens from a `.tokens` file, to then be used within that tool, but internally it only supports a lower fidelity than what can be expressed in the DTCG format, then the following situations may occur:

* The tokens have values that map 1:1 to something the tool can represent internally
* The tokens have values that exceed what the tool can represent internally

In the first case, there is no issue - the tool can just use the original value as is. In the second case, the tool should convert the original token value to the closest approximation that it can handle internally.

Theoretically the tool could reject the token too, but I think our spec should disallow that. If a file contains N relevant tokens, I think it's reasonable to expect that all N tokens can be used by that tool. However, where the tool needs to do some kind of lossy conversion of the values, I think tools should be encouraged to notify the user. E.g. they might display a warning message or equivalent to indicate that approximations of some tokens' values are being used.

**Color example**
A UI design tool internally only supports "traditional" 24bit RGB colors in the sRGB color space. The user loads a `.tokens` file that contains some color tokens whose values have been defined in a different color space and are out of gamut for sRGB.

In this case the tool should perform a lossy conversion of those colors to their nearest equivalents in the sRGB space that it supports. It's up to the tool maker to decide _when_ that conversion takes place. It could happen as the file is loaded - all out of gamut colors are converted at that point and that's what the tool uses thereafter. Alternatively, if it makes sense for that tool's internal implementation, it could preserve the original value from the token file but convert it on the fly whenever that value is used or displayed in the tool.

Either way though, the tool should try to inform the user what has happened. For example, when the `.tokens` file is first loaded, it might display a message saying that tokens X, Y and Z had out-of-gamut values and have been converted to their closest equivalents.
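For illustration only, the crudest possible version of that conversion is simple per-channel clamping (TypeScript sketch; a real tool would more likely use a perceptual gamut-mapping approach, and the input shape here is a placeholder since the spec doesn't yet define a wider-gamut color syntax):

```ts
// Crude sketch: clamping a color expressed as floating-point sRGB
// channels (possibly outside 0–1 after conversion from a wider space)
// into the 24-bit sRGB range a tool supports internally.
interface FloatColor {
  r: number;
  g: number;
  b: number;
}

function clampToSRGB24(color: FloatColor) {
  const clamp = (v: number) => Math.min(1, Math.max(0, v));
  const wasLossy = [color.r, color.g, color.b].some((v) => v < 0 || v > 1);
  return {
    r: Math.round(clamp(color.r) * 255),
    g: Math.round(clamp(color.g) * 255),
    b: Math.round(clamp(color.b) * 255),
    wasLossy, // lets the tool warn the user that an approximation was used
  };
}
```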

**Dimension example**
A UI design tool internally only supports (viewport) pixel values when setting dimensions (e.g. widths, heights, coordinates, border thicknesses, font sizes, etc.). The user loads a `.tokens` file that contains some dimension tokens whose values have been defined as `rem` values.

Since the tool lacks the concept of dimensions that are relative to an end-user's default font size settings, it needs to perform a lossy conversion of those rem values to appropriate, absolute pixel values. Since most web browsers' default font size is 16px, converting N rems to 16 * N px is likely to be an appropriate method to use. The token values are converted and thereafter the user only sees the corresponding px values in the tool. As with the color example, _when_ that conversion happens is up to the tool maker.

Again, the tool should try to inform the user what has happened. For example, when the `.tokens` file is first loaded, it might display a message saying that tokens X, Y and Z used rem values and they have been converted to pixels by using an assumed default font size of 16px.
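A minimal sketch of that conversion, assuming a 16px default root font size (TypeScript; the warning-collection approach is just one way a tool might surface the notification mentioned above):

```ts
// Minimal sketch: converting rem dimension values to px for a tool
// that only understands pixels, assuming a 16px default root font size.
const ASSUMED_ROOT_FONT_SIZE_PX = 16;

function dimensionToPx(tokenName: string, value: string, warnings: string[]): number {
  if (value.endsWith("rem")) {
    const rems = parseFloat(value);
    warnings.push(
      `${tokenName}: "${value}" converted to px assuming a ${ASSUMED_ROOT_FONT_SIZE_PX}px default font size`
    );
    return rems * ASSUMED_ROOT_FONT_SIZE_PX;
  }
  if (value.endsWith("px")) {
    return parseFloat(value);
  }
  throw new Error(`Unsupported dimension value: ${value}`);
}
```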


#### Read and write tools

This is a special case because such tools may be used to read tokens from a file, manipulate that set of tokens somehow and then write the result back out. The following edge cases therefore need to be considered:

Imagine a `.tokens` file contains design tokens A, B and C. These tokens have higher fidelity values than the tool can handle internally. Consider these use-cases:

1. The tool loads that file, the user then adds a new token, D, via the tool, and then saves out the full set of tokens (A, B, C and D).
2. The tool loads that file, the user then deletes token C, and saves out the remaining tokens (A and B).
3. The tool loads that file, the user modifies the value of token A in the tool, and then saves out the full set of tokens (A (with its new value), B and C).

Should the values of the tokens which the user has not touched (for example the tokens A, B and C in the first case) still have their original (high fidelity) values, or is it acceptable for them to have been replaced by their nearest lossy equivalents?

The latter is probably easier for tool vendors to handle. If they follow the rules I outlined in the "Read-only tools" section above, then they will have done a lossy conversion when importing the token values into the tool's internal representation. When that is later saved out, the original high-fidelity value has been lost so, as per the "Write-only tools" rules, those lossy values are saved out.

However, I think this is sub-optimal from the user's perspective. If they never edited a token in the tool, it feels wrong for some lossy conversion to have been applied to those tokens' values "behind the user's back". Furthermore, if we take the view that design tokens represent design decisions, one could argue that the tool is changing those decisions without the user's consent.

Btw, a related scenario is tools which only operate on certain token types. Imagine a `.tokens` file that contains design tokens X, Y and Z. X is of type `color`, Y is of type `cubicBezier` and Z is of type `fontFamily`. The user loads the token file into a tool for creating and editing animation timing functions. Only token Y is relevant to that tool, so it ignores tokens X and Z and never displays them to the user anywhere in its UI. Consider the same kinds of use-cases as above - the user adds another `cubicBezier` token and saves it back to the tokens file, or the user edits the value of token Y and saves it back to the tokens file.

Should tokens X and Z still be present in the file? I'd argue yes. I think it would be confusing to users if those tokens just vanished when, from their perspective, all they were doing was using a specialised tool to tweak the `cubicBezier` tokens.

Therefore, I think tools that read _and_ write tokens files need to have the following behaviour in addition to the read-only and write-only rules outlined in the previous sections:

- When reading a `.tokens` file, the tool must keep a copy of all tokens (regardless of whether they are relevant or not to that tool) along with their _original_ values (even if those are higher fidelity than what the tool can handle internally).
- It can perform lossy conversions on relevant tokens as needed (as per the "Read-only tools" rules) and present only those converted values to users within the application. However, it should keep track of whether or not each token's value was modified by the user.
- If the user deletes a token, the copy of that token's original value should also be discarded.
- When writing a `.tokens` file, the tool must write out the full set of tokens. For each token:
    - If the value was modified, the new value set via the tool should be exported as per the "Write-only tools" rules
    - If the value was not modified, the original value should be exported
    - If it is a new token created in the tool, its value should be exported as per the "Write-only tools" rules
    
   
While this will add some complexity for tool makers, I believe this kind of functionality should be achievable without needing to drastically change the internals of the tool. The copies of unused tokens and original values could be kept "outside" of the tool's existing internals. The tool would just need to maintain some kind of mapping between its internal values and the corresponding "originals".
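To make that concrete, here is one possible shape for such a mapping (TypeScript sketch; the type names and the `modified`/`deleted` flags are purely illustrative, not something the spec would mandate):

```ts
// Sketch of one way a read/write tool could preserve original values:
// keep every token from the loaded file alongside its original value,
// and only replace that value on export if the user actually changed it.
interface TrackedToken {
  original: unknown;   // the value exactly as it appeared in the file
  internal?: unknown;  // the tool's (possibly lossy) representation, if relevant
  modified: boolean;   // true once the user edits the token in the tool
  deleted: boolean;    // true once the user deletes the token in the tool
}

class TokenStore {
  private tokens = new Map<string, TrackedToken>();

  load(path: string, original: unknown, internal?: unknown) {
    this.tokens.set(path, { original, internal, modified: false, deleted: false });
  }

  edit(path: string, newInternalValue: unknown) {
    const t = this.tokens.get(path);
    if (t) {
      t.internal = newInternalValue;
      t.modified = true;
    }
  }

  remove(path: string) {
    const t = this.tokens.get(path);
    if (t) t.deleted = true;
  }

  addNew(path: string, internalValue: unknown) {
    this.tokens.set(path, {
      original: undefined,
      internal: internalValue,
      modified: true,
      deleted: false,
    });
  }

  // On export: untouched tokens keep their original value; edited or new
  // tokens are written from the tool's internal representation (converted
  // as per the "Write-only tools" rules); deleted tokens are omitted.
  exportValue(path: string): unknown {
    const t = this.tokens.get(path);
    if (!t || t.deleted) return undefined;
    return t.modified ? t.internal : t.original;
  }
}
```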


What do you all think?

Please view or discuss this issue at https://github.com/design-tokens/community-group/issues/157 using your GitHub account


-- 
Sent via github-notify-ml as configured in https://github.com/w3c/github-notify-ml-config

Received on Wednesday, 6 July 2022 10:45:53 UTC