- From: Matthew Ström via GitHub <sysbot+gh@w3.org>
- Date: Mon, 20 Jul 2020 16:34:37 +0000
- To: public-design-tokens-log@w3.org
👋 I'm Matt, a designer and developer. I've been exploring how design tokens can be stored, transformed, and accessed in ways that maximize their utility. You can read that work [here](https://matthewstrom.com/writing/design-api-in-practice/).

It might be useful to separate some of the individual aspects being discussed (typing, uniqueness, aliasing, order dependency) from specific language specs and interpreters (CSS, JSON, SASS). That separation will help clarify the discussion and avoid wrestling with the complexity of existing implementations. I'll try to kick off some of those topics below, though as we get into some of the more "pure" concepts, my understanding gets a little fuzzier, so please correct me if I'm using these terms incorrectly.

## High-level structure

I personally think of a design token as a **key-value pair**, meaning it consists of two parts: one part is used for reference (looking up the token, talking about the token), and the other is used for application (indicating the color to be used, the font family, the border radius).

Is that the mental model that y'all use, too? Might be nice to just put a checkmark in this box :)

## Typing

Reading through all the great conversations happening here, it's clear that typing is very important. Not only is it key to a human-readable and human-writable format, but it will also have a big impact on the machines/code that read and write the tokens. The main question around typing: **Should tokens be strongly typed, weakly typed, or not typed at all?**

**Strongly typed:** this might involve defining types as part of the spec. A token is only well-formed if its types are declared and validated at compile time. This makes tokens a little harder to write, but has performance benefits for the programs that utilize them.

**Weakly typed:** this puts the burden of type-checking on the interpreter.
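To make the weak-typing option concrete, here's a minimal TypeScript sketch of an interpreter that validates values as it reads them. The `Token` shape, the type names, and the validators here are all hypothetical illustrations, not taken from any existing spec or tool:

```typescript
// Hypothetical token shape: a name, a declared type, and a raw value.
// Under weak typing, any well-formed file is accepted; each value is
// checked against its declared type at read time, not at compile time.
type TokenType = "color" | "dimension" | "fontFamily";

interface Token {
  name: string;
  type: TokenType;
  value: string;
}

// Per-type validators the interpreter runs before using a token.
const validators: Record<TokenType, (value: string) => boolean> = {
  color: (v) => /^#[0-9a-fA-F]{6}$/.test(v),
  dimension: (v) => /^\d+(\.\d+)?(px|rem|em)$/.test(v),
  fontFamily: (v) => v.trim().length > 0,
};

// Returns the value if it passes, or throws with a useful message.
// This is the "extra work" applications take on under weak typing.
function readToken(token: Token): string {
  if (!validators[token.type](token.value)) {
    throw new Error(
      `Token "${token.name}" has an invalid ${token.type}: ${token.value}`
    );
  }
  return token.value;
}
```

With this shape, `readToken({ name: "purple-50", type: "color", value: "#6f42c1" })` hands back the value, while a malformed value like `"purple"` fails at read time rather than at authoring time.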
Tokens are easier to write, but applications have to do some extra work to check types before utilizing the tokens.

**Not typed:** this is some deep theory stuff that I don't understand very well.

## Uniqueness

- Should it be safe to assume that a given token is defined once and only once?
- If **no (i.e., a token might be defined more than once)**, should it be safe to assume that the two values are the same?

Some analogies here: in JS, I can't define a `const` more than once; ECMAScript defined this rule to help interpreters be a bit more efficient. In CSS, by contrast, I can define a rule (like `.token {}`) over and over again.

What are the use cases for defining a token more than once? What kinds of complications would that introduce for the humans who write and maintain the code, and for the machines that have to correctly interpret these definitions?

## Aliasing

I've found quite a few use cases for aliasing in writing or using tokens; sometimes it's a lot more convenient to think about the `button-background` token than the `purple-50` token. However, there are some tradeoffs that come with writing aliases into the spec. For instance, what do we do about circular references?

## Order dependency

Some current implementations of design tokens produce order-independent token files: Theo and Style Dictionary both work with JSON and YAML, which are essentially associative arrays. Earlier in the conversation, folks have mentioned some operations and use cases that might be order-dependent, like overrides and functions.

---

I think there's a ton of experience we can draw on from the history of other specs and how they were adopted over time, but ultimately it's the answers to these very core questions that will inform the shape and scope of the format specification.

--
GitHub Notification of comment by ilikescience
Please view or discuss this issue at https://github.com/design-tokens/community-group/issues/1#issuecomment-661162844 using your GitHub account
Received on Monday, 20 July 2020 16:34:39 UTC