Dark Fraction Calculator | New Semantic Automata Chair | Agentic AI Committee Focus

Happy Friday!

Attaching the working paper Jacek and I just finished (Dark Fractions v1),
plus a link to the calculator so you can play with the numbers yourself.

*Short version:* we have a gauge. A general, scalable measure of how much
of a boundary's configuration space is unverifiable. This is the first
piece of the protocol: not a fix, just an instrument for measuring, in a
general way, whether knowledge components are equivalent.

The protocol asks a single question of every variable at a boundary: how
sufficient is our mutual understanding?

Because it treats the who & what at the boundary as unknown rather than
assumed, we call this protocol *Liquid*.

Decision theory and information theory make the same assumption: a shared
codebook, a common understanding of the concepts in messages. In
telegraph-based communication, the sender and receiver share the meaning of
the symbols and are trained to the same standard.

In agentic AI systems operating in and with human workflows across tribal
systems, this assumption is dangerous, precisely because it is *dark* unless
specified and managed. We do not share the same codebook; we mean very
different things by the same word from our personal perspectives.

Those assumptions hold when humans set the context; they break when Agentic
AI fills in the gaps automatically and silently. We call those
uncertainties "Dark" because they sit outside the model. *The Dark Fraction
Theorem* measures how much of the boundary they occupy.

*We now provide a Dark Fraction Calculator, and it is general:*
• *Meaning:* what the value refers to
• *Structure:* how it's encoded
• *Context:* the conditions under which Meaning and Structure hold

Data, the raw value, is the fourth facet but crosses the boundary directly
— nothing to register.

*Calculator*:
https://w3c-context-graph-community-group.github.io/dark_fraction/calculator/

Each facet either matches or doesn't between sender and receiver. That
binary comparison is what turns a boundary into geometry. For m variables,
the configuration space is a Hamming cube of 2^(3m) points, and three
states fall out:

• *Null uncertainty:* no facets registered. The axes don't exist. You can't
even ask the question.
• *Dark uncertainty:* axes exist, positions unknown. The system is
somewhere in the cube but doesn't know where.
• *Collapsed uncertainty:* every facet verified. A single known point.
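A minimal Python sketch of that geometry. The data model here (facet
counts per boundary, the `classify` helper) is my own illustration, not
code from the paper or the calculator:

```python
FACETS = ("meaning", "structure", "context")  # the three registered facets

def cube_size(m: int) -> int:
    """Configuration-space size for m variables: a Hamming cube of 2^(3m) points."""
    return 2 ** (3 * m)

def classify(registered: int, verified: int, m: int) -> str:
    """Map counts of registered/verified facets (out of 3m) to the three states."""
    total = 3 * m
    if registered == 0:
        return "null"        # axes don't exist; the question can't be asked
    if verified == total:
        return "collapsed"   # every facet verified: a single known point
    return "dark"            # axes exist, position in the cube unknown

print(cube_size(4))          # 4 variables -> 2^12 = 4096 points
print(classify(6, 2, 4))     # some facets registered, few verified -> dark
```

Even a modest boundary of 4 variables already spans thousands of
configurations, which is why per-facet bookkeeping matters.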

The Dark Fraction δ is the share of the cube that no within-boundary
diagnostic can reach. It's computable exactly from m (variables) and r
(verified facets). Two things to notice:

1. Verification covers a Hamming ball — volume grows polynomially.
2. The cube grows exponentially.

So δ → 1 as you add variables, for any fixed verification budget. Scale
guarantees degradation. That's the core prediction the paper makes — and
the reason a within-boundary gauge matters.
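The scaling argument can be made concrete with a short Python sketch. One
caveat: the email doesn't reproduce the paper's exact formula, so this
assumes the simplest reading of the two points above, namely that a
verification budget r covers a Hamming ball of radius r in the n = 3m
cube, and that δ is the share of the cube outside that ball:

```python
from math import comb

def hamming_ball_volume(n: int, r: int) -> int:
    """Points within Hamming distance r of a point in the n-cube (polynomial in n)."""
    return sum(comb(n, i) for i in range(min(r, n) + 1))

def dark_fraction(m: int, r: int) -> float:
    """Assumed reading: share of the 2^(3m) cube outside the verified ball."""
    n = 3 * m
    return 1.0 - hamming_ball_volume(n, r) / 2 ** n

# For a fixed budget r, δ climbs toward 1 as variables are added:
for m in (1, 5, 20):
    print(m, round(dark_fraction(m, r=3), 6))
```

For r = 3, δ is 0 at m = 1 (the ball covers the whole 3-facet cube) and is
already above 0.98 by m = 5: polynomial coverage losing to exponential
volume, exactly the degradation the theorem predicts.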

Play with the calculator — drop in a CSV and watch δ move as you register
facets.

You could take every column name from every database your AI systems touch
today (no data, just names) and give your teams a checklist they can use
this week.

[image: Ron-Itelman copy (67).png]


*Why a calculator / checklist first?*

Sometimes the simplest things have the greatest impact. In healthcare and
hospitals, checklists have proven to be among the most effective tools for
reducing preventable mistakes. In Japan, railway operators maintain an
entire tradition of "pointing and calling" to minimize mistakes, with
spectacular results.

This calculator is something you can give your teams today to start
conversations and education, and that's the best place to begin getting
involved: have real conversations with people.


[image: Screenshot 2026-04-17 at 12.02.29 PM.png]


*New Committee Chair!: Semantic Automata*
I'm excited to share that Indranil Mukhopadhyay, a Principal Architect at
IBM who leads the design and build of large-scale distributed data systems
and is an IBM Quantum Ambassador, is our new *Semantic Automata Committee
Chair*.

We're going to think about how we can make automata syntax as simple as
HTML (or simpler), using math & .txt files, but scoped to specific core
libraries.

Something I'm very excited about in working with Indranil is that he is
passionate about taking the group from the W3C *Community* Group phase to
the *Working* Group phase. He shares the view that we can have a
significant impact on our communities and society, all of which are being
affected by AI.

As this adoption scales, reliable shared understanding is critical to the
economic well-being of the socio-technological environments these systems
operate in. Thank you, Indranil, and thank you to the IBM leadership team
for their endorsement!

*Getting Started: Alex Brown | Agentic AI Committee Chair | Banking*
My goal is to map out a way for us to get started with the Dark Fraction
calculator on real-world, frontier financial AI problems. It doesn't have
to be focused on banking, but grounding ourselves in issues around numbers,
time, and accuracy, as well as in what context needs to be pulled in for
the user to succeed at their task, is foundational. We should kick that off
within 6 weeks and will connect it to the other programs: knowledge,
decisions, etc.

*Colab:*
https://colab.research.google.com/drive/1iR4tzUVZz6eFo_KSAxpb1iu0mxrkXIDB?usp=sharing

If you haven't emailed me your preference for how you'd like to get
involved, please do! A few sentences or paragraphs, please. I can't read
essays from 90 people! :)

*Up next: Protocol formalization and testing*
We want to test a system for detecting and minimizing uncertainty in shared
understanding, thereby gaining "Context". This is just the start. Mapping
the next steps for minimizing the uncertainty of misunderstanding will
connect the Semantic Automata, Agentic AI, Decision, and Knowledge
Intelligence committees.

Cheers,
Ron




Received on Monday, 20 April 2026 02:41:51 UTC