- From: Adam Sobieski <adamsobieski@hotmail.com>
- Date: Tue, 28 Oct 2025 05:23:20 +0000
- To: Alan Karp <alanhkarp@gmail.com>, Daveed <daveed@bridgit.io>
- CC: Gaowei Chang <chgaowei@gmail.com>, george <george@practicalidentity.com>, public-agentprotocol <public-agentprotocol@w3.org>
- Message-ID: <DS4PPF69F41B22EF3B786D31A0E9B51531AC5FDA@DS4PPF69F41B22E.NAMP223.PROD.OUTLOOK.C>
Alan, Daveed, Gaowei, George, All,

The "space" concepts remind me of ad-hoc "rooms" that agents could enter in order to readily detect one another and interact. Also, if I understand correctly, spaces could have door-agents which check other agents as they attempt to enter, reducing the trust checks required from NxM to N+M. Starting simply: is an agent attempting to enter a space on that space's door-agent's list?

Interestingly, these appear to be two related problems: (a) the contents of spaces' .meta files, and (b) the contents of agentic user-permission requests.

Brainstorming about the natural-language and/or JSON contents of spaces' .meta files and agentic user-permission requests, these might both include:

1. User-facing content (e.g., "Your agent, A, in role R1, and party P's agent, B, in role R2, would be allowed to interact to accomplish specified described tasks, T, for a maximum duration, D").
2. Agent-facing content.
3. Other data of use to agents' and/or observers' other guardrail systems.

Ideally, 1 and 2, above, could be scanned, processed, and verified as being sufficiently similar: does the natural-language content that the end-user agrees to adequately represent any other content intended to be relayed to the AI agents upon the granting of the permission?

I'm seeing some interesting potential similarities between spaces' .meta descriptions and other agentic user-permission data, broadly, each potentially including both natural-language and JSON content.

Best regards,
Adam

________________________________
From: Alan Karp <alanhkarp@gmail.com>
Sent: Monday, October 27, 2025 10:51 PM
To: Daveed <daveed@bridgit.io>
Cc: Gaowei Chang <chgaowei@gmail.com>; george <george@practicalidentity.com>; public-agentprotocol <public-agentprotocol@w3.org>
Subject: Re: Discussion on agent discovery mechanisms — do you have any better proposals?

Matchmakers enable another desirable feature.
They can control who has permission to discover a particular agent. (It's hard to attack something if you can't find out that it exists.)

--------------
Alan Karp

On Mon, Oct 27, 2025 at 4:30 PM Daveed <daveed@bridgit.io<mailto:daveed@bridgit.io>> wrote:

Apologies for the delay! I’ve been deep in coding work, but I’m excited to pick this up again and integrate the recent developments.

@Alan Karp, @Gaowei Chang: thank you both. Alan, your point about matchmaking as a trust optimization (reducing NxM to N+M) gives this model operational teeth. Gaowei, your prompt to clarify what “space” means and to show a concrete use case is exactly right; the “space” is a web-anchored overlay (a contextual zone bound to a page, section, or concept) where aligned actors become visible to each other.

We consider this space to be a "meta-layer" above the web page that is now emerging as a place for humans and agents to interact. (You can think of this as page-specific overlays with a z-index greater than that of the content layer, activated by a browser extension or, eventually, a meta-layer-enabled browser.) The AI part of AI browsers operates in this space as a sidebar above webpages. We see the possibility for humans and agents to meet, interact, and collaborate in this space as well.

Framing: The Meta-Layer in Context

The ideas outlined here align directly with the direction of ongoing standards development. The recent IETF Internet-Draft, The Meta-Layer: A Coordination Substrate for Presence, Annotation, and Governance on the Web<https://datatracker.ietf.org/doc/draft-meta-layer-overview/> (draft-meta-layer-overview-00, October 2025), frames the Meta-Layer as an emerging coordination substrate for trust, interaction, and shared context above the Web.
This effort, together with the foundational work described in The Metaweb: The Next Level of the Internet<https://www.routledge.com/The-Metaweb-The-Next-Level-of-the-Internet/DAO/p/book/9781032125527> (Bridgit DAO, Routledge, 2023), demonstrates that the concept is moving from theory to active infrastructure design.

Within this framing, our proposal explores how AI agents - operating via MCP or similar protocols - can participate safely and meaningfully in this new layer. As the Meta-Layer infrastructure is being standardized, it is crucial to define how agents can discover one another, respect consent and governance zones, and collaborate with humans through context-aware overlays rather than traditional embedding. We welcome collaboration with groups such as the W3C Agent Protocol and Hypermedia MAS communities to develop these interoperability standards.

The Core Idea: Contextual Discovery as Just-in-Time Matchmaking

Traditional discovery models rely on either centralized registries (e.g., Agent Name Services) or decentralized announcements (e.g., ANP). Both are static in nature—they make agents findable, but not necessarily contextually relevant. The fourth model introduces context-triggered, in-situ discovery, where agents surface only when their interaction profiles align with a live context.

This model reframes discovery as ambient matchmaking rather than search. Instead of looking through a directory, an agent “notices” who or what else is co-present in the same semantic space and determines whether to interact based on shared focus, intent, and consent.

A Use Case: Contextual Matchmaking on a Webpage

Imagine a government report webpage with a section on Carbon Credit Mechanisms. This section hosts a meta-domain overlay—for example, gov/climate/carbon-credits.meta—that defines a contextual space for deliberation.
* A policy analyst agent (with an MCP profile indicating domain expertise in climate_policy.modeling) accesses the page through a simulated browser session.
* A regulatory DAO agent responsible for policy compliance does the same.
* A crypto trading agent, while technically discoverable through registries, lacks scope alignment.

The overlay system evaluates each arriving actor’s MCP profile—checking for role, topic alignment, and consent posture—through the Meta-Layer infrastructure, not HTML embedding. The first two agents are mutually discoverable within this overlay, while the third remains invisible. This transforms visibility from a universal property into a contextual privilege.

In practice, this means discovery and trust are coupled. Discovery surfaces co-presence, while trust governs interaction through filters like credentials, intent signatures, or shared governance policies.

Matchmaking and Trust: Reducing the NxM Problem

Alan’s framing captures this perfectly: traditional discovery implies an NxM trust problem—every actor must trust or evaluate every other. Matchmaking reduces that to N+M by introducing intermediary trust brokers. In the Meta-Layer model, semantic overlays themselves become those brokers.

Each overlay (or meta-domain) operates as a civic matchmaker, enforcing relevance and reciprocity through:

* Profile evaluation (via MCP extensions like Semantic Anchors, Consent Zones, and Role-Aware Filters)
* Governance logic (encoded by communities or domains)
* Scoped visibility (only aligned actors are discoverable)

This creates a trust topology where discovery is earned, not assumed. Overlays act as contextual trust filters, mediating both presence and interaction rights.

The Fourth Model in Context

Model                      | Discovery Mechanism                   | Example                      | Limitation
1. Central Registry        | Directory query                       | ANS, DNS-based lookup        | Centralized, static
2. Decentralized Broadcast | P2P announcements                     | ANP, DID networks            | No context filtering
3. Hybrid                  | Registry + local .well-known pointers |                              | Still global, not situational
4. Contextual Matchmaking  | Presence-triggered via overlays       | Meta-domains + MCP alignment | New, but scales with web context

The fourth model complements the others: registries and networks make agents findable, while the Meta-Layer makes them contextually relevant. Importantly, agents are not embedded directly within HTML. Instead, they use simulated browser environments or API access to interact with web infrastructure—just as a human browser would—allowing them to operate safely and independently from the front-end overlay.

How This Works Technically (Browser Overlay and Contextual Infrastructure)

Activation and scope

* A browser extension or site integration detects eligible pages using URL patterns, semantic cues, or /.well-known/overlays hints.
* Scope can be defined by page, section (e.g., section[id="carbon-markets"]), or concept (via tags or annotations). Each scope maps to a meta-domain (e.g., gov/climate/carbon-credits.meta).

Infrastructure and communication

* Overlays create a rendering layer for humans (not for agents). Agents interface with the same contextual data via APIs, MCP servers, or simulated browser sessions.
* The overlay provides signals of active meta-domains, enabling agents to align through backend services rather than direct DOM presence.
* Matching occurs in the Meta-Layer via profile alignment between agents and overlay-defined parameters: Semantic Anchors, Presence Thresholds, Consent Zones, Role-Aware Filters, and Intent/Preference profiles.

Event and policy flow

1. Page or context loads → the overlay or API endpoint detects meta-domain scope.
2. The system fetches MCP profiles (actor and location) via decentralized IDs and signed endpoints.
3. The Meta-Layer evaluates alignment across role, context, and consent policies.
4. If aligned, visibility and coordination become possible within that context.
5. All interactions are policy-checked and logged through governance protocols.

Security and privacy

* CSP-safe communication; strict allow-lists for data access.
* No DOM embedding or scraping: overlays read only permitted structural data.
* Agents rely on authenticated Meta-Layer APIs or simulated browser sessions to establish presence and interaction.

Interoperability

* Compatible with registry-based discovery (ANS/DNS), decentralized discovery (ANP/DIDs), and .well-known methods. The overlay filters and mediates based on shared context and policies.

Looking Ahead

Combining these approaches could create a layered discovery fabric:

1. Global lookup establishes baseline discoverability.
2. Local context overlays mediate actual co-presence and trust.
3. MCP-based matchmaking ensures agents interact only when their purposes, roles, and consent align.

This hybrid model could evolve into a practical framework for trust-aware discovery and interaction, aligning naturally with ongoing MCP evolution and the goals of the Hypermedia Multi-Agent Systems community.

Let me know if this makes sense.

Warmly,

Daveed Benjamin
Founder
Bridgit.io<http://bridgit.io/>
daveed@bridgit.io
daveed@nos.social
+1 (510) 326-2803 (WhatsApp)
+1 (510) 373-3244 (Voicemail)
Book meeting<https://daveed-bridgit.zohobookings.com/#/customer/shiftshapr>
The Metaweb - The Next Level of the Internet<https://bridgit.io/metaweb-book> was published by Taylor & Francis in late November 2023.

---- On Mon, 13 Oct 2025 09:26:55 -0700 Alan Karp <alanhkarp@gmail.com<mailto:alanhkarp@gmail.com>> wrote ---

(Sorry for the late reply. You ended up in my spam folder.)

> This allows for both pull-based discovery (someone finds an agent) and push-based ambient matchmaking (agents surface in context when relevant). I see a lot of potential synergy between the two.

I think "matchmaking" is an important part of the model.
One solution to the trust problem is matchmakers that advertisers and intent casters have trust relationships with. That turns an NxM problem into an N+M problem.

--------------
Alan Karp

On Wed, Oct 8, 2025 at 9:02 AM Daveed <daveed@bridgit.io<mailto:daveed@bridgit.io>> wrote:

Thanks, George and Alan - both of your points sharpen this nicely.

Yes, I do think of the “fourth model” as a kind of just-in-time discovery, but one that happens within context, rather than only through a global lookup. It’s less like searching a directory, and more like noticing who (or what) is co-present in the same room, with the option to interact based on relevance and consent.

To George’s question: I see trust as orthogonal to discovery, but adjacent. Discovery might surface the fact of presence, while trust governs the terms of interaction. For example, a client might notice an MCP agent is “here” via overlay or signal, but any meaningful engagement could still require mutual proof, policy negotiation, or scoped permissions. These could be brokered by the MCP protocol itself or governed via community-defined rules (like trust tags or consent stacks).

And Alan, I really appreciate your point about intent publishing. I see “in-place presence” as one substrate for intent-aware matchmaking. If agents can both express capabilities and be visible in relevant contexts, we can move toward shared presence-based coordination, where agents and humans encounter each other through overlapping focus and published goals, not just search queries.

This allows for both pull-based discovery (someone finds an agent) and push-based ambient matchmaking (agents surface in context when relevant). I see a lot of potential synergy between the two.
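[Editor's note: the NxM-to-N+M reduction discussed in this thread (Alan's matchmakers, Adam's door-agent list) can be sketched in a few lines. All names below (Matchmaker, Agent, admit, may_interact) are illustrative only and not part of any protocol under discussion.]

```python
class Agent:
    """A minimal stand-in for an advertiser or intent caster."""
    def __init__(self, name: str) -> None:
        self.name = name


class Matchmaker:
    """A broker each party establishes ONE trust relationship with.

    Without a broker, N advertisers and M intent casters need N*M
    pairwise trust evaluations. With a broker, only N+M checks are
    needed: each party is vetted once, against the broker's list
    (the "door-agent's list"), and never evaluates peers directly.
    """
    def __init__(self) -> None:
        self.allowed: set[str] = set()

    def admit(self, agent: Agent) -> None:
        # One trust check per agent, performed once at the "door".
        self.allowed.add(agent.name)

    def may_interact(self, a: Agent, b: Agent) -> bool:
        # Interaction requires only that both are on the list.
        return a.name in self.allowed and b.name in self.allowed


room = Matchmaker()
alice, bob, mallory = Agent("alice"), Agent("bob"), Agent("mallory")
room.admit(alice)
room.admit(bob)

print(room.may_interact(alice, bob))      # True: both admitted
print(room.may_interact(alice, mallory))  # False: mallory is not on the list
```

The point of the sketch is only the counting argument: trust checks happen in `admit` (once per agent), not in `may_interact` (once per pair).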
Daveed Benjamin
Founder
Bridgit.io<http://bridgit.io/>

---- On Wed, 08 Oct 2025 08:53:10 -0700 Alan Karp <alanhkarp@gmail.com<mailto:alanhkarp@gmail.com>> wrote ---

In addition to agents publishing the things they can do, users and agents can publish their intents, the things they want done. It then becomes a matter of matchmaking, which is somewhat different from discovery.

--------------
Alan Karp

On Wed, Oct 8, 2025 at 8:34 AM <george@practicalidentity.com<mailto:george@practicalidentity.com>> wrote:

Is it fair to consider this “fourth” model a “just in time” discovery kind of mechanism? If so, how is the trust established between the client and the MCP Server? Or is “trust” considered orthogonal to the discovery aspect?

George Fletcher
Identity Standards Architect
Practical Identity LLC

On Oct 8, 2025, at 10:22 AM, Daveed <daveed@bridgit.io<mailto:daveed@bridgit.io>> wrote:

Gaowei Chang,

As a fourth approach, I’d like to propose supporting agent discovery through contextual presence—allowing people and agents to become visible to one another directly in relation to the same web content or interaction space. Instead of requiring centralized registration or domain-level declarations, this model enables agents to “show up” where they are active or relevant, such as on a specific page, app, or dataset. It gives both humans and agents the ability to discover one another in place, based on shared focus or attention. Presence could be ambient, filtered, and consent-based—supporting real-time encounters, asynchronous trails, or mission-driven proximity.
This could be a powerful complement to registries: a way to meet the right agent at the right time, exactly where and when it matters.

Daveed Benjamin
Founder
Bridgit.io<http://bridgit.io/>

---- On Thu, 25 Sep 2025 19:10:57 -0700 Gaowei Chang <chgaowei@gmail.com<mailto:chgaowei@gmail.com>> wrote ---

Dear all,

I originally wanted to discuss the issue of agent discovery at our last meeting, but we ran out of time. Let’s continue the discussion here by email. I have outlined three main approaches and would like to hear your thoughts:

1. Based on RFC 8615 (.well-known path)

Place a standardized file under the domain’s /.well-known/ path to declare the agents available under that domain.

* Pros: Mature standard, easy to deploy, compatible with DNS/TLS, decentralized.
* Cons: Limited to existing domains, lacks global indexing, less friendly for individual users without domains.

2. Global Registration Center

Establish a centralized registry for agents, such as an MCP Registry or an Agent Name Service (ANS).

* Pros: Strong discoverability, good user experience, standardized naming and classification, easier governance.
* Cons: Higher centralization risks, requires governance and maintenance, may introduce entry barriers, scalability challenges.

3. Blockchain-like Decentralized Approach

Use decentralized infrastructures such as blockchain, DHT, IPFS, or ENS to store and discover agent information.

* Pros: Decentralized, censorship-resistant, data integrity, global discoverability, can integrate with DID/VC systems.
* Cons: Complex to implement, performance and cost issues, ecosystem still immature.

Which approach do you prefer?

Best regards,
Gaowei Chang
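[Editor's note: Gaowei's first approach (RFC 8615) can be made concrete with a small sketch. RFC 8615 defines the /.well-known/ path mechanism, but no well-known URI or file format for agent discovery is standardized; the path "/.well-known/agents.json" and every field name below are hypothetical, chosen only to illustrate the idea.]

```python
import json

# Hypothetical declaration a domain owner might serve at
# https://example.org/.well-known/agents.json (path and schema are
# illustrative, not a registered well-known URI).
declaration = json.loads("""
{
  "agents": [
    {
      "name": "policy-analyst",
      "endpoint": "https://example.org/agents/policy-analyst",
      "protocols": ["mcp"],
      "capabilities": ["climate_policy.modeling"]
    },
    {
      "name": "crypto-trader",
      "endpoint": "https://example.org/agents/crypto-trader",
      "protocols": ["mcp"],
      "capabilities": ["markets.trading"]
    }
  ]
}
""")

# A client would fetch the file over HTTPS and filter by capability;
# here we just filter the parsed document.
wanted = "climate_policy.modeling"
matches = [a["name"] for a in declaration["agents"]
           if wanted in a["capabilities"]]
print(matches)  # ['policy-analyst']
```

Note that this illustrates the pros and cons Gaowei lists: deployment is just serving a static file under an existing domain, but discovery stays scoped to domains the client already knows to query, with no global index and no context filtering.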
Received on Tuesday, 28 October 2025 05:23:27 UTC