Re: Semantic Hypermedia Addressing

Adam, Kingsley

Regarding "embeddings", a widely used technique used in NLP / LLMs for
calculating similarity measures, can somebody tell me if a "semantic
embedding" technique is feasible and useful as described in section: "Goal
5: Numerical Inference" of
https://github.com/sebxama/sebxama/raw/refs/heads/main/APPI.pdf?

These embeddings are meant not just for similarity measures but for
traversing contextual relationships and performing inference. Perhaps an
MCP-enabled server could leverage this kind of knowledge encoding, easing
the task of an agent building or exploiting the semantic hypermedia
addressing graph.
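
To make the question concrete, here is a minimal sketch, assuming a
TransE-style encoding in which a relation is a translation vector, so that
"traversal" becomes vector arithmetic plus nearest-neighbor lookup. All
entities, relations, and vectors below are hypothetical placeholders, not
anything taken from the paper:

# Sketch of embedding-based relationship traversal (TransE-style):
# a relation is modeled as a translation, so tail ~ head + relation.
import numpy as np

entities = {
    "Document_A": np.array([0.10, 0.30]),
    "Author_X":   np.array([0.90, 0.75]),
    "Topic_T":    np.array([0.35, 0.60]),
}
relations = {
    "hasAuthor": np.array([0.80, 0.45]),
    "hasTopic":  np.array([0.25, 0.30]),
}

def traverse(head, relation):
    """Translate the head entity by the relation vector, then return
    the nearest entity: contextual traversal as numerical inference."""
    target = entities[head] + relations[relation]
    return min(
        (name for name in entities if name != head),
        key=lambda name: np.linalg.norm(entities[name] - target),
    )

print(traverse("Document_A", "hasAuthor"))  # -> Author_X
print(traverse("Document_A", "hasTopic"))   # -> Topic_T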

Regards,
Sebastián.


On Sun, Oct 12, 2025, 7:20 PM Kingsley Idehen <kidehen@openlinksw.com>
wrote:

> Hi Adam,
> On 10/12/25 3:00 PM, Adam Sobieski wrote:
>
> Sebastián Samaruga,
> All,
>
> Hello. Being able to reference hypermedia resources within webpages,
> a.k.a. "semantic hypermedia addressing", would be useful and would enable
> approaches for addressing "deepfakes" and related challenges.
>
> With (decentralized) annotation capabilities, e.g., via typed hyperlinks
> on annotators' websites or social-media posts, people and organizations
> could annotate specific hypermedia resources as being "deepfakes" or,
> instead, as being "vetted" or "blessed". For these scenarios, there may be
> more types of annotation links than the two Boolean ratings, thumbs-up and
> thumbs-down. These kinds of annotations could also be accompanied by
> justification or argumentation.
>
> In addition to performing logical inferencing and reasoning upon
> decentralized and, importantly, paraconsistent collections of such
> annotation links, there is the matter of computing floating-point numerical
> attributes for annotated multimedia resources. That is, from a set of
> annotations by annotators who each have annotation histories, and who may
> disagree with one another, calculate a floating-point number between 0.0
> and 1.0 for the probability that an annotated multimedia resource is, for
> example, a "deepfake".
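>
> Here is a minimal sketch of one such computation, with hypothetical
> annotators and reliability scores (which could be derived from annotation
> histories); this is one possible aggregation, not a prescribed method:
>
> # Sketch: aggregate disagreeing annotators' votes into a probability.
> # Annotator names and reliability scores are hypothetical.
>
> # (annotator, vote) pairs: True = "deepfake", False = not a deepfake.
> annotations = [("alice", True), ("bob", True), ("carol", False)]
>
> # Reliability in [0.0, 1.0], e.g., from each annotation history.
> reliability = {"alice": 0.9, "bob": 0.6, "carol": 0.8}
>
> def deepfake_probability(annotations, reliability):
>     """Weighted vote: reliable annotators move the estimate more."""
>     total = sum(reliability[a] for a, _ in annotations)
>     if total == 0.0:
>         return 0.5  # no evidence either way
>     weighted = sum(reliability[a] for a, vote in annotations if vote)
>     return weighted / total  # floating-point value in [0.0, 1.0]
>
> print(deepfake_probability(annotations, reliability))  # ~0.65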
>
> Here are two ideas towards delivering the capabilities to reference and to
> annotate hypermedia resources in webpages:
>
> 1) The annotating party or software tool could use selectors from the Web
> Annotation Data Model [1]; a sketch follows the examples below.
>
> 2) The content-providing party could use metadata to indicate canonical
> URIs/URLs for (multi-source) multimedia resources. This might resemble:
>
> <video canonical="https://www.socialmedia.site/media/video/12345678.mp4">
>   ...
> </video>
>
> or:
>
> <video>
>   <link rel="canonical"
>         href="https://www.socialmedia.site/media/video/12345678.mp4" />
>   ...
> </video>
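>
> For the first idea, here is a minimal sketch of a Web Annotation [1] that
> flags a time segment of a video, expressed as JSON-LD built in Python; the
> target URL, creator IRI, and body text are hypothetical placeholders:
>
> import json
>
> annotation = {
>     "@context": "http://www.w3.org/ns/anno.jsonld",
>     "type": "Annotation",
>     "motivation": "assessing",
>     "creator": "https://annotator.example/people/alice",
>     "body": {
>         "type": "TextualBody",
>         "value": "Deepfake: lip movement does not match the audio.",
>     },
>     "target": {
>         "source": "https://www.socialmedia.site/media/video/12345678.mp4",
>         "selector": {
>             "type": "FragmentSelector",
>             "conformsTo": "http://www.w3.org/TR/media-frags/",
>             "value": "t=30,60",  # seconds 30 through 60
>         },
>     },
> }
> print(json.dumps(annotation, indent=2))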
>
> Note that, while the example above uses a generic social-media website
> URL, social-media services could provide their end-users (individuals and
> organizations) with menu options on hypermedia resources for these
> purposes: to "flag" or to "bless" specific multimedia resources.
>
> Proponents of automation have expressed that, in these annotation
> scenarios, rapid responses are critical, as viral content could spread
> around the world faster than human content-checkers could create
> (decentralized) annotations. With these considerations in mind, AI agents
> and other advanced software tools could use the same content-referencing
> and content-annotation techniques under discussion.
>
> I've recently been brainstorming about approaches, including some inspired
> by the Web Annotation Data Model [1] and Pingback [2], which would involve
> the capability to send annotation event data to multiple recipients,
> destinations, and/or third-party services in addition to the
> content-providing websites.
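>
> Here is a minimal sketch of that multi-recipient delivery, assuming
> hypothetical inbox URLs and plain JSON over HTTP POST rather than
> Pingback's XML-RPC wire format:
>
> import json
> import urllib.request
>
> # Hypothetical recipients: the content provider plus third parties.
> endpoints = [
>     "https://www.socialmedia.site/annotation-inbox",
>     "https://factcheck.example/inbox",
>     "https://archive.example/inbox",
> ]
>
> def notify_all(annotation_event):
>     """POST the same annotation event to every registered recipient."""
>     payload = json.dumps(annotation_event).encode("utf-8")
>     for url in endpoints:
>         request = urllib.request.Request(
>             url,
>             data=payload,
>             headers={"Content-Type": "application/ld+json"},
>             method="POST",
>         )
>         with urllib.request.urlopen(request) as response:
>             print(url, response.status)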
>
>
> Best regards,
> Adam Sobieski
>
> P.S.: Also of interest are capabilities for end-users and/or AI agents to
> annotate annotation statements; we might call this "annotation-*" or
> "annotation-star". These concepts seem to have been broached in your second
> paragraph with "reifying links"?
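>
> A minimal sketch, with hypothetical IRIs: the target of the
> meta-annotation is simply the IRI of a prior annotation rather than of a
> media resource:
>
> # Sketch: an annotation whose target is itself an annotation.
> meta_annotation = {
>     "@context": "http://www.w3.org/ns/anno.jsonld",
>     "type": "Annotation",
>     "motivation": "assessing",
>     "body": {"type": "TextualBody",
>              "value": "I disagree; those artifacts are compression noise."},
>     # The target is a prior annotation's IRI, not a media resource.
>     "target": "https://annotator.example/anno/98765",
> }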
>
> [1] https://www.w3.org/TR/annotation-model/
>
> [2] https://hixie.ch/specs/pingback/pingback
>
>
> These capabilities are all achievable now. You can build AI-based Agents
> that perform such reasoning and inference-driven tasks.
>
> I’ll leave you (and any other interested party) with a simple demo of an
> AI Agent — one that leverages LLMs for natural language processing and is
> loosely coupled with a knowledge graph. This design avoids subtle issues
> like an LLM (e.g., Google Gemini) insisting on using terms from
> https://schema.org even when it was explicitly instructed to use
> http://schema.org when generating knowledge graphs.
>
> Demo Links:
>
> [1]
> https://linkeddata.uriburner.com/chat/?chat_id=s-YjsaUP3Ur6gHbWn9Me4wDzsWvg8vbQyP5auMteK359k#asi-46736
> — static page you can scroll through to see how it answered the question.
>
> [2]
> https://linkeddata.uriburner.com/chat/?chat_id=s-YjsaUP3Ur6gHbWn9Me4wDzsWvg8vbQyP5auMteK359k&t=120
> — animated view if you just want to sit back and watch.
>
> The Agent used here was created using natural language via an Agents.md
> Markdown document that defines its planning logic and service/tool
> bindings. Tooling in this case includes Virtuoso Stored Procedures,
> external OpenAPI services, and MCP (Model Context Protocol) servers.
>
> The Agent itself is usable from any client environment that supports MCP
> or OpenAPI—all loosely coupled.
>
> Related
>
> [1] Github Repo -- https://github.com/OpenLinkSoftware/Assistants
>
> [2] Agents.md example (note how reasoning and inference are integrated) --
> https://github.com/OpenLinkSoftware/Assistants/blob/main/basic-agent-in-agents-dot-md-form-template.md
>
> --
> Regards,
>
> Kingsley Idehen 
> Founder & CEO
> OpenLink Software
> Home Page: http://www.openlinksw.com
> Community Support: https://community.openlinksw.com
>
> Social Media:
> LinkedIn: http://www.linkedin.com/in/kidehen
> Twitter : https://twitter.com/kidehen
>
>

Received on Sunday, 12 October 2025 22:52:39 UTC