Re: Semantic Hypermedia Addressing

Adam,

Regarding your numerical inference approach, please take a look at "Goal 5:
Numerical Inference" in the attached document (assigning prime-number
identifiers to URIs and their SPO occurrences for performing embeddings and
inference). Note: the attachment is an early StratML draft for another
project. That part ("Goal 5") was inspired by a Gemini chat about FCA
lattices and prime-number identifiers
(https://jfsowa.com/logic/math.htm#Lattice). I haven't had time to implement
or validate it, but it seems promising.
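
To make the idea concrete, here is a rough sketch of what "Goal 5" could
look like: prime identifiers for URIs and a Gödel-style encoding of SPO
occurrences (untested illustration; the URIs and the exponent scheme are
assumptions, not what the attachment specifies):

```python
# Sketch: assign prime identifiers to URIs and encode each SPO
# occurrence as a product of primes (Goedel-style), so a triple's
# components can be recovered by factorization. Illustrative only.

def primes():
    """Yield 2, 3, 5, ... by trial division."""
    found = []
    n = 2
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def assign_primes(uris):
    """Map each URI to a unique prime identifier."""
    gen = primes()
    return {uri: next(gen) for uri in uris}

def encode_triple(ids, s, p, o):
    """Encode an SPO occurrence as one composite number; the role is
    carried by the exponent: subject^1 * predicate^2 * object^3."""
    return ids[s] * ids[p] ** 2 * ids[o] ** 3

def decode_triple(ids, code):
    """Recover (s, p, o) from the composite by reading off exponents."""
    by_role = {}
    for uri, prime in ids.items():
        exp = 0
        while code % prime == 0:
            code //= prime
            exp += 1
        if exp:
            by_role[exp] = uri
    return by_role[1], by_role[2], by_role[3]

ids = assign_primes(["ex:Alice", "ex:knows", "ex:Bob"])
code = encode_triple(ids, "ex:Alice", "ex:knows", "ex:Bob")
```

One known limitation: if the same URI plays two roles in a triple, the
exponents add up and the roles become ambiguous, so a real encoding would
need role-distinct identifiers.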

The task of building the Semantic Hypermedia Addressing knowledge network
of resources should be automated as a self-supervised learning task for the
LLM. Having to annotate and link content (hypermedia) by hand seems to me
to be one of the adoption barriers for the Semantic Web. This meta-task of
the AI should be leveraged further: the resulting content should be exposed
and browsable (Linked Data / APIs), but also fed back into the LLM through
fine-tuning.

For the decentralization part of the problem, I believe that W3C DIDs
(Decentralized Identifiers) should be taken into account.

And, yes, addresses, annotations and links should also be resources.

Regards,
Sebastián.


On Sun, Oct 12, 2025, 4:00 PM Adam Sobieski <adamsobieski@hotmail.com>
wrote:

> Sebastian Samaruga,
> All,
>
> Hello. Being able to reference hypermedia resources within webpages,
> a.k.a. "semantic hypermedia addressing", would be useful and would enable
> some approaches to addressing "deepfakes" and related challenges.
>
> With (decentralized) annotation capabilities, e.g., via typed hyperlinks
> on annotators' websites or social-media posts, people and organizations
> could annotate specific hypermedia resources as being "deepfakes" or,
> instead, as being "vetted" or "blessed". There may be, for these scenarios,
> more types of annotation links than two Boolean ratings, thumbs-up and
> thumbs-down. Also, these kinds of annotations could be accompanied by
> justification or argumentation.
>
> In addition to performing logical inferencing and reasoning upon
> decentralized and, importantly, paraconsistent collections of such
> annotation links, there is the matter of computing floating-point numerical
> attributes for annotated multimedia resources. That is, from a set of
> annotations from a set of annotators who each have annotation histories,
> these annotators potentially disagreeing with one another, calculate a
> floating-point number between 0.0 and 1.0 for the probability that an
> annotated multimedia resource is, for example, a "deepfake".
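
One simple way to turn a set of disagreeing annotators, each with an
annotation history, into a single probability is reliability-weighted
voting. A minimal sketch (the data shapes and the smoothing choice are my
assumptions, not a vetted trust model):

```python
# Sketch: reliability-weighted vote over "deepfake" annotations.
# Each annotator's weight is a Laplace-smoothed estimate of their
# historical accuracy; the result is a probability in [0.0, 1.0].

def annotator_weight(history):
    """history: list of booleans (were past annotations judged correct?)."""
    return (sum(history) + 1) / (len(history) + 2)

def deepfake_probability(annotations):
    """annotations: list of (says_deepfake: bool, history: list[bool])."""
    if not annotations:
        return 0.5  # no evidence: maximally uncertain
    weights = [annotator_weight(h) for _, h in annotations]
    votes = sum(w for (says, _), w in zip(annotations, weights) if says)
    return votes / sum(weights)

p = deepfake_probability([
    (True,  [True] * 9 + [False]),  # reliable annotator: "deepfake"
    (False, [True, False, False]),  # less reliable annotator disagrees
])
```

A production version would also have to model how the annotation histories
are themselves established, which is where the paraconsistency concerns
come back in.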
>
> Here are two ideas towards delivering the capabilities to reference and to
> annotate hypermedia resources in webpages:
>
> 1) The annotating party or software tool could use selectors from the Web
> Annotation Data Model [1].
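
For concreteness, an annotation flagging a time range of a video via a
FragmentSelector from [1] might look like this (a hand-written sketch; the
target URI is a placeholder):

```python
import json

# Sketch: a W3C Web Annotation that flags seconds 30-60 of a video
# as a suspected deepfake. The target URI is a placeholder.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "motivation": "assessing",
    "body": {
        "type": "TextualBody",
        "value": "Suspected deepfake",
        "purpose": "assessing",
    },
    "target": {
        "source": "https://www.socialmedia.site/media/video/12345678.mp4",
        "selector": {
            "type": "FragmentSelector",
            "conformsTo": "http://www.w3.org/TR/media-frags/",
            "value": "t=30,60",  # media fragment: seconds 30 to 60
        },
    },
}
payload = json.dumps(annotation)
```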
>
> 2) The content-providing party could use metadata to indicate a canonical
> URI/URL for a (multi-source) multimedia resource. This might resemble:
>
> <video canonical="https://www.socialmedia.site/media/video/12345678.mp4">
>   ...
> </video>
>
> or:
>
> <video>
>   <link rel="canonical" href="
> https://www.socialmedia.site/media/video/12345678.mp4" />
>   ...
> </video>
>
> Note that, while the example above uses a generic social-media website
> URL, social-media services could provide their end-users — individuals and
> organizations — with menu options on hypermedia resources for these
> purposes: to "flag" or to "bless" specific multimedia resources.
>
> Proponents of automation in these regards have noted that rapid responses
> are critical for these annotation scenarios, as viral content can spread
> around the world faster than human content-checkers are able to create
> (decentralized) annotations. With these considerations in mind, AI agents
> and other advanced software tools could use the same content-referencing
> and content-annotation techniques under discussion.
>
> I've recently been brainstorming about approaches, including some inspired
> by the Web Annotation Data Model [1] and Pingback [2], which would involve
> the capability to send annotation event data to multiple recipients,
> destinations, and/or third-party services in addition to the
> content-providing websites.
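
A Pingback-inspired fan-out could be as simple as delivering one annotation
event to every registered recipient. A dry-run sketch (endpoint discovery,
authentication, and retries are deliberately left open; the URLs are made
up):

```python
import json

# Sketch: fan one annotation event out to multiple recipients. The
# "send" hook is optional so the example stays offline; a real
# dispatcher would POST the JSON body to each endpoint over HTTP.
def fan_out(event, endpoints, send=None):
    body = json.dumps(event)
    deliveries = [(endpoint, body) for endpoint in endpoints]
    if send is not None:
        for endpoint, payload in deliveries:
            send(endpoint, payload)  # e.g. HTTP POST, application/json
    return deliveries

deliveries = fan_out(
    {"type": "AnnotationCreated", "target": "https://example.org/v.mp4"},
    ["https://hub.example/inbox", "https://factcheck.example/inbox"],
)
```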
>
>
> Best regards,
> Adam Sobieski
>
> P.S.: Also of interest are capabilities for end-users and/or AI agents to
> annotate annotation statements; we might call this "annotation-*" or
> "annotation-star". These concepts seem to have been broached in your second
> paragraph with "reifying links"?
>
> [1] https://www.w3.org/TR/annotation-model/
>
> [2] https://hixie.ch/specs/pingback/pingback
>
> ------------------------------
> *From:* Sebastian Samaruga <ssamarug@gmail.com>
> *Sent:* Sunday, October 12, 2025 1:27 PM
> *To:* Kingsley Idehen <kidehen@openlinksw.com>
> *Cc:* W3C Semantic Web IG <semantic-web@w3.org>; W3C AIKR CG <
> public-aikr@w3.org>; public-lod <public-lod@w3.org>
> *Subject:* Re: Semantic Hypermedia Addressing
>
> Great! Seems like I'm heading in the right direction, then. LLMs could do
> that, and a bunch of other amazing stuff, via the "massive brute force"
> approach that makes them seem "intelligent".
>
> However, what if we ease things for machines a little? Reify addresses and
> links as resources in their own right, contextually annotatable,
> addressable, and linkable, with HTTP / REST means of interaction for their
> browsing and (link) discovery, having developed a schema on which to render
> the representations of those resources. That's a task at which LLMs could
> excel: a kind of "meta" AI task; call it "semantic indexing".
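
As a sketch of what reifying a link as a resource could mean in practice
(the URI scheme and field names here are my assumptions, nothing standard):

```python
import hashlib
from dataclasses import dataclass, field

# Sketch: a link reified as a first-class, addressable resource.
# Because the link has its own URI, it can itself be annotated,
# addressed, and linked.
@dataclass
class LinkResource:
    source: str        # URI of the linking resource
    target: str        # URI of the linked resource
    rel: str           # relationship type (the link's "predicate")
    context: str = ""  # occurrence context (axis / dimension)
    annotations: list = field(default_factory=list)

    @property
    def uri(self):
        """Deterministic address for the link itself."""
        key = f"{self.source}|{self.rel}|{self.target}|{self.context}"
        return "urn:link:" + hashlib.sha256(key.encode()).hexdigest()[:16]

link = LinkResource("ex:doc1", "ex:doc2", "ex:cites", context="bibliography")
link.annotations.append({"by": "ex:alice", "note": "verified citation"})
```

A deterministic URI means independent parties that observe the same link
mint the same address for it, which helps discovery.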
>
> Having this "Semantic Hypermedia Addressing" knowledge layer rendered, as
> RDF resources for example, it could be consumed further by LLM agents,
> given a well-defined RAG or MCP tools interface, leveraging the augmented
> knowledge layer from the previous step. That is, if you're stuck with AI
> and LLM "middleware" (I think that's a better term than "browser" or
> "client"). Nothing prevents this knowledge layer from being used as a
> service in its own right, with the appropriate APIs.
>
> The rest, use cases and applications, boils down to whatever is
> imaginable. Each tool bearer ("hammer") will use it to solve every problem
> ("nail"). Think of "what applications can be done with graph databases":
> nearly every tool (programming language) can be used to solve any problem,
> or a part of it (a layer).
>
> The question is choosing the right tool for the right layer of the
> problem. At the networking level, OSI defines seven layers: Application
> (Protocol), Presentation, Session, Transport, Network, Data Link, and
> Physical. That clean separation allowed us to have browsers, email
> clients, and the internet we know today. The MVC pattern, and the Semantic
> Web itself, have layered layouts. Only once we know the right layers can
> we come up with the right tools (that's why I said "middleware").
>
> Note: I'm not discovering anything new. I'm inspired by:
>
> ISO/HyTime (ISO/IEC 10744),
> ISO/TopicMaps (ISO/IEC 13250),
> ISO 15926
>
> Regards,
> Sebastián.
>
>
> On Sun, Oct 12, 2025, 12:49 PM Kingsley Idehen <kidehen@openlinksw.com>
> wrote:
>
>
> On 10/11/25 10:53 AM, Sebastian Samaruga wrote:
> > Another App for LLMs, REST and RDF.
> >
> > Semantic Hypermedia Addressing (SHA):
> >
> > Given Hypermedia Resources Content Types (REST):
> >
> > . Text
> > . Images
> > . Audio
> > . Video
> > . Tabular
> > . Hierarchical
> > . Graph
> > (Am I missing something?)
> >
> > Imagine the possibility of not only annotating resources of those
> > types with metadata and links (in the appropriate axes and occurrence
> > contexts), but also having those annotations and links generated by
> > inference and activation, with that metadata and those links in turn
> > meaningfully annotated with their meaning, given their occurrence
> > context in any given axis or relationship role (dimension).
> >
> > RESTful principles could apply rendering annotations and links as
> > resources also, with their annotations and links, making them
> > discoverable and browsable / query-able. Naming conventions for
> > standard addressable resources could make browsing and returning
> > results (for a query or prompt, for example) a machine-understandable
> > task.
> >
> > Also, the task of constructing resources that hyperlink or embed
> > other resources in a content context (a report or dashboard, for
> > example), or of building the frontend for a given resource-driven
> > (REST) set of resource-context interactions, becomes a matter of
> > discovering the right resources and link resources.
> >
> > Given the appropriate resources, link resources, and addressing,
> > encoding a prompt / query for a link in a given context (maybe
> > embedded within the prompt / query) would be a matter of resource
> > interaction, with the capabilities of what can be prompted / queried
> > for made available to the client for further exploration.
> >
> > Generated resources, in their corresponding Content Types, should
> > also address, and be further addressable in and by, other resources,
> > enabling incremental knowledge composition by preserving generated
> > assets in a history of resource-interaction contexts.
> >
> > Example:
> >
> > "Given this book, make an index of all the occurrences of this
> > character, and also provide links to the moments of those occurrences
> > in the book's picture. Tell me which actor played that character
> > role".
> >
> > Best regards,
> > Sebastián.
>
>
> Hi Sebastian,
>
> "Imagine the possibility of not only annotating resources of those types
> with metadata and links (in the appropriate axes and occurrence contexts),
> but also having those annotations and links generated by inference and
> activation, with that metadata and those links in turn meaningfully
> annotated with their meaning, given their occurrence context in any given
> axis or relationship role (dimension)."
>
> LLM-based AI Agents loosely coupled with RDF-based Knowledge Graphs
> already do that. 🙂
>
> In the latest edition of my LinkedIn newsletter [1], I dropped a post
> that explores exactly this in action. It features a demo of a personal
> assistant loosely coupled with my personal profile document—capable of
> answering questions using Knowledge Graphs automatically constructed
> from my notes.
>
> In essence, I’ve built a workflow that starts with documents that
> capture my interest and culminates in SPARQL inserts into a live
> Virtuoso instance containing a collection of note-derived Knowledge Graphs.
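
The last step of such a workflow, turning extracted triples into a SPARQL
update, could be as simple as this (an illustration only, not the actual
pipeline; the graph URI and triples are placeholders):

```python
# Sketch: build a SPARQL 1.1 INSERT DATA statement from extracted
# triples, suitable for sending to a SPARQL Update endpoint.
def sparql_insert(triples, graph="urn:notes:kg"):
    body = " .\n    ".join(f"{s} {p} {o}" for s, p, o in triples)
    return f"INSERT DATA {{\n  GRAPH <{graph}> {{\n    {body} .\n  }}\n}}"

update = sparql_insert([
    ("<urn:note:1>", "<http://purl.org/dc/terms/subject>", '"Agentic Web"'),
    ("<urn:note:1>", "<http://purl.org/dc/terms/title>", '"My notes"'),
])
```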
>
> Links:
>
> [1] From Web 2.0 to the Agentic Web: The Shift from Eyeballs to AI Agent
> Presence --
>
> https://www.linkedin.com/pulse/from-web-20-agentic-shift-eyeballs-ai-agent-presence-idehen-u9fne/
>
> [2] The File Create, Save, and Share Paradigm (Revisited) --
>
> https://www.linkedin.com/pulse/file-create-save-share-paradigm-revisited-kingsley-uyi-idehen-phxze
>
>
> --
> Regards,
>
> Kingsley Idehen
> Founder & CEO
> OpenLink Software
> Home Page: http://www.openlinksw.com
> Community Support: https://community.openlinksw.com
>
> Social Media:
> LinkedIn: http://www.linkedin.com/in/kidehen
> Twitter : https://twitter.com/kidehen
>
>
>

Received on Sunday, 12 October 2025 21:31:37 UTC