Re: Semantic Hypermedia Addressing

On 10/11/25 10:53 AM, Sebastian Samaruga wrote:
> Another App for LLMs, REST and RDF.
>
> Semantic Hypermedia Addressing (SHA):
>
> Given Hypermedia Resources Content Types (REST):
>
> . Text
> . Images
> . Audio
> . Video
> . Tabular
> . Hierarchical
> . Graph
> (Am I missing something?)
>
> Imagine the possibility of not only annotating resources of those types 
> with metadata and links (in the appropriate axes and occurrence 
> contexts), but having those annotations and links generated by 
> inference and activation, with that metadata and those links in turn 
> meaningfully annotated with their meaning, given their occurrence 
> context in any given axis or relationship role (dimension).
>
> RESTful principles could apply, rendering annotations and links as 
> resources as well, each with their own annotations and links, making 
> them discoverable, browsable, and queryable. Naming conventions for 
> standard addressable resources could make browsing and returning 
> results (for a query or prompt, for example) a machine-understandable 
> task.
>
> Likewise, the task of constructing resources that hyperlink or embed 
> other resources in a content context (a report or dashboard, for 
> example), or of building the frontend for a given set of 
> resource-driven (REST) resource-context interactions, becomes a 
> matter of discovering the right resources and link resources.
>
> Given the appropriate resources, link resources, and addressing, 
> encoding a prompt / query for a link in a given context (perhaps 
> embedded within the prompt / query itself) would be a matter of 
> resource interaction, with the capabilities of what can be prompted / 
> queried for made available to the client for further exploration.
>
> Generated resources, in their corresponding Content Types, should also 
> address and be addressable in and by other resources, enabling 
> incremental knowledge composition by preserving generated assets in a 
> history of resource-interaction contexts.
>
> Examples:
>
> "Given this book, make an index with all the occurrences of this 
> character and also provide links to the moments of those occurrences 
> in the book's picture. Tell me which actor represented that character 
> role".
>
> Best regards,
> Sebastián.


Hi Sebastian,

"Imagine the possibility of not only annotate resources of those types 
with metadata and links (in the appropriate axes and occurrences 
context) but having those annotations and links being generated by 
inference and activation being that metadata and links in turn 
meaningful annotated with their meaning given its occurrence context in 
any given axis or relationship role (dimension)."

LLM-based AI Agents loosely coupled with RDF-based Knowledge Graphs 
already do that. 🙂
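To make that concrete, here is a minimal, self-contained sketch in plain 
Python (no RDF library; every URI, predicate, and value below is an 
invented placeholder, with predicate names merely styled after the Web 
Annotation and PROV vocabularies) of annotations and links modeled as 
first-class addressable resources that carry their own metadata:

```python
# Minimal sketch: annotations and links as first-class, addressable resources.
# One store maps each resource URI to its (predicate, object) pairs, so an
# annotation can itself be dereferenced, annotated, and queried like any
# other resource. All URIs here are illustrative placeholders.

store = {
    # An ordinary hypermedia resource (an image).
    "http://example.org/img/42": [
        ("rdf:type", "schema:ImageObject"),
        ("schema:name", "Sunset over the bay"),
    ],
    # An annotation on that image -- itself a resource with its own URI.
    "http://example.org/annotation/7": [
        ("rdf:type", "oa:Annotation"),
        ("oa:hasTarget", "http://example.org/img/42"),
        ("oa:hasBody", "Depicts the harbor at dusk"),
        # The annotation's own metadata: which agent generated it, and in
        # which axis (dimension) the link is meaningful.
        ("prov:wasGeneratedBy", "http://example.org/agent/llm-1"),
        ("ex:axis", "spatial-context"),
    ],
}

def dereference(uri):
    """Return a resource's description, REST-style (unknown URI -> None)."""
    return store.get(uri)

def annotations_of(target_uri):
    """Discover annotations pointing at a resource by following its links."""
    return [
        uri for uri, triples in store.items()
        if ("oa:hasTarget", target_uri) in triples
    ]

found = annotations_of("http://example.org/img/42")
print(found)                  # the annotation is discoverable...
print(dereference(found[0]))  # ...and browsable like any other resource
```

The point of the sketch is the symmetry: `dereference` works identically 
on the image and on the annotation, which is what makes the annotation 
layer itself browsable and queryable.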

In the latest edition of my LinkedIn newsletter [1], I dropped a post 
that explores exactly this in action. It features a demo of a personal 
assistant loosely coupled with my personal profile document—capable of 
answering questions using Knowledge Graphs automatically constructed 
from my notes.

In essence, I’ve built a workflow that starts with documents that 
capture my interest and culminates in SPARQL inserts into a live 
Virtuoso instance containing a collection of note-derived Knowledge Graphs.
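For a flavor of what that last step can look like, here is a sketch 
(again in plain Python) of rendering extracted note facts as a SPARQL 
`INSERT DATA` statement aimed at a named graph. The graph IRI and the 
triples are invented for illustration, and literal escaping is omitted; 
a real pipeline would POST the statement to a SPARQL endpoint such as a 
Virtuoso instance's `/sparql` endpoint over HTTP:

```python
# Sketch of the final workflow step: extracted note facts rendered as a
# SPARQL INSERT DATA statement targeting a named graph. The graph IRI and
# triples are invented for illustration; literal escaping is omitted for
# brevity.

def to_sparql_insert(graph_iri, triples):
    """Render (subject, predicate, literal-object) triples as INSERT DATA."""
    body = "\n".join(f'    <{s}> <{p}> "{o}" .' for s, p, o in triples)
    return (
        "INSERT DATA {\n"
        f"  GRAPH <{graph_iri}> {{\n"
        f"{body}\n"
        "  }\n"
        "}"
    )

# Hypothetical facts extracted from a note.
notes = [
    ("http://example.org/note/1",
     "http://purl.org/dc/terms/subject", "Semantic Hypermedia"),
    ("http://example.org/note/1",
     "http://purl.org/dc/terms/creator", "kidehen"),
]

stmt = to_sparql_insert("http://example.org/graphs/notes", notes)
print(stmt)
```

Each note gets its own named graph, so the collection of note-derived 
Knowledge Graphs stays individually addressable while remaining queryable 
as a whole.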

Links:

[1] From Web 2.0 to the Agentic Web: The Shift from Eyeballs to AI Agent 
Presence -- 
https://www.linkedin.com/pulse/from-web-20-agentic-shift-eyeballs-ai-agent-presence-idehen-u9fne/

[2] The File Create, Save, and Share Paradigm (Revisited) -- 
https://www.linkedin.com/pulse/file-create-save-share-paradigm-revisited-kingsley-uyi-idehen-phxze


-- 
Regards,

Kingsley Idehen 
Founder & CEO
OpenLink Software
Home Page: http://www.openlinksw.com
Community Support: https://community.openlinksw.com

Social Media:
LinkedIn: http://www.linkedin.com/in/kidehen
Twitter : https://twitter.com/kidehen

Received on Sunday, 12 October 2025 15:49:09 UTC