Re: Scientific Models and Semantics

Hi Adam,

A name may have different meanings in different contexts. One way to handle that is to use a different named graph for each context. The Cognitive AI CG is looking at this from the perspective of “chunks”, a term borrowed from psychology and in common use in the cognitive sciences. A chunk is essentially a concept with a set of properties whose values name other chunks. Contexts are handled by adding a context property that references a chunk, which can itself reference other chunks to form context chains.
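
As a rough sketch (not taken from the CG's documents — the chunk types and property names here are invented for illustration), a chunk with a context property might look something like this in a chunks-style notation, where each chunk is a typed set of name/value pairs and values name other chunks:

```
# a chunk: type, identifier, and properties whose values name other chunks
claim c1 {
  subject sky
  colour green
  context alice-beliefs   # this "fact" holds only in Alice's belief context
}

# the context is itself a chunk, and can chain to an enclosing context
context alice-beliefs {
  parent story7           # links Alice's beliefs to the story's context
}
```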

Context chains are widely applicable, e.g. to describing stories, for which some “facts” are specific to the story whilst others are generally true. To describe the beliefs of individual people within a story, you create a fresh context for each person that links to the story context. Contexts are also applicable when you want to imagine some situation, e.g. when evaluating a plan by imagining how things would work out. The things you imagine happening are only true in that particular context.
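
To make the lookup behaviour concrete, here is a minimal Python sketch (my own illustration, not the CG's implementation) of resolving a fact along a context chain: each context holds its own facts and may link to a parent, so a person's beliefs shadow story-level facts, which in turn shadow generally true facts:

```python
# Sketch of fact lookup along a context chain. Each context holds
# local facts and an optional link to an enclosing (parent) context.

class Context:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent   # enclosing context, or None at the top
        self.facts = {}        # property -> value, local to this context

    def lookup(self, prop):
        # Walk the chain outward; local facts shadow outer ones.
        ctx = self
        while ctx is not None:
            if prop in ctx.facts:
                return ctx.facts[prop]
            ctx = ctx.parent
        return None

world = Context("world")
world.facts["sky-colour"] = "blue"           # generally true

story = Context("story", parent=world)
story.facts["hero"] = "Alice"                # true only within the story

alice = Context("alice-beliefs", parent=story)
alice.facts["sky-colour"] = "green"          # Alice's belief shadows the general fact

print(alice.lookup("sky-colour"))   # belief wins over the world fact
print(alice.lookup("hero"))         # inherited from the story context
print(story.lookup("sky-colour"))   # falls through to the world context
```

The same walk works for imagined situations: a plan gets a fresh context chained to the current one, so hypothetical outcomes never leak into generally true facts.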

P.S. The chunk serialisation format is an amalgam of RDF and Labelled Property Graphs (LPG).

See: https://www.w3.org/community/cogai/

> On 10 Feb 2020, at 04:59, Adam Sobieski <adamsobieski@hotmail.com> wrote:
> 
> Semantic Web Interest Group,
>  
> I would like to broach, for discussion, scientific models and semantics, semantics in multi-model scenarios.
>  
> There exist multiple models of atoms: the Dalton model, the Thomson model, the Lewis model, the Nagaoka model, the Rutherford model, the Bohr model, the Bohr–Sommerfeld model, the Gryziński model, the Schrödinger model, and the Dirac-Gordon model.
>  
> It seems that scientific models can contain components which are symbols. It seems that language symbols, e.g. “electron” can be related to these model component symbols. It seems that these model component symbols can be related to abstract concepts, e.g. the electron. Perhaps, while the aforementioned models attempt to describe the same things, the overarching, more abstract, set of concepts which includes those described things, the proton, the neutron and the electron, is itself, an abstract model.
>  
> We can visualize a diagram, a graph, with a lexical symbol, “electron”, on the left side, which is related, by arrows pointing to the right, to a number of model component symbols (Dalton_electron, Thomson_electron, …). Each model component symbol is related to its containing model (Dalton_model, Thomson_model, …). Then, as the set of models under discussion attempt to describe the same things, each model component symbol can be related to the same abstract concept, e.g. the electron (abstract_electron), on the right side of the graph, which can be from an abstract model (abstract_model). Furthermore, each model can be related to that abstract model. The lexeme “electron”, from the left side of the visualized diagram, can also be related to the abstract concept, the electron, from the right side of the diagram, as it is another possible sense of the meaning of the lexeme.
>  
> As an ideal natural language parser processed and interpreted the contents of a physics textbook, it might find that the lexeme “electron” meant different things in different chapters as the textbook’s authors brought the audience on a journey through a number of models. The matter might become more pronounced as an ideal natural language parser or interpreter processed a set of physics textbooks, from kindergarten through university graduate level physics, and attempted to merge the contents together into a knowledgebase.
>  
> I wonder what others in this mailing list might think about these topics (models and semantics, semantics in multi-model scenarios) and whether there might be any publications on these topics to recommend?
>  
>  
> Best regards,
> Adam Sobieski

Dave Raggett <dsr@w3.org> http://www.w3.org/People/Raggett
W3C Data Activity Lead & W3C champion for the Web of things 

Received on Monday, 10 February 2020 09:44:03 UTC