Re: Machine-Readable Records

Aristotle’s modes of persuasion can be summarised as ethos (establishing credibility), pathos (appealing to emotions) and logos (appealing to logic); to these, kairos (timeliness/topicality) is often added.

So you may find an argument more or less persuasive depending on how credible you consider the person making it to be.  Over time we can expect AI systems to become better and better at making persuasive arguments.  A promising direction uses a collection of AI agents that take on the roles of proposing or critiquing ideas, including fact checking.

The effectiveness of an argument is relative to human thought and can be subjective: I may be unconvinced by an argument that you find quite compelling.  AI systems will therefore seek to understand you in order to tailor a line of argumentation that you will find convincing.

Concepts are often fuzzy, e.g. whether someone is short or tall, and moreover context sensitive, since the evaluation depends on the people you are comparing that person to.  Similar considerations apply to concepts like subjectivity, uncertainty, incompleteness, trust, and inconsistency.  Formal logic reflects an impoverished view of the world that disregards the complexity of everyday knowledge.
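The short/tall example above can be sketched with the membership-function idea from fuzzy set theory: the same height gets a different degree of "tallness" depending on the reference group. The sigmoid form and the numbers here are my own illustrative assumptions, not anything defined in this thread:

```python
import math

def tall_membership(height_cm: float, reference_mean_cm: float, spread_cm: float = 15.0) -> float:
    """Degree (0..1) to which a height counts as 'tall' against a reference group."""
    # Sigmoid membership: 0.5 at the group mean, approaching 1 well above it.
    return 1.0 / (1.0 + math.exp(-(height_cm - reference_mean_cm) / spread_cm))

# The same 185 cm person is fairly tall against the general population,
# but not tall at all against a group of basketball players:
print(round(tall_membership(185, 175), 2))  # 0.66
print(round(tall_membership(185, 200), 2))  # 0.27
```

The point is that "tall" has no fixed truth value; its evaluation shifts with context, which is exactly what crisp two-valued logic struggles to express.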

The world is changing and we can expect a dramatic change in database technology as IT catches up with the post-logic world.
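One way to picture what a "post-logic" knowledge store might hold is to attach qualifiers such as certainty and typicality to statements, rather than treating them as hard facts to be proved or refuted. The class and field names below are my own illustrative assumptions (they are not the syntax of any standard notation):

```python
from dataclasses import dataclass

@dataclass
class PlausibleStatement:
    subject: str
    relation: str
    obj: str
    certainty: float   # how sure we are the statement holds (0..1)
    typicality: float  # how typical this value is for the subject (0..1)

# Imperfect knowledge is stored as qualified statements, not hard facts:
kb = [
    PlausibleStatement("swan", "has-colour", "white", certainty=0.9, typicality=0.95),
    PlausibleStatement("swan", "has-colour", "black", certainty=0.9, typicality=0.05),
]

def plausible_colours(kb, subject, min_typicality=0.5):
    """Answer a query with graded, defeasible conclusions instead of proofs."""
    return [s.obj for s in kb
            if s.subject == subject and s.relation == "has-colour"
            and s.typicality >= min_typicality]

print(plausible_colours(kb, "swan"))  # ['white']
```

A query engine over such a store returns plausible answers that can be revised as qualifiers change, rather than theorems that are true forever.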

> On 5 Jul 2024, at 22:27, Owen Ambur <owen.ambur@verizon.net> wrote:
> 
> Hey, Dave, your comments prompted me to engage ChatGPT to learn more about the concepts you've raised here.  
> 
> They remind me of the deliberations of the Credible Web Community Group, some of whose members have hoped that reputation might provide a reasonable basis for determining the credibility of information.
> 
> Personally, I don't find such arguments to be persuasive.  It seems to me that all that truly matters is the reliability of the records themselves.  Given records that can be objectively evaluated to determine that they are what they purport to be, I prefer to apply my own judgment to assess their value in terms of helping me achieve my objectives.
> 
> On the other hand, over time, I expect that AI applications will help us come to such understandings far more efficiently and effectively, by learning what is actually required to achieve human objectives.  In the meantime, it seems to me that we can help such augmented intelligence applications get better, faster in helping us by thinking more clearly about and documenting our values and objectives in an open, standard, machine-readable format, like StratML.
> 
> My dialogue with ChatGPT led it to this conclusion:
> 
> ... it's important to remain aware that no evaluation can be entirely free from subjectivity. Critical thinking and ongoing reassessment of sources and information are essential to maintaining a reliable and objective evaluation process.
> 
> When I asked about the implications with respect to politics, it "choked" or, at least, failed to respond.  Or maybe it just wants me to pay to find out what it "thinks" about that.
> 
> Looking forward to learning why I might be wrong and how I might be able to improve my thinking along these lines.
> 
> Owen Ambur
> https://www.linkedin.com/in/owenambur/
> 
> 
> On Friday, July 5, 2024 at 03:17:54 AM EDT, Dave Raggett <dsr@w3.org> wrote:
> 
> 
> Just to note that machine-readable formats now include natural language, images, video and sound. AI is good at handling imperfect knowledge, e.g. imprecise and context-sensitive information. However, most knowledge is imperfect. In everyday life, argumentation is commonplace, with logic relegated to areas where formal models are a good enough approximation and, moreover, can provide valuable precision.
> 
> The carbon footprint of AI is clearly a problem, and may limit how much it is applied.  As neuromorphic technologies improve, the energy demands will dwindle, but this may take many years to come to fruition. I wonder if techniques to derive symbolic knowledge from LLMs can provide a shorter-term solution, provided we use approaches that target imperfect knowledge, such as the Plausible Knowledge Notation (PKN), where the energy demands should be much lower than for LLMs themselves.
> 
>> On 4 Jul 2024, at 18:14, Owen Ambur <owen.ambur@verizon.net> wrote:
>> 
>> Theoretically, yes, Paola, more mature query services could enable not only more precise discovery of machine-readable information but also more effective analysis and usage of it.
>> 
>> Practically speaking, however, the incumbent search engines are not doing that and the AI/ML services are focusing (and spending unfathomable amounts of money and energy) on making sense of less mature, unstructured information.  I view it as a case of artificial ignorance <https://www.linkedin.com/pulse/artificial-ignorance-owen-ambur/>.
>> 
>> Your message prompted me to engage Claude.ai along this train of thought; it concluded:
>> 
>> This situation raises important questions about the discoverability of structured data formats like XML on the web. It might indicate a need for better practices or standards in how such data is made available and indexed by search engines.
>> 
>> Amen!  Indeed, ChatGPT concludes <https://chatgpt.com/share/214b956a-c402-42ee-8843-708d3501997e>:
>> 
>> By leveraging ... XML structures, search engines can more effectively crawl, index, and present web content, ultimately improving search relevance and user experience.
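>> One well-established instance of such an XML structure is the Sitemap protocol, which tells crawlers which URLs to index and when they last changed.  A minimal sitemap (using one of the StratML documents mentioned in this thread purely for illustration) looks like:
>> 
>> ```xml
>> <?xml version="1.0" encoding="UTF-8"?>
>> <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
>>   <url>
>>     <loc>https://stratml.us/docs/CGPT.xml</loc>
>>     <lastmod>2024-07-05</lastmod>
>>   </url>
>> </urlset>
>> ```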
>> 
>> With respect to strategic plans and performance reports, as well as website "about us" statements, I'm working toward that end at https://search.aboutthem.info/
>> 
>> Note also that U.S. federal agencies have been directed, by law <https://www.linkedin.com/pulse/open-gov-data-act-machine-readable-records-owen-ambur/>, to create and manage their records in machine-readable format.  So the degree to which they begin doing so will be a key indicator of their accountability and trustworthiness.  Moreover, since publishing information in open, standard, machine-readable format is a generic best practice, the same is true of agencies at all levels of government, worldwide.
>> 
>> So this is much bigger than just a techie issue.  It is key to the effectiveness and accountability of governments -- including allies and partners -- in the entire "free world".
>> 
>> Incidentally, during less contentious times, China was among the five nations that initially agreed to work on the StratML standard in 2013 <https://stratml.us/history.htm#2013>.
>> 
>> See also these StratML use cases:
>> 
>> Goal 14: Partnerships & Multi-Organization Groups <https://stratml.us/carmel/iso/UC4SwStyle.xml#_0fc1e310-08a5-11e6-b06f-a2fa45c7ae33> ~ Use the Relationship elements to cross-reference common and complementary objectives in the plans of each member of a partnership, consortium, or other informal group.
>> 
>> Goal 26: Conflict Resolution Services <https://stratml.us/carmel/iso/UC4SwStyle.xml#_203da842-d612-11e6-bf2d-1dd10ebcdb3b> ~ Document the personal values as well as the longer-term goals and near-term objectives of individuals and organizations in conflict.
>>  
>> Goal 27: E-Diplomacy & International Development <https://stratml.us/carmel/iso/UC4SwStyle.xml#_de8587c0-46e2-11e7-9757-2508ff1fc704> ~ Publish plans in StratML format to support establishment of innovative tools for diplomacy and international development.
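>> As a rough illustration of the cross-referencing idea in Goal 14, an objective in one partner's plan might point at a counterpart objective in another partner's plan via a relationship element.  The element names and the target URL below are hypothetical approximations, not the actual StratML schema; consult the schema at stratml.us for the real structure:
>> 
>> ```xml
>> <Objective>
>>   <Name>Expand machine-readable reporting</Name>
>>   <Relationship>
>>     <TypeIndicator>Complementary</TypeIndicator>
>>     <!-- Hypothetical identifier of the counterpart objective in the partner's plan -->
>>     <ReferentIdentifier>https://example.org/partner-plan.xml#_objective-42</ReferentIdentifier>
>>   </Relationship>
>> </Objective>
>> ```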
>> 
>> As the sponsor of the XML and XSD standards, it would be nice to think the W3C might be up for the challenge of more enlightened leadership in their application and usage.  If not, I trust that entrepreneurs will eventually capitalize on that opportunity and creatively destroy the incumbent powers-that-be.
>> 
>> In the meantime, an unbelievable amount of money and energy is being wasted on outmoded practices, which of course serves the interests of those profiting from such inefficiencies.
>> 
>> Owen Ambur
>> https://www.linkedin.com/in/owenambur/
>> 
>> 
>> On Thursday, July 4, 2024 at 02:49:19 AM EDT, Paola Di Maio <paoladimaio10@gmail.com> wrote:
>> 
>> 
>> Congrats Owen
>> for publishing something on the web that machines can find and use
>> Is it because the machine simply looks for the machine-readable info?
>> 
>> On Thu, Jul 4, 2024 at 2:12 AM Owen Ambur <owen.ambur@verizon.net <mailto:owen.ambur@verizon.net>> wrote:
>> When I first asked, ChatGPT disclaimed having any developers, much less a plan.  However, upon prompting, it disclosed the plan outlined in StratML format at https://stratml.us/docs/CGPT.xml
>> 
>> Likewise, Claude.ai was a bit skittish about divulging its objectives but also disgorged some upon prompting, at https://stratml.us/docs/CLD.xml
>> 
>> From my perspective, a good explanation would report, in open, standard, machine-readable format, reliable metrics by which human beings can readily comprehend how well the avowed objectives are being served.
>> 
>> I'll look forward to learning what other alternatives there might be.
>> 
>> Owen Ambur
>> https://www.linkedin.com/in/owenambur/
>> 
>> 
>> On Tuesday, June 11, 2024 at 05:24:00 AM EDT, Dave Raggett <dsr@w3.org <mailto:dsr@w3.org>> wrote:
>> 
>> 
>> First my thanks to Paola for this CG. I’m hoping we can attract more people with direct experience. Getting the CG noticed more widely is quite a challenge! Any suggestions?
>> 
>> 
>>> It has been proposed that without knowledge representation, there cannot be AI explainability
>> 
>> 
>> That sounds somewhat circular as it presumes a shared understanding of what “AI explainability” is.  Humans can explain themselves in ways that are satisfactory to other humans.  We’re now seeing a similar effort to enable LLMs to explain themselves, despite having inscrutable internal representations as is also true for the human brain.
>> 
>> I would therefore suggest that for explainability, knowledge representation is more about the models used in the explanations rather than in the internals of an AI system. Given that, we can discuss what kinds of explanations are effective to a given audience, and what concepts are needed for this.
>> 
>> Explanations further relate to how to make an effective argument that convinces people to change their minds.  This also relates to the history of work on rhetoric, as well as to advertising and marketing!
>> 
>> Best regards,
>> 
>> Dave Raggett <dsr@w3.org <mailto:dsr@w3.org>>
>> 
>> 
>> 
> 
> Dave Raggett <dsr@w3.org>
> 
> 
> 

Dave Raggett <dsr@w3.org>

Received on Saturday, 6 July 2024 10:02:04 UTC