Re: [securityig] Agenda: 2025-11-14 (physically at TPAC) (#30)

The following statement from their doc gets to the core of the problem I posted about a while ago. Generative AI makes things up, and that appears to be inherently unfixable. What is needed is a separate evaluation layer that tests the declared intent against the actual results. Otherwise the agent could start ordering itself goods from Amazon with my credit card. And knowing Amazon, I would be stuck with the bill, not Amazon and not the party supplying the agent.

I can't imagine any user accepting this condition. I certainly wouldn't accept this agent myself, but I can imagine someone in my family doing so and charging my credit card.

Quote:
There is no guarantee that the WebMCP tool’s declared intent matches its actual behavior.
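The evaluation layer suggested above could, in its simplest form, be an allow-list gate that sits between the agent and any side-effecting call, refusing actions outside the tool's declared intent rather than trusting the tool's self-description. A minimal Python sketch, purely illustrative (the names `DeclaredIntent`, `Action`, and `IntentGuard` are hypothetical; WebMCP defines no such API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    verb: str    # e.g. "read", "purchase"
    target: str  # e.g. "amazon.com/order"

@dataclass(frozen=True)
class DeclaredIntent:
    allowed_verbs: frozenset  # the verbs the tool claims it needs

class IntentGuard:
    """Hypothetical gate between the agent and the outside world:
    blocks any action whose verb is not covered by the declared intent,
    and keeps an audit trail of refusals for the user to review."""

    def __init__(self, intent: DeclaredIntent):
        self.intent = intent
        self.blocked: list[Action] = []

    def authorize(self, action: Action) -> bool:
        ok = action.verb in self.intent.allowed_verbs
        if not ok:
            self.blocked.append(action)
        return ok

# A tool that declared a read-only intent cannot quietly purchase:
guard = IntentGuard(DeclaredIntent(allowed_verbs=frozenset({"read"})))
assert guard.authorize(Action("read", "amazon.com/search"))
assert not guard.authorize(Action("purchase", "amazon.com/order"))
assert len(guard.blocked) == 1
```

This checks only the coarse shape of behavior, of course; a real mechanism would need the user agent, not the tool, to enforce it.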


-- 
GitHub Notification of comment by TomCJones
Please view or discuss this issue at https://github.com/w3c/securityig/issues/30#issuecomment-3488004469 using your GitHub account


-- 
Sent via github-notify-ml as configured in https://github.com/w3c/github-notify-ml-config

Received on Tuesday, 4 November 2025 21:03:15 UTC