- From: Jason Mayes via GitHub <noreply@w3.org>
- Date: Fri, 13 Feb 2026 18:44:26 +0000
- To: public-webai@w3.org
I personally believe that any action that could have consequences of significance, financial or medical for example, should always have a human in the loop (at least at the time of writing, when AGI does not exist). For example, I would be happy for an agent to browse Amazon for me to find the best products that fit some criteria and add them to my cart, but I would then want to be asynchronously informed when the task was complete, so I could skim over the chosen items and then pay myself.

I also think that, depending on the task, there may be situations where you do not mind the outcome so long as the criteria are met. For example: "find me some black socks under 10 USD with over 4-star reviews" - I really do not care what it then chooses. If the criteria can be met and guaranteed for that price and review score, then I am happy to be out of the loop, as the cost is insignificant to me compared with the time it would take to check out manually. Everyone will have a different preference for risk vs time saved, so this should be customizable to some degree.

I quite like how Google Antigravity does this: at startup you set some preferences as to how much check-in the agent must do, and then on a case-by-case basis (e.g. when it wants to run some command-line binary) I choose to allow, or to allow and remember for the future, so it builds up a reputation with me in a more granular way.

--
GitHub Notification of comment by jasonmayes
Please view or discuss this issue at https://github.com/w3c/webai/issues/7#issuecomment-3898771515 using your GitHub account

--
Sent via github-notify-ml as configured in https://github.com/w3c/github-notify-ml-config
Received on Friday, 13 February 2026 18:44:27 UTC