Re: Meronymity

Indeed, we have the difference between a medical device and a digital
medical textbook. Both might be enhancements to what a physician does and
is responsible for. The medical device is FDA-regulated as a device, made
by a corporate manufacturer, and more or less proprietary. The textbook is
not regulated beyond the informal reputation of its authors and publishers,
but the physician takes responsibility for its output because it is
relatively open.

If the digital medical textbook looks like AI, it will be considered
personal AI to the extent that it is transparent to the physician. The
personal AI may then evolve through its interaction with the physician and
their patients. There will be no obvious way to certify that personal AI
through the FDA or some web of trust.

Now, physician A could choose to practice based on a proprietary and opaque
AI because it is certified. They would then be training that AI through
their experience, just as physician B would train an open, non-proprietary
AI. As a patient, do you pick A over B because some part of their AI's
history is certified?

Adrian

On Sun, Apr 28, 2024 at 8:47 PM Alan Karp <alanhkarp@gmail.com> wrote:

> Do we ask for proof of humanity of other software running on behalf of a
> person?  What if a personal AI carries out its task using an application?
> Isn't the human who determines what the software, AI or otherwise, is
> supposed to do the responsible party?
>
> --------------
> Alan Karp
>
>
> On Sun, Apr 28, 2024 at 5:23 PM Adrian Gropper <agropper@healthurl.com>
> wrote:
>
>> At the last IIW, we had a session titled: Should Proof of Humanity apply
>> to my Personal AI?
>> https://docs.google.com/document/d/1qzqLotWLt_5W8leQIRKzPxkm2wV2xWXdwEzc6Iiks_Y/edit
>>
>> The group could not think of a single reason to make a distinction
>> between me and an AI that I control as my delegate. To introduce such a
>> "CAPTCHA on steroids" is to limit technological enhancement to corporations
>> and "others". Will we treat personal technological enhancement the way we
>> treat doping in sports?
>>
>> Who would benefit from imposing such a restriction on
>> technological enhancement?
>>
>> How would we interpret the human right of Freedom of Association and
>> Assembly (Article 20) to exclude open source communities creating open
>> source personal AI that an individual can take responsibility for?
>>
>> Certifying the vendor, provenance, and training data of a personal AI
>> seems like the last thing we would want to do. I hope what Drummond is
>> suggesting applies to AI that is not transparent and controlled by an
>> individual or a community of individuals in a transparent way. How do we
>> see a world where two kinds of AI, personal and "certified", interact?
>>
>> Adrian
>>
>> On Sun, Apr 28, 2024 at 7:16 PM Drummond Reed <
>> Drummond.Reed@gendigital.com> wrote:
>>
>>> Manu has a good point. I have no problem interacting with an AI bot as
>>> long as I can be sure it’s an AI bot—and ideally if I can check its vendor,
>>> provenance, trained data sets, etc.
>>>
>>> Same for a human—I don’t need to know their identity, just their
>>> authenticity.
>>>
>>> Meronymous (is that a word? 😉) VCs could definitely help do that.
>>>
>>> =Drummond
>>>
>>> From: Manu Sporny <msporny@digitalbazaar.com>
>>> Date: Friday, April 26, 2024 at 8:05 AM
>>> To: W3C Credentials CG (Public List) <public-credentials@w3.org>
>>> Cc: Adrian Gropper <agropper@healthurl.com>, Golda Velez <gvelez17@gmail.com>
>>> Subject: Re: Meronymity
>>>
>>> On Wed, Apr 24, 2024 at 11:39 AM Golda Velez <gvelez17@gmail.com> wrote:
>>> > um. not to be a downer, but Quora on the non-technical side is
>>> > absolutely full of manipulation by bad actors, imho...
>>>
>>> Wouldn't the use of VCs provide some level of assurance that the bad
>>> actor isn't faking their credentials? As Adrian mentioned, there can
>>> be reputation associated with those VCs.
>>>
>>> > the thing is it's important to remember there are quite different
>>> > incentives and less verifiability once you are in a non-technical domain,
>>> > especially geopolitical
>>>
>>> Are you saying that there is no combination of verifiable credentials
>>> that could be used to separate the bots from the bad actors from the
>>> good actors with specific expertise?
>>>
>>> Another interesting aspect here is that "the bots" are, probably
>>> within the next decade, going to legitimately exceed the level of
>>> expertise of 99.9% of the population on most subjects that could be
>>> discussed in an online forum. I, for one, welcome our new robot troll
>>> overlords. :P
>>>
>>> -- manu
>>>
>>> --
>>> Manu Sporny - https://www.linkedin.com/in/manusporny/
>>> Founder/CEO - Digital Bazaar, Inc.
>>>
>>> https://www.digitalbazaar.com/
>>>
>>

Received on Monday, 29 April 2024 01:41:09 UTC