Re: [voiceinteraction] minutes April 10, 2024

Thanks very much for these very detailed minutes!

Best,            Gérard
_____________________________________________________________________ 
Gérard CHOLLET,PhD,DR,CNRS-SAMOVAR,gerard.chollet@TELECOM-SudParis.eu 
Institut Polytechnique de Paris, IMT-TSP, 9 rue Charles Fourier, 91011 Evry 
Tel: +33 1 75319628 , http://perso.telecom-paristech.fr/~chollet/ 
_____________________________________________________________________

----- Original Message -----
From: "Deborah Dahl" <Dahl@conversational-Technologies.com>
To: "public-voiceinteraction" <public-voiceinteraction@w3.org>
Cc: "SANSEN Hugues" <hugues.sansen@telecom-sudparis.eu>
Sent: Wednesday, 10 April 2024 18:57:05
Subject: [voiceinteraction] minutes April 10, 2024

Today's call featured a guest presentation from Hugues Sansen (a colleague of Gerard Chollet) from Telecom Sudparis about his work
on an AI companion that essentially records your life, as I understand it. He discussed some interesting privacy problems around
the information recorded by this application. Hugues and Gerard provided some links with additional information, which
I've included in the minutes. Our next steps will be to review this information and then decide how we can make use of it in the
Voice Interaction Community Group.
Please let me know if there's anything missing or incorrect in the minutes.
There was a question about joining the mailing list -- this can be done by following the "Join or leave this group" link on our home
page, https://www.w3.org/community/voiceinteraction/.

The formatted minutes can be found at https://www.w3.org/2024/04/10-voiceinteraction-minutes.html, and are listed as text below.

   [1]W3C

      [1] https://www.w3.org/

                             - DRAFT -
                           Voice Interaction

10 April 2024

   [2]IRC log.

      [2] https://www.w3.org/2024/04/10-voiceinteraction-irc

Attendees

   Present
          debbie, dirk, gerard, hugues

   Regrets
          Dirk

   Chair
          debbie

   Scribe
          ddahl

Contents

Meeting minutes

   debbie: explains interoperability

   hugues: this is dangerous because your personal assistant knows
   a lot of information that is private
   . I don't know how to teach a machine the different layers of
   privacy

   gerard: need to anonymize the question

   hugues: I love the idea

   hugues: I am developing two things. I am developing a simple,
   rule-based dialog system in the interests of getting something
   to work rapidly
   . when I get the answer from the person I will analyze it in an
   embedded AI
   . from that I build a Knowledge Graph of the person, then I can
   query the KG about the person to build the life of the person
   to be able to write my bio
   . you can tell things to the ghost writer that you don't want
   to be disclosed to certain people. The human ghost writer can
   understand that but I don't know how to tell that to the
   machine.
   . if I give my companion my credit card PIN, it can buy
   something, but I don't want anyone else to get my PIN
   . people are not supposed to access my companion without
   authorization, or understand different levels of privacy
   . you can't be sure that people will respect different levels
   of confidence

   gerard: we have to trust our majordomo "personal assistant"
   . like Knowledge Navigator

   hugues: how to define rules to keep information secure
   . the KG can mark information as private by adding links to
   each vertex to indicate level of privacy
   . from 1-10
   . the issue is that in a dialog not everything is private, and
   it's hard to say "this is private"
   . the issue is that you might develop too much confidence in
   the companion
   . this happens with humans, you might say things that you
   forget are private.
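   The privacy-level scheme Hugues describes -- links on each
   vertex marking a level from 1-10 -- could be sketched roughly as
   follows. This is purely illustrative; the class and field names
   are my assumptions, not the actual system:

```python
# Hypothetical sketch: a knowledge graph whose vertices carry a privacy
# level from 1 (public) to 10 (most private), as discussed in the minutes.
# Names and structure are illustrative assumptions, not the actual system.

class Vertex:
    def __init__(self, label, privacy=1):
        self.label = label
        self.privacy = privacy  # 1 = public ... 10 = most private
        self.edges = []         # outgoing links to other vertices

    def link(self, other):
        self.edges.append(other)

def query(vertex, clearance):
    """Return the vertex label only if the requester's clearance
    meets or exceeds the vertex's privacy level."""
    if clearance >= vertex.privacy:
        return vertex.label
    return None  # withheld

person = Vertex("Hugues", privacy=1)
pin = Vertex("credit card PIN", privacy=10)
person.link(pin)

print(query(pin, clearance=10))  # the companion itself may use the PIN
print(query(pin, clearance=3))   # anyone else gets nothing back
```

   The sketch makes Hugues's point concrete: the PIN can be used by
   the companion (full clearance) but is withheld from any
   lower-clearance requester.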

   debbie: link?

   hugues: no, but arxiv paper to be presented next week

   gerard: presented this work at Conversational Interaction two
   years ago with Speech Morphing

   hugues: what is new is that now we are taking advantage of
   embedded AI
   . now we are asking for weather, Wikipedia information
   . my own data are not public
   . I've been thinking about this kind of interaction for a long
   time
   . there are semantics attached to privacy; for example, people
   in the defense industry know things that they don't want to
   talk about at home

   [3]https://hal.science/hal-04378982/document

      [3] https://hal.science/hal-04378982/document

   hugues: this is a very complex subject
   . for example, in the defense industry, they compartmentalize
   information
   . this is a recurring question

   [4]https://dumas.ccsd.cnrs.fr/TELECOM-SUDPARIS/hal-04392089v1

      [4] https://dumas.ccsd.cnrs.fr/TELECOM-SUDPARIS/hal-04392089v1

   debbie: two problems: how to designate private information in
   the KG, and how to tell the companion that some information is
   private

   hugues: should companion ask the user if information is private

   debbie: sometimes you want the companion to buy things for you
   . for example, Alexa can buy things for you
   . could you just ask the user to tell the companion to keep
   certain information private?

   hugues: yes, within a session
   . there is a big problem with the definition of privacy
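   The session-scoped idea above -- the user telling the companion
   to keep certain information private -- could be sketched like
   this. Names and behavior are my assumptions for illustration,
   not Hugues's implementation:

```python
# Hypothetical sketch: within a session the user can mark a fact as
# private, and the companion filters what it repeats to non-owners.
# Everything here (names, the owner flag, the example facts) is invented.

class Session:
    def __init__(self):
        self.facts = {}  # fact text -> private?

    def tell(self, fact, private=False):
        """Record a fact, optionally marked private for this session."""
        self.facts[fact] = private

    def recall(self, requester_is_owner):
        """The owner sees everything; others see only non-private facts."""
        return [fact for fact, private in self.facts.items()
                if requester_is_owner or not private]

s = Session()
s.tell("favorite color is blue")
s.tell("PIN is 1234", private=True)

print(s.recall(requester_is_owner=False))  # ['favorite color is blue']
```

   As the discussion notes, this only works within a session; the
   hard part is deciding privacy for information the user never
   explicitly marked.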

   debbie: a lot of people want to tell you about AI and privacy,
   but I don't know if they have a clear idea

   hugues: you can keep all information in a closed box

   hugues: the simple fact that your companion connects to
   something means something
   . I'm now working on a closed system, that doesn't connect to
   the cloud, everything is kept inside the box
   . mostly I query the system myself
   . from that information I can build a biography of the user;
   when the system writes the biography, certain information needs
   to be kept out
   . some information can be provided to my wife, children

   dirk: do you label the information?

   hugues: the different kinds of information are recorded in the
   KG
   . in a graph you have many roads to a certain point, you might
   get to the information from an unprotected path
   . the user has to say "this information has to be protected"
   . I used to work for the defense industry, we knew that
   everything in a room has to be confidential, and it's still
   confidential after we leave the room
   . how can a machine understand that?

   [5]https://hal.science/hal-00611090

      [5] https://hal.science/hal-00611090
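   The "many roads" problem Hugues raises -- a fact may still be
   reachable through an unprotected path even when one route to it
   is protected -- can be illustrated with a small reachability
   check. The graph and labels are invented for illustration:

```python
# Hypothetical sketch of the "many roads" problem: if privacy is attached
# to paths (edges) rather than to the fact itself, the fact may still be
# reachable through an unprotected route. Graph and names are illustrative.
from collections import deque

# adjacency list; each edge is (target, protected?)
graph = {
    "user": [("employer", False), ("diary", True)],
    "employer": [("salary", False)],  # unprotected route to salary
    "diary": [("salary", True)],      # protected route to the same fact
    "salary": [],
}

def reachable_unprotected(graph, start, goal):
    """True if goal can be reached from start without crossing a
    protected edge -- i.e. path-level protection leaks the fact."""
    seen, todo = {start}, deque([start])
    while todo:
        node = todo.popleft()
        if node == goal:
            return True
        for target, protected in graph[node]:
            if not protected and target not in seen:
                seen.add(target)
                todo.append(target)
    return False

# "salary" was meant to be private, but an unprotected path exists:
print(reachable_unprotected(graph, "user", "salary"))  # True
```

   This suggests why the protection has to sit on the vertex (the
   fact itself) rather than on any single path to it, which is what
   the user's explicit "this information has to be protected"
   marking accomplishes.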

   dirk: also have to consider trusted environment

   hugues: in a trusted system we can make rooms so that the
   system cannot disclose certain things

   hugues: say I want to buy trousers, the seller will recommend
   trousers based on my skin color, eye color, information that I
   would like to keep confidential

   dirk: you can derive confidential information from other
   information that you already know
   . privacy is a lot more than on/off
   . very interesting work

   [6]https://hal.science/hal-04181551

      [6] https://hal.science/hal-04181551

   [7]https://www.conversationalinteraction.com/_files/ugd/
   dbc594_69b6e727f5c64020afdc801291a133fc.pdf

      [7] https://www.conversationalinteraction.com/_files/ugd/dbc594_69b6e727f5c64020afdc801291a133fc.pdf

   debbie: we should take a few weeks to take a look at these
   papers and invite hugues back

   dirk: how can we make use of this work?

   hugues: the fact that we don't have the answers doesn't mean we
   shouldn't do the work
   . it is a matter of semantics

   debbie: can send hugues the link to subscribe

   hugues: this is the most important subject in AI


    Minutes manually created (not a transcript), formatted by
    [8]scribe.perl version 221 (Fri Jul 21 14:01:30 2023 UTC).

      [8] https://w3c.github.io/scribe2/scribedoc.html

Received on Wednesday, 10 April 2024 17:08:34 UTC