- From: Janina Sajka <janina@rednote.net>
- Date: Wed, 19 May 2021 15:05:26 -0400
- To: "Joshue O'Connor" <joconnor@w3.org>
- Cc: public-rqtf@w3.org
Yeah, so looking a bit further into conveying emotion ...

If one googles for "artificial emotional intelligence," one gets some very
interesting results. Not sure how much has made it into deployable libraries
yet, but there's definitely such a field of study, and its implications are
likely very relevant across disability as well as cross-cultural application.
I'm reading a book by one scientist in the field that I pulled from NLS in
the last month.

Of course this also further illustrates how different our opportunities are
with canned vs. real-time media.

Joshue O Connor writes:
> Thanks for the input Janina, and yes, if we are to continue/progress with
> user need, we need to be clear around what represents quality vs an 'out of
> the box ersatz avatar'.
>
> I also wish we didn't have to use that term avatar, the budding Sanskritian
> in me objects!
>
> Josh
>
> > Janina Sajka <mailto:janina@rednote.net>
> > Wednesday 19 May 2021 16:17
> > I thought I should say for the record what I raised my hand to say when
> > we ran out of time ...
> >
> > Regarding avatars ...
> >
> > I was highly impressed by the details that emerged about why avatars
> > tend to fail SL users. I'm thinking we should capture a high-level
> > description of what would be required to create a successful avatar in
> > the XAUR by way of answering any engineering interest in moving to their
> > use prematurely.
> >
> > I believe the explanation is that SL captures far more than the words
> > which are captured in a text transcript of what's being said. SL
> > attempts to communicate more of the conversation than just the verbal
> > language content we've learned to capture with paper and ink.
> >
> > Facial expression -- there are some 43 muscles that control facial
> > expression, though if one googles this question the answers vary: 43,
> > 42, 33 ...
> >
> > Implication: anyone building a signing avatar should provide a face and
> > 43 functioning muscular variables.
> >
> > Similarly, there's the challenge to understand the nuance of vocal
> > expression. Consider the word "O":
> >
> > O (as in startled surprise)
> > O? (as in really?)
> > O (as in oops, which sometimes comes out as "o, o")
> >
> > There are more for just this one word, but I believe I've made my point.
> >
> > If we take this tack we avoid a prescription against engineering
> > development and supplant it with the far more meaningful challenge of
> > what it takes to design a satisfying avatar.
> >
> > Thoughts?
> >
> > Janina
> >
>
> --
> Emerging Web Technology Specialist/Accessibility (WAI/W3C)

--

Janina Sajka
https://linkedin.com/in/jsajka

Linux Foundation Fellow
Executive Chair, Accessibility Workgroup: http://a11y.org

The World Wide Web Consortium (W3C), Web Accessibility Initiative (WAI)
Co-Chair, Accessible Platform Architectures http://www.w3.org/wai/apa
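
One way to picture the "43 functioning muscular variables" point above is as a
per-frame parameter set on an avatar's face rig. The following is a minimal
sketch in Python; FacePose and FACIAL_MUSCLES are assumed, illustrative names
rather than part of any existing avatar toolkit, and only a few of the roughly
43 muscles are listed.

    from dataclasses import dataclass, field
    from typing import Dict

    # Illustrative sketch only: FACIAL_MUSCLES and FacePose are assumed names,
    # not an existing avatar API. The idea from the thread is that a signing
    # avatar needs on the order of 43 independently controllable facial-muscle
    # parameters, alongside the manual (hand/arm) channel.

    FACIAL_MUSCLES = [
        "frontalis",              # brow raise
        "corrugator_supercilii",  # brow furrow
        "orbicularis_oculi",      # eye narrowing
        "zygomaticus_major",      # smile
        # ... roughly 39 more, one entry per controllable facial muscle
    ]

    @dataclass
    class FacePose:
        """One frame of facial state: activation per muscle, each in [0.0, 1.0]."""
        activations: Dict[str, float] = field(
            default_factory=lambda: {m: 0.0 for m in FACIAL_MUSCLES}
        )

        def set(self, muscle: str, level: float) -> None:
            if muscle not in self.activations:
                raise KeyError(f"unknown muscle: {muscle}")
            self.activations[muscle] = min(1.0, max(0.0, level))

    # A signed utterance would then be a timed sequence of FacePose frames,
    # e.g. raised brows marking a yes/no question:
    pose = FacePose()
    pose.set("frontalis", 0.8)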
Received on Wednesday, 19 May 2021 19:05:41 UTC