RE: Call for Implementations of EmotionML published

Patrick,

Certainly. I will contact Communications of the ACM to get details on contributing articles/papers.

We earlier agreed to target broadly circulated publications.

Edmon

 -----Original Message-----
From:   Patrick Gebhard [mailto:patrick.gebhard@dfki.de]
Sent:   Friday, September 28, 2012 09:41 AM Eastern Standard Time
To:     Begoli, Edmon
Cc:     Felix.Burkhardt@telekom.de; R.Cowie@qub.ac.uk; christian@becker-asano.de; tim.llewellynn@nviso.ch; schuller@tum.de; kazemzad@usc.edu; www-multimodal@w3.org; Liguori, Michael
Subject:        Re: Call for Implementations of EmotionML published

Hello Edmon, hello all,

I would like to contribute too, if this is still feasible. I'll finish an EmotionML version of my affect simulation model (ALMA) by next week. What do you think?

Best regards,
Patrick

--
Dr. Patrick Gebhard, Senior Researcher, patrick.gebhard@dfki.de
http://www.dfki.de/~gebhard, Phone: +49681 3023191, Fax: +49681 3025341
--
Official DFKI coordinates:
Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH
Trippstadter Strasse 122, D-67663 Kaiserslautern, Germany
Management:
Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Chairman)
Dr. Walter Olthoff
Chairman of the Supervisory Board: Prof. Dr. h.c. Hans A. Aukes
Amtsgericht Kaiserslautern, HRB 2313

On 21.09.2012 at 18:31, "Liguori, Michael" <mvliguori@whataremindsfor.com> wrote:


        Hello,

        I'm new to the group and would like to contribute to the multimodal work.

        I'm not sure if you are worried about change history on Google Docs. I
        have heard of writers/authors starting to use GitHub for handling
        version control of their books. I haven't used it personally for
        documents, just with code.

        I could spend some time finding out whether that is a possible solution
        for you, if you are worried about change control history.

        --
        Best Regards,

        Michael V. Liguori
        President - Founder
        What Are Minds For, Inc.
        201 280-1677
        mvliguori@whataremindsfor.com
        http://whataremindsfor.com


        On Thu, Sep 20, 2012 at 10:43 AM, Begoli, Edmon <begolie@ornl.gov> wrote:



                If we all, or at least a few of us, agree to co-author a paper, I could
                start a draft.

                I am thinking of a journal paper for ACM Computer or a similar venue.

                Does anyone have a suggestion on how to collaborate online on a paper?

                I use Google Docs, but I am open to any suggestion or collaboration tool
                - especially if available through the W3C.

                Regards,
                Edmon
                ________________________________________
                From: Felix.Burkhardt@telekom.de [Felix.Burkhardt@telekom.de]
                Sent: Thursday, September 20, 2012 10:34 AM
                To: Begoli, Edmon; R.Cowie@qub.ac.uk
                Cc: christian@becker-asano.de; patrick.gebhard@dfki.de;
                tim.llewellynn@nviso.ch; schuller@tum.de; kazemzad@usc.edu;
                www-multimodal@w3.org
                Subject: RE: Call for Implementations of EmotionML published

                Thanks Edmon
                I'm not aware that the W3C collects technical reports anywhere for the
                systems implementing its standards; this question would probably best go
                to Kazuyuki.
                Didn't you start an initiative for a joint paper on our EmotionML
                implementations a while ago?
                I think it's a good idea.
                I have some other business to attend to next week and then a week of
                vacation, and will start to analyze the implementation reports from the
                second week of October on.
                Cheers,
                Felix

                -----Original Message-----
                From: Begoli, Edmon [mailto:begolie@ornl.gov]
                Sent: Thursday, 20 September 2012 16:25
                To: Burkhardt, Felix; R.Cowie@qub.ac.uk
                Cc: christian@becker-asano.de; patrick.gebhard@dfki.de;
                tim.llewellynn@nviso.ch; schuller@tum.de; kazemzad@usc.edu;
                www-multimodal@w3.org
                Subject: RE: Call for Implementations of EmotionML published

                Felix et al,

                The implementation report for EMLPy, a library that generates
                EmotionML-compliant documents, is included.

                Is there interest in having a technical report available on the site?

                Thank you,
                Edmon
                ________________________________________
                From: Felix.Burkhardt@telekom.de [Felix.Burkhardt@telekom.de]
                Sent: Monday, September 17, 2012 5:58 AM
                To: R.Cowie@qub.ac.uk
                Cc: christian@becker-asano.de; patrick.gebhard@dfki.de;
                tim.llewellynn@nviso.ch; Begoli, Edmon; schuller@tum.de; kazemzad@usc.edu;
                www-multimodal@w3.org
                Subject: RE: Call for Implementations of EmotionML published

                Thanks Roddy,
                That's good news.
                Now the next step, as described in [1],
                would be to deliver the report in the form of an XML file with the following format:

                <system-report name="YOUR-SYSTEM-NAME-HERE">
                    <testimonial>YOUR-WELL-FORMED-TESTIMONIAL-CONTENT-HERE</testimonial>
                    <assert id="100" res="pass|fail|not-impl">OPTIONAL-NOTES-HERE</assert>
                </system-report>

                What the assert ids mean is described in [2], e.g.

                EXAMPLE
                <assert id="221" res="pass"> </assert>
                means that in your implementation, all dimension elements contain a
                "name" attribute.

                Or, another example:
                <assert id="118" res="not-impl"> </assert>
                means that your implementation ignores appraisal sets (and elements).
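
                For anyone assembling the report programmatically, here is a minimal
                Python sketch. The two assert entries just restate the examples above;
                the output file name is arbitrary.

                import xml.etree.ElementTree as ET

                # Build the <system-report> skeleton described above.
                report = ET.Element("system-report", name="YOUR-SYSTEM-NAME-HERE")
                testimonial = ET.SubElement(report, "testimonial")
                testimonial.text = "YOUR-WELL-FORMED-TESTIMONIAL-CONTENT-HERE"

                # One <assert> per test assertion from [2]; results from the examples above.
                results = {
                    "221": ("pass", "all dimension elements contain a name attribute"),
                    "118": ("not-impl", "appraisal sets (and elements) are ignored"),
                }
                for assert_id, (res, note) in results.items():
                    a = ET.SubElement(report, "assert", id=assert_id, res=res)
                    a.text = note

                ET.ElementTree(report).write("system-report.xml", encoding="utf-8")

                Running it writes a file in the shape of the skeleton above, which can
                then be mailed to the list.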

                I attach my report from the Speechalyzer as a sample, in case that helps.
                Once done, this file should be submitted to

                www-multimodal@w3.org

                Greetings,
                Felix

                [1] http://www.w3.org/2002/mmi/2012/emotionml-irp/#ackA

                [2] http://www.w3.org/2002/mmi/2012/emotionml-irp/#test_assertions



                -----Original Message-----
                From: Roddy Cowie [mailto:R.Cowie@qub.ac.uk]
                Sent: Saturday, 15 September 2012 16:16
                To: Burkhardt, Felix; kazemzad@usc.edu
                Cc: christian@becker-asano.de; patrick.gebhard@dfki.de;
                tim.llewellynn@nviso.ch; begolie@ornl.gov; schuller@tum.de;
                www-multimodal@w3.org
                Subject: RE: Call for Implementations of EmotionML published

                Dear all,
                             We have uploaded a version of our trace program, Gtrace,
                which generates EmotionML outputs, to meet the revised deadline. The
                program and a manual can be downloaded from

                http://go.qub.ac.uk/GTrace


                Below is a sample output, which I have to say looks more intelligible than
                our old format.
                Roddy Cowie

                <emotionml version="1.0"
                    xmlns="http://www.w3.org/2009/10/emotionml"
                    xmlns:imdi="http://www.mpi.nl/IMDI/Schema/IMDI">
                  <info>
                    <imdi:Actors>
                      <imdi:Actor>
                        <imdi:Role>Annotator</imdi:Role>
                        <imdi:Name>rc14913</imdi:Name>
                      </imdi:Actor>
                    </imdi:Actors>

                    <imdi:Session_Type>
                      <imdi:Date>14/09/2012</imdi:Date>
                      <imdi:Time>13:50</imdi:Time>
                      <imdi:Name>Bear.wmv</imdi:Name>
                    </imdi:Session_Type>
                  </info>

                  <emotion dimension-set="http://www.w3.org/TR/emotion-voc/xml#fsre-dimensions">
                    <dimension name="potency">
                      <trace freq="10Hz"
                          samples="0.475 0.527 0.710 0.902 0.932 0.937 0.530 0.163 0.091 0.122 0.628 0.645 0.639"/>
                    </dimension>
                    <reference uri="Bear.wmv#t=0.200,1.500"/>
                  </emotion>
                </emotionml>
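
                As an aside for consumers, the trace above is easy to read back
                programmatically. A minimal Python sketch, assuming the sample is saved
                locally as gtrace-output.xml (the file name is an assumption):

                import xml.etree.ElementTree as ET

                NS = {"em": "http://www.w3.org/2009/10/emotionml"}

                tree = ET.parse("gtrace-output.xml")
                for dim in tree.iterfind(".//em:dimension", NS):
                    trace = dim.find("em:trace", NS)
                    rate = float(trace.get("freq").rstrip("Hz"))        # "10Hz" -> 10.0
                    samples = [float(s) for s in trace.get("samples").split()]
                    # Pair each sample with its offset (seconds) from the start of the trace.
                    times = [i / rate for i in range(len(samples))]
                    print(dim.get("name"), list(zip(times, samples))[:3])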

                ________________________________________
                From: Felix.Burkhardt@telekom.de [Felix.Burkhardt@telekom.de]
                Sent: Tuesday, August 28, 2012 1:55 PM
                To: kazemzad@usc.edu
                Cc: christian@becker-asano.de; patrick.gebhard@dfki.de;
                tim.llewellynn@nviso.ch; Roddy Cowie; begolie@ornl.gov; schuller@tum.de;
                www-multimodal@w3.org
                Subject: RE: Call for Implementations of EmotionML published

                Thanks Abe,
                Yes please, submit to the list.
                To quote Kaz:

                        Could you please send your implementation report to the MMI public
                        list (www-multimodal@w3.org) as the EmotionML Candidate
                        Recommendation announcement [1] says?

                [1] http://lists.w3.org/Archives/Public/www-multimodal/2012May/0010.html


                And an interesting point you raise about the "no space" requirement: I
                myself was not aware of it, and there is an example ("being hurt") in the
                official WD by Marc and Catherine [2] that includes a space.
                Also, the spec [3] says:

                name: a name for the item, used to refer to this item. An <item> MUST NOT
                have the same name as any other <item> within the same <vocabulary>.

                So, I don't see a problem with spaces in names for the vocabulary.
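
                For what it's worth, that uniqueness constraint is easy to check in
                code even when names contain spaces. A minimal Python sketch (the file
                name is an assumption; vocabularies are assumed to be in the EmotionML
                namespace):

                import xml.etree.ElementTree as ET
                from collections import Counter

                NS = {"em": "http://www.w3.org/2009/10/emotionml"}
                tree = ET.parse("vocabulary.xml")
                for vocab in tree.iterfind(".//em:vocabulary", NS):
                    names = [item.get("name") for item in vocab.iterfind("em:item", NS)]
                    # Spaces are fine; only duplicates violate the MUST NOT above.
                    dupes = [n for n, c in Counter(names).items() if c > 1]
                    if dupes:
                        print("duplicate item names:", dupes)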

                Cheers,
                Felix

                [2] http://www.w3.org/TR/2011/WD-emotion-voc-20110407/

                [3] http://www.w3.org/TR/2012/CR-emotionml-20120510/#s3.1.2



                From: abe.kazemzadeh@gmail.com [mailto:abe.kazemzadeh@gmail.com] On
                behalf of abe kazemzadeh
                Sent: Tuesday, 28 August 2012 09:07
                To: Burkhardt, Felix
                Cc: christian@becker-asano.de; patrick.gebhard@dfki.de;
                tim.llewellynn@nviso.ch; r.cowie@qub.ac.uk; begolie@ornl.gov;
                schuller@tum.de; marc.schroeder@dfki.de
                Subject: Re: Call for Implementations of EmotionML published

                Hi Felix and all,
                Here's an implementation report for the EMO20Q agent that you demoed.
                I'm not sure if this is the right format for the report... Let me know if
                it needs any fixing. I also have a human-readable version (PDF) as well as
                the XML format given in http://www.w3.org/2002/mmi/2012/emotionml-irp/ .
                The only issue is that there were a few words in our vocabulary with
                spaces (e.g., "let down"). In a computer-readable format, it is possible
                to record this as "letDown", but if the no-space requirement is not
                strongly motivated, I think it might make sense to accept spaces (e.g.,
                "pissed off", "culture shock", or maybe "deja vu". It seems like
                multiword emotions eventually get lexicalized, like "homesick" or
                "carefree", but one could make the case for allowing spaces in order to
                make the format more general).
                I wasn't sure if I should submit this report to the list mentioned on the
                specification site, www-multimodal@w3.org. I haven't been following this
                list, so please let me know if I should join and submit via the list or if
                the organizers here are collecting them from several threads, like this one.
                I'll be on vacation and traveling for the next 3 weeks, but I should be
                able to get to email within a day or so.
                Thanks,
                Abe
                On Thu, Aug 23, 2012 at 9:23 AM, abe kazemzadeh <abe.kazemzadeh@gmail.com>
                wrote:


                        Hi Felix,

                        On Thu, Aug 23, 2012 at 1:18 AM, <Felix.Burkhardt@telekom.de> wrote:


                                Congratulations, I just played it and it only took 12 questions to
                                guess my emotion (jealousy). There was only one strange situation, when
                                I first answered "no" to the question "is it like sadness?" and the
                                next question was "is it sadness?".



                        Thanks for playing the emo20q demo! I'm glad it guessed correctly, but
                        you're right, there are some non-sequitur responses. I'm still trying
                        to decide whether more data or an improved algorithm will be the best
                        way to fix these...



                                So when will you send the report? Are you clear on the format?



                        I hope to send it soon. I've reviewed the report requirements
                        (http://www.w3.org/2002/mmi/2012/emotionml-irp/ ). It seems clear, so
                        no questions at the moment, but if there are any example reports
                        available, that might help.



                                You're all aware we extended the deadline to mid-September?



                        I wasn't aware of the extension, but that's great.



                                I'll be at the EUSIPCO conference in Bucharest next week, in case
                                anyone is also there and we could meet.



                        Have a good trip. I just checked with Shri; unfortunately, no one from
                        SAIL is going to be at EUSIPCO this year.

                        Take care,
                        Abe



                                -----Original Message-----
                                From: abe kazemzadeh [mailto:abe.kazemzadeh@gmail.com]
                                Sent: Wednesday, 22 August 2012 22:08
                                To: Burkhardt, Felix
                                Cc: christian@becker-asano.de; patrick.gebhard@dfki.de;
                                tim.llewellynn@nviso.ch; r.cowie@qub.ac.uk; begolie@ornl.gov;
                                schuller@tum.de; marc.schroeder@dfki.de
                                Subject: Re: Call for Implementations of EmotionML published

                                Hi Felix,

                                I'm sorry that I was delayed with the implementation report for my
                                use of EmotionML. I just recently got a usable demo working, and if
                                I could still submit a report, I would be very glad to help with the
                                EmotionML effort. The demo is at
                                http://ark.usc.edu/~abe/wsgi_questioner . It basically uses the
                                EmotionML vocabulary idiom with a list of 110 emotion words for
                                implementing emotion twenty questions (EMO20Q). After each question,
                                the agent updates the probabilities/potentials associated with each
                                word, and hopefully the belief update narrows down the candidate
                                words (lowers the entropy of the categorical distribution over the
                                vocabulary) so that the agent can guess the emotion word in fewer
                                than 20 questions.
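
                                For illustration, here is a toy sketch of that belief update in
                                Python. The four-word vocabulary, the likelihoods, and the update
                                rule are all invented for the example; they are not the agent's
                                actual parameters, which are learned from EMO20Q dialog data.

                                import math

                                # Toy four-word vocabulary with uniform initial weights
                                # (the real agent uses a 110-word EmotionML vocabulary).
                                belief = {w: 1.0 for w in ["jealousy", "sadness", "let down", "pride"]}

                                def normalize(b):
                                    total = sum(b.values())
                                    return {w: p / total for w, p in b.items()}

                                def entropy(b):
                                    return -sum(p * math.log2(p) for p in b.values() if p > 0)

                                def update(b, likelihood):
                                    # likelihood[w] ~ P(answer | emotion is w); Bayes-style reweighting.
                                    return normalize({w: p * likelihood.get(w, 0.5) for w, p in b.items()})

                                belief = normalize(belief)
                                print("entropy before:", entropy(belief))
                                # Answer "no" to "is it like sadness?" -- made-up likelihoods.
                                belief = update(belief, {"sadness": 0.05, "let down": 0.3,
                                                         "jealousy": 0.6, "pride": 0.8})
                                print("entropy after:", entropy(belief),
                                      "best guess:", max(belief, key=belief.get))

                                Each answer multiplies in a likelihood and renormalizes, so
                                informative answers shrink the entropy and sharpen the guess.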

                                I would have submitted the report earlier, but it just wasn't ready.
                                Actually, EmotionML helped make the emo20q demo practically usable:
                                earlier I had been serializing a big object in between the HTTP
                                requests, but now I only serialize an EmotionML vocabulary (with
                                associated weights) and a dialog turn history.
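
                                A rough sketch of that lighter-weight session state (the field
                                names are invented for illustration, not the demo's actual format):

                                import json

                                # Hypothetical per-session state: vocabulary weights plus the dialog
                                # turn history, instead of pickling a big agent object between HTTP
                                # requests.
                                state = {
                                    "vocabulary": {"jealousy": 0.34, "sadness": 0.03, "let down": 0.08},
                                    "turns": [{"question": "is it like sadness?", "answer": "no"}],
                                }
                                blob = json.dumps(state)      # small enough to stash in the session store
                                restored = json.loads(blob)   # rebuild the beliefs on the next request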



                                Thanks,
                                Abe


                                On Mon, Jun 18, 2012 at 2:37 AM, <Felix.Burkhardt@telekom.de> wrote:


                                        Hi prospective implementers of EmotionML,
                                        This is a reminder to deliver implementation reports by 10th
                                        August; I attach my own implementation report as a sample.


                                        Marc has sadly left us and I'm the new editor of EmotionML, so if
                                        you have any questions I'd be happy to assist you.


                                        It would be great to get some feedback on who actually works on
                                        implementation reports and when you think you can deliver.



                                        Regards,
                                        Felix

                                        I include Marc's last mail(s)

                                        -----Original Message-----
                                        From: Marc Schroeder [mailto:marc.schroeder@dfki.de]
                                        Sent: Friday, 11 May 2012 09:16
                                        To: Burkhardt, Felix; abe.kazemzadeh@gmail.com;
                                        christian@becker-asano.de; patrick.gebhard@dfki.de;
                                        tim.llewellynn@nviso.ch; r.cowie@qub.ac.uk; begolie@ornl.gov
                                        Subject: Call for Implementations of EmotionML published

                                        Dear prospective implementors of EmotionML 1.0,

                                        The W3C published the Candidate Recommendation and the Call for
                                        Implementations of EmotionML yesterday:


                                        http://www.w3.org/News/2012#entry-9449


                                        The specification as such has not changed much since the previous
                                        version, just some clarifications here and there:


                                        http://www.w3.org/TR/2012/CR-emotionml-20120510/


                                        The most relevant bit for you guys will be the Implementation
                                        Report Plan, in which we have basically listed as verifiable
                                        assertions the various properties that an implementation of
                                        different aspects of EmotionML should guarantee:


                                        http://www.w3.org/2002/mmi/2012/emotionml-irp/


                                        A key issue here might be to clarify whether you are implementing
                                        a "producer" and/or a "consumer" of EmotionML. In the Introduction
                                        of the Implementation Report Plan, we have tried to give clear
                                        descriptions of what it means for a producer and a consumer to
                                        "pass", "fail" or "not-impl" a given assertion.




                                        I'll be happy to work with you in the next few weeks to clarify
                                        what needs to be done so that your implementation reports can help
                                        move EmotionML forward. Simply get back to me with any questions
                                        you have.



                                        I'd say if you think the question is of relevance to other
                                        prospective implementors, it should be OK to "reply all" to this
                                        email.



                                        I have just completed an implementation of an EmotionML checker in
                                        Java, which performs a full validation of input documents with
                                        respect to all assertions in the IRP. Aspects of the specification
                                        that cannot be verified through schema validation are verified
                                        through Java code.

                                        This means that if the tool accepts any given document (or
                                        document fragment), I am reasonably confident it can be treated as
                                        valid EmotionML.


                                        I have placed the code in the public domain:

                                        https://github.com/marc1s/emotionml-checker-java
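
                                        To illustrate the two-layer approach in general terms (this is
                                        not Marc's tool or its API), a minimal Python sketch using lxml,
                                        assuming a local copy of the EmotionML schema as emotionml.xsd:

                                        from lxml import etree

                                        # Layer 1: schema validation (the local schema path is an assumption).
                                        schema = etree.XMLSchema(etree.parse("emotionml.xsd"))
                                        doc = etree.parse("candidate.xml")
                                        schema.assertValid(doc)

                                        # Layer 2: a (simplified) constraint the schema cannot express:
                                        # an <emotion> using <dimension> children must have a dimension-set
                                        # declared, either locally or on the <emotionml> root.
                                        NS = "{http://www.w3.org/2009/10/emotionml}"
                                        root = doc.getroot()
                                        for emotion in root.iter(NS + "emotion"):
                                            if emotion.find(NS + "dimension") is not None:
                                                declared = emotion.get("dimension-set") or root.get("dimension-set")
                                                if not declared:
                                                    raise ValueError("<dimension> used without a dimension-set")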





                                        With kind regards / Best Regards

                                        Felix Burkhardt

                                        Deutsche Telekom AG
                                        T-Labs (Research & Innovation)
                                        Dr. Felix Burkhardt
                                        Winterfeldtstr. 21, 10781 Berlin
                                        +4930835358136 (Tel.)
                                        +4952192100512 (Fax)
                                        E-Mail: felix.burkhardt@telekom.de
                                        www.telekom.com

                                        Experience what connects.

                                        Deutsche Telekom AG
                                        Supervisory Board: Prof. Dr. Ulrich Lehner (Chairman)
                                        Board of Management: René Obermann (Chairman), Dr. Manfred Balz,
                                        Reinhard Clemens, Niek Jan van Damme, Timotheus Höttges, Claudia
                                        Nemat, Prof. Dr. Marion Schick
                                        Commercial register: Amtsgericht Bonn HRB 6794; registered office:
                                        Bonn; WEEE-Reg.-No. DE50478376

                                        Big changes start small: conserve resources and don't print every
                                        e-mail.











Received on Friday, 28 September 2012 14:30:58 UTC