>... [[http://enterface08.limsi.fr/|eNTERFACE '08]] Workshop

Organizer: Christian Jacquemin, LIMSI-CNRS and University Paris 11

Capture and machine learning of physiological signals for emotion recognition: applications to performing arts and multimedia scenography

August 4-29, 2008, LIMSI-CNRS, Orsay, France

Abstract:

The purpose of this workshop is to bring together scientists, technicians, and artists from the performing arts (dance, theater, performance…) for the study and recognition of emotions from captured physiological signals. It will focus on the use of perceptual feedback to enhance users' perception of their own emotions and their ability to control them. Previous editions of eNTERFACE have already offered workshops on the connection between physiological signals and the expression of emotions. Compared with those editions, this workshop will give more importance to the automatic recognition of emotions through signal processing (regular physiological signals such as blood pressure, respiration, heart rate, and skin conductivity, or more structured signals such as voice or facial expression) and to the study of the perceptual loop through reinforcing or contrary feedback. The workshop will offer a playful experience combining artistic and scientific activities, with genuine cross-fertilization between creative performances and state-of-the-art techniques and algorithms for signal analysis, prediction, and classification.
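
To make the signal-processing side concrete, here is a minimal sketch of such a recognition pipeline: simple per-recording statistics are extracted from each physiological signal and fed to an off-the-shelf classifier. The synthetic data, feature choices, and scikit-learn classifier are illustrative assumptions, not the workshop's prescribed method.

```python
# Minimal sketch of an emotion-recognition pipeline from physiological
# signals. Features and classifier are illustrative assumptions only.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def extract_features(heart_rate, skin_conductivity, respiration):
    """Reduce each raw 1-D signal to simple per-recording statistics."""
    return np.array([
        heart_rate.mean(), heart_rate.std(),
        skin_conductivity.mean(), skin_conductivity.std(),
        respiration.mean(), respiration.std(),
    ])

# Stand-in data: 20 synthetic recordings of 3 signals with binary labels
# (e.g. 0 = calm, 1 = aroused). Real data would come from the sensors.
rng = np.random.default_rng(0)
recordings = [(rng.normal(70 + 10 * (i % 2), 5, 1000),   # heart rate (bpm)
               rng.normal(2 + (i % 2), 0.5, 1000),       # skin conductivity (uS)
               rng.normal(15, 2, 1000))                  # respiration (breaths/min)
              for i in range(20)]
labels = np.array([i % 2 for i in range(20)])

X = np.array([extract_features(*r) for r in recordings])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, labels)
print(clf.predict(X[:2]))  # classify the first two recordings
```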

Project objective:

The workshop will provide the participants with background knowledge, basic skills, and hands-on practice with physiological sensors for live performance.

It will be organized around the following activities:

Background information:

Jonghwa Kim, “Bimodal Emotion Recognition using Speech and Physiological Changes”, in M. Grimm and K. Kroschel (eds.), Robust Speech Recognition and Understanding, pp. 265-280, I-Tech Education and Publishing, Vienna, Austria, 2007. ISBN 978-3-902613-08-0.

Jonghwa Kim and Elisabeth André, “Multi-channel Biosignal Analysis for Automatic Emotion Recognition”, International Conference on Bio-inspired Systems and Signal Processing, Madeira, Portugal, January 2008.

Detailed technical description:

Resources needed:

Project management: to be refined with the participants

>... Work Plan and Implementation Schedule

For the first week, the schedule of the workshop could be the following:

and would progressively evolve toward the following target schedule for the fourth week:

Benefits of the research:

A live presentation of the outcome of the workshop by performers will be scheduled in October at Le Cube, a cultural center near Paris. Other artistic presentations can be considered in other countries, as well as submissions to media arts festivals or exhibitions.

This workshop will follow a series of four workshops organized between September 2007 and May 2008 by LIMSI-CNRS in collaboration with other researchers, and with artists, technicians, and programmers in cultural centers. Thanks to this experience, LIMSI-CNRS will be able to provide sensors and a set of networked PCs with the appropriate software, and to transfer the skills acquired during these four workshops to the participants of the eNTERFACE workshop.

Intentions of participation:

doodle

>... Participants

Organization: Christian Jacquemin, LIMSI-CNRS (graphics, affective computing)

External Scientific participants

Artistic participants

LIMSI Scientific participants

>... Additional details on external participants' skills & expectations

- Capturing physiological signals: Recording signals like blood pressure, respiration, heart rate, skin conductivity, voice, and facial expressions. I didn't quite understand what the source will be for these signals. Are the participating artists going to help on this matter? They can give pictures / sound recordings for different emotions, but what about the other signals? Another thing is, at the last eNTERFACE, my team was working on creating a database of 3D models and pictures of 80-100 different people's faces showing different emotions. Maybe using this database can also be helpful.

- Recognition of emotions: This is where the signal processing and recognition algorithms take part. The captured signals are to be transformed into useful information. For different kinds of signals, different methods can be used, or multiple signals can be interpreted together. I worked with face images and 3D models, for identification and synthetic gesture generation. I guess 3D data is not in our scope, so I'd gladly work with images. Additionally, in my job, I am working on processing radar signals. The domain is quite different; nevertheless, I am familiar with 1D signal processing and I can easily study and learn more prior to the workshop. It could be useful to assign particular algorithms to the participants, to get prepared and maybe to make small evaluations of their usability.

- Using the derived information: This part is a bit vague in my head. The project proposal says that the information will be used to enhance users' perception of their own emotions and their ability to control them. Your e-mail says the signals will also be used for image and sound synthesis through expressive animated faces, spatialized multimedia environments, and fluid 3D shapes. So, is the enhancement of perception going to be done via these methods, or is this just a different branch of the project? If so, can we propose different ways to use the analyzed signals?

About the skills I expect to gain from the workshop… For the whole workshop, including all other projects, I am looking forward to expanding my vision in human-computer interaction and machine learning. At the last workshop, I found that working and exchanging ideas with people who come together from different places with different perspectives is pretty fruitful. As for the project itself, I am expecting to discover ways of combining technology, arts, and interactive entertainment. In addition, I am willing to learn more effective algorithms for signal analysis and recognition through such good hands-on experience.

Personally, I'm more interested in the sound aspect, as it relates to my upcoming project, which I'll be working on from September: imposing biosignal data on sound (changing the sound quality by imposing the behavior of the biosignal).

Title of ideal research project: emotional sound tuning system based on captured biosignals
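
As one possible reading of "imposing biosignal behavior on sound", the following sketch lets a synthetic biosignal envelope modulate the loudness of a sine tone and writes the result to a WAV file. The signal model, smoothing window, and 440 Hz carrier are assumptions for illustration, not a committed design.

```python
# Sketch: a (synthetic) biosignal envelope modulates the loudness of a
# sine tone. All names and parameters here are illustrative assumptions.
import numpy as np
import wave

sr = 44100                      # audio sample rate (Hz)
duration = 5.0                  # seconds
t = np.linspace(0, duration, int(sr * duration), endpoint=False)

# Stand-in biosignal: slow oscillation plus noise, e.g. a respiration trace.
bio = 0.5 + 0.5 * np.sin(2 * np.pi * 0.3 * t) + 0.05 * np.random.randn(t.size)
# Smooth it so the audio gain does not jitter (simple moving average, ~50 ms).
kernel = np.ones(2205) / 2205
envelope = np.convolve(np.clip(bio, 0, 1), kernel, mode="same")

tone = np.sin(2 * np.pi * 440 * t)          # carrier: 440 Hz sine
audio = (0.8 * envelope * tone * 32767).astype(np.int16)

with wave.open("biosignal_tone.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(sr)
    f.writeframes(audio.tobytes())
```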

As an artist, I would like to use the workshop as an opportunity to see the process (different approaches) of captured physiological data being translated into an artistic language. Therefore, my focus will be on finding out the technical possibilities of capturing and processing biosignals and learning how to operate basic things myself. Besides, I would like to get acquainted with signal processing and to see different ways of embedding physiological data into the performing arts.

I believe I can offer artistic skills in finalizing the experimental designs. As an artist, I'm not familiar with programming; however, I have some experience dealing with hardware.

CV

web

The “Prunes électriques” (Electric Plums) company was founded in 1999 by Fabienne Gotusso, at the time when she staged and directed her own text Juliette. Right from the beginning, the company's work has combined singing, speaking, dancing, and body movements with the use of musical instruments, toys, drawings, photos, video, light, sound, new technologies… The objective is to come up with new ways of writing that favour interactions between the various materials: to try and find potential extensions, breaks, and tensions, and to put these at the centre of the physical experience on stage. The use of sensors on the body during directed improvisation helps to reveal a hidden process. The company questions the relevance of such experiments as material for study as well as for stage and dramatic writing. http://fabiennegot.wordpress.com/

I have a signal processing background and have worked mainly on speech signal processing.

Although I am open to working on any particular area under the topic, the following projects would be interesting to me:

  1. Statistical methods for multi-modal emotion classification/detection (a minimal fusion sketch follows this list).
  2. Effect of feedback (sonification or visual) of physiological signals on the production of emotion in art.
  3. Automatic content generation using physiological signals (typically music or speech production parameters).
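
For the first item, a minimal feature-level fusion sketch might look like the following: per-modality feature vectors are simply concatenated before classification. The modality names, feature dimensions, random stand-in data, and classifier are assumptions, not a committed design.

```python
# Sketch of feature-level fusion for multi-modal emotion classification:
# per-modality feature vectors are concatenated into one vector per sample.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 40
physio_feats = rng.normal(size=(n, 6))   # e.g. heart rate / conductivity stats
voice_feats = rng.normal(size=(n, 12))   # e.g. pitch and energy statistics
face_feats = rng.normal(size=(n, 8))     # e.g. facial action unit intensities
y = rng.integers(0, 2, size=n)           # stand-in binary emotion labels

X = np.hstack([physio_feats, voice_feats, face_feats])  # feature-level fusion
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(X[:3]))
```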

I expect to gain the following skills:

  1. Acquisition/processing systems for different physiological signals.
  2. Design of live 3D sound/graphics.
  3. Understanding of emotion cognition and perception.

I have some experience in signal processing and statistical machine learning, especially for speech signals.

CV

My research is oriented more toward signal processing. The topic of my PhD thesis is emotion recognition from visual sources.

I would like to see the combination of signals from visual, acoustic, and other sources, and how it will influence the result of emotion recognition.

I expect to get familiar with other types of signals that can be used for emotion recognition, and to learn some basics of graphics rendering from biometric signals.

I am familiar mostly with image processing, but I also have some background in signal processing.

CV

I'm an undergraduate computer engineering student at Koc University. I'm interested in human-computer interaction and signal processing.

I have taken courses in signals and systems, interactive computer graphics, and introduction to artificial intelligence. Unfortunately, I have no real experience in signal processing; however, I'm willing to learn and open to working on any particular area.

I expect to gain experience in signal processing and recognition. As a prospective graduate student, I would also like to meet and get to know people already working in interesting research areas.

CV

I joined eNTERFACE'07 and had a close look at the bio-group and their performance, since I am a performer focusing on sound and visuals…

Title of ideal research project: audiovisual representation of human inner organs through bio-signal mapping

Skills expected to gain during the workshop: signal processing, the mechanism of emotion recognition

Skills that can be shared: Max/MSP, Processing, physical computing (Arduino, AVR programming), building hardware and instruments

CV


I am a performer currently working on finding relations/connections between different performance media: sonic, visual, movement; the relation is based on the circulation of data derived from physical (e.g. voice) signals, captured in real time.

Title of ideal research project: researching the possibilities (methods/procedures) of mapping (translating) data derived from bio-signals to performance media, and its application to live performance. Or: deriving substantial data from bio-signals and mapping/translating it in real time, as control data, into different performance media.

Skills expected to gain:


I have experience in audio signal processing and analysis (e.g. mapping spectrum to MIDI), also in live situations as a performer. I work mainly with Max/MSP. I am familiar with RTC-lab, FTM, and Arduino; I also have some experience with SuperCollider, Mac Csound, the AC Toolbox (an algorithmic composition environment), the voltage control concept, and algorithmic composition.
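
As an illustration of the "mapping to MIDI" idea mentioned above, the following sketch rescales a smoothed stand-in biosignal onto MIDI control-change values (0-127) using the mido library, the kind of control data one can route into Max/MSP. The random-walk signal, smoothing factor, and CC number are assumptions for illustration.

```python
# Sketch: translate a biosignal stream into MIDI control-change values.
import numpy as np
import mido  # pip install mido (plus python-rtmidi to open real ports)

rng = np.random.default_rng(2)
bio = np.cumsum(rng.normal(0, 0.1, 200))        # stand-in biosignal stream

# Exponential smoothing, then rescale the signal's range onto 0..127.
smooth = np.zeros_like(bio)
for i in range(1, bio.size):
    smooth[i] = 0.9 * smooth[i - 1] + 0.1 * bio[i]
cc_values = np.interp(smooth, (smooth.min(), smooth.max()), (0, 127))

messages = [mido.Message("control_change", control=1, value=int(v))
            for v in cc_values]
# In performance one would send these to a port, e.g.:
#   with mido.open_output() as port:
#       for m in messages: port.send(m)
print(messages[:3])
```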

CV

I've been at work for some time on a fiction for video, also for computer, set in the 1930s and involving telepathy… here's a link: Lost Tribes. It is a story told by telepathy, by live narrators using machines [live inside the movie, live outside it in performance versions of the stories]. I'm interested in latter-day tools, stories with them, and stories about them.

Things I think are interesting:

  1. sound/voice spatialization; possibly attaching sound fields to motion capture files
  2. voice processing… how about based on meaning? [speech recognition, face recognition]
  3. live control of the above two, based on a clear rhetoric [i.e. clear presentation, separate elements easy to distinguish]… and lots of other things…

On the technical end, I know video, real-time 3D [some of my skills are a bit dated, but useful nonetheless], and Arduino/physical computing. I am familiar with Pd, Processing, etc. I like DIY.

>... Additional details on LIMSI participants' skills & expectations

My current research is based on the relation between sound and graphics rendering for designing immersive environments. I am very interested in real-time audio-graphics rendering, movement capture, and bio-signal capture. Biosignal capture could be useful not only for artistic projects but also for experiments: for me, it is a way to measure the effects of certain kinds of rendering on the user's body.

As I have no knowledge of biosignal capture to date, I need to learn the basics of this method.

I can present the tools and the method I use for modeling interactive 3D audio-graphic scenes. This package (named SceneModeler) is a way to link two platforms: VirtualChoreographer for control and scene modeling, and Max/MSP for sound rendering.
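
For participants unfamiliar with linking platforms this way, here is a generic sketch of pushing control data to Max/MSP over OSC with the python-osc library (received on the Max side with [udpreceive] and parsed there). The address, port, and message layout are assumptions; this is not SceneModeler's actual protocol.

```python
# Sketch: send biosignal control data to another platform over OSC,
# e.g. to Max/MSP listening on UDP port 7400. Address and port are
# illustrative assumptions.
from pythonosc.udp_client import SimpleUDPClient  # pip install python-osc

client = SimpleUDPClient("127.0.0.1", 7400)

# Send a few sample values; in practice these would come from the sensors.
for heart_rate in (68.0, 71.5, 74.2):
    client.send_message("/biosignal/heart_rate", heart_rate)
```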