ICMI 2021: Montréal, QC, Canada
- Zakia Hammal, Carlos Busso, Catherine Pelachaud, Sharon L. Oviatt, Albert Ali Salah, Guoying Zhao:
  ICMI '21: International Conference on Multimodal Interaction, Montréal, QC, Canada, October 18-22, 2021. ACM 2021, ISBN 978-1-4503-8481-0
Keynote Talks
- Karon E. MacLean:
  Incorporating Haptics into the Theatre of Multimodal Experience design: and the Ecosystem this Requires. 1-2
- Susanne P. Lajoie:
  Theory Driven Approaches to the Design of Multimodal Assessments of Learning, Emotion, and Self-Regulation in Medicine. 3
- Elisabeth André:
  Socially Interactive Artificial Intelligence: Past, Present and Future. 4
- Russ Salakhutdinov:
  From Differentiable Reasoning to Self-supervised Embodied Active Learning. 5
Session 1: New Analytic and Machine Learning Techniques
- Wei Han, Hui Chen, Alexander F. Gelbukh, Amir Zadeh, Louis-Philippe Morency, Soujanya Poria:
  Bi-Bimodal Modality Fusion for Correlation-Controlled Multimodal Sentiment Analysis. 6-15
- Lucien Maman, Laurence Likforman-Sulem, Mohamed Chetouani, Giovanna Varni:
  Exploiting the Interplay between Social and Task Dimensions of Cohesion to Predict its Dynamics Leveraging Social Sciences. 16-24
- Lauren Klein, Victor Ardulov, Alma Gharib, Barbara Thompson, Pat Levitt, Maja J. Mataric:
  Dynamic Mode Decomposition with Control as a Model of Multimodal Behavioral Coordination. 25-33
- Muhammad Umer Anwaar, Rayyan Ahmad Khan, Zhihui Pan, Martin Kleinsteuber:
  A Contrastive Learning Approach for Compositional Zero-Shot Learning. 34-42
- Zhongwei Xie, Ling Liu, Lin Li, Luo Zhong:
  Efficient Deep Feature Calibration for Cross-Modal Joint Embedding Learning. 43-51
Session 2: Support for Health, Mental Health and Disability
- Hashini Senaratne, Levin Kuhlmann, Kirsten Ellis, Glenn Melvin, Sharon L. Oviatt:
  A Multimodal Dataset and Evaluation for Feature Estimators of Temporal Phases of Anxiety. 52-61
- Masaki Matsuo, Takahiro Miura, Ken-ichiro Yabu, Atsushi Katagiri, Masatsugu Sakajiri, Junji Onishi, Takeshi Kurata, Tohru Ifukube:
  Inclusive Action Game Presenting Real-time Multimodal Presentations for Sighted and Blind Persons. 62-70
- Georgios Pantazopoulos, Jeremy Bruyere, Malvina Nikandrou, Thibaud Boissier, Supun Hemanthage, Binha Kumar Sachish, Vidyul Shah, Christian Dondrup, Oliver Lemon:
  ViCA: Combining visual, Social, and Task-oriented conversational AI in a Healthcare Setting. 71-79
- Dhruv Jain, Sasa Junuzovic, Eyal Ofek, Mike Sinclair, John R. Porter, Chris Yoon, Swetha Machanavajhala, Meredith Ringel Morris:
  Towards Sound Accessibility in Virtual Reality. 80-91
- Elisa Ramil Brick, Vanesa Caballero Alonso, Conor O'Brien, Sheron Tong, Emilie Tavernier, Amit Parekh, Angus Addlesee, Oliver Lemon:
  Am I Allergic to This? Assisting Sight Impaired People in the Kitchen. 92-102
- Samantha Speer, Emily Hamner, Michael Tasota, Lauren Zito, Sarah K. Byrne-Houser:
  MindfulNest: Strengthening Emotion Regulation with Tangible User Interfaces. 103-111
Session 3: Conversation, Dialogue Systems and Language Analytics
- Dimosthenis Kontogiorgos, Minh Tran, Joakim Gustafson, Mohammad Soleymani:
  A Systematic Cross-Corpus Analysis of Human Reactions to Robot Conversational Failures. 112-120
- Bernd Dudzik, Simon Columbus, Tiffany Matej Hrkalovic, Daniel Balliet, Hayley Hung:
  Recognizing Perceived Interdependence in Face-to-Face Negotiations through Multimodal Analysis of Nonverbal Behavior. 121-130
- Matthias Kraus, Nicolas Wagner, Wolfgang Minker:
  Modelling and Predicting Trust for Developing Proactive Dialogue Strategies in Mixed-Initiative Interaction. 131-140
- Yuki Hirano, Shogo Okada, Kazunori Komatani:
  Recognizing Social Signals with Weakly Supervised Multitask Learning for Multimodal Dialogue Systems. 141-149
- Felix Gervits, Gordon Briggs, Antonio Roque, Genki A. Kadomatsu, Dean Thurston, Matthias Scheutz, Matthew Marge:
  Decision-Theoretic Question Generation for Situated Reference Resolution: An Empirical Study and Computational Model. 150-158
Session 4: Speech, Gesture and Haptics
- Riku Arakawa, Zendai Kashino, Shinnosuke Takamichi, Adrien Verhulst, Masahiko Inami:
  Digital Speech Makeup: Voice Conversion Based Altered Auditory Feedback for Transforming Self-Representation. 159-167
- Shkurta Gashi, Aaqib Saeed, Alessandra Vicini, Elena Di Lascio, Silvia Santini:
  Hierarchical Classification and Transfer Learning to Recognize Head Gestures and Facial Expressions Using Earbuds. 168-176
- Siyang Wang, Simon Alexanderson, Joakim Gustafson, Jonas Beskow, Gustav Eje Henter, Éva Székely:
  Integrated Speech and Gesture Synthesis. 177-185
- Angela Chan, Francis K. H. Quek, Takashi Yamauchi, Jinsil Hwaryoung Seo:
  Co-Verbal Touch: Enriching Video Telecommunications with Remote Touch Technology. 186-194
- Gloria Dhandapani, Jamie Iona Ferguson, Euan Freeman:
  HapticLock: Eyes-Free Authentication for Mobile Devices. 195-202
- Kern Qi, David Borland, Emily Brunsen, James Minogue, Tabitha C. Peck:
  The Impact of Prior Knowledge on the Effectiveness of Haptic and Visual Modalities for Teaching Forces. 203-211
Session 5: Behavioral Analytics and Applications
- Junseok Park, Kwanyoung Park, Hyunseok Oh, Ganghun Lee, Min Su Lee, Youngki Lee, Byoung-Tak Zhang:
  Toddler-Guidance Learning: Impacts of Critical Period on Multimodal AI Agents. 212-220
- Huda Alsofyani, Alessandro Vinciarelli:
  Attachment Recognition in School Age Children Based on Automatic Analysis of Facial Expressions and Nonverbal Vocal Behaviour. 221-228
- Aishat Aloba, Lisa Anthony:
  Characterizing Children's Motion Qualities: Implications for the Design of Motion Applications for Children. 229-238
- Jian Huang, Zehang Lin, Zhenguo Yang, Wenyin Liu:
  Temporal Graph Convolutional Network for Multimodal Sentiment Analysis. 239-247
- Sydney Thompson, Abhijit Gupta, Anjali W. Gupta, Austin Chen, Marynel Vázquez:
  Conversational Group Detection with Graph Neural Networks. 248-252
- Shuvendu Roy, Ali Etemad:
  Self-supervised Contrastive Learning of Multi-view Facial Expressions. 253-257
Session 6: Multimodal Ethics, Interfaces and Applications
- Halim Acosta, Nathan L. Henderson, Jonathan P. Rowe, Wookhee Min, James Minogue, James C. Lester:
  What's Fair is Fair: Detecting and Mitigating Encoded Bias in Multimodal Models of Museum Visitor Attention. 258-267
- Brandon M. Booth, Louis Hickman, Shree Krishna Subburaj, Louis Tay, Sang Eun Woo, Sidney K. D'Mello:
  Bias and Fairness in Multimodal Machine Learning: A Case Study of Automated Video Interviews. 268-277
- Sharon L. Oviatt:
  Technology as Infrastructure for Dehumanization: Three Hundred Million People with the Same Face. 278-287
- Abdullah Aman Tutul, Ehsanul Haque Nirjhar, Theodora Chaspari:
  Investigating Trust in Human-Machine Learning Collaboration: A Pilot Study on Estimating Public Anxiety from Speech. 288-296
- Laura Pruszko, Yann Laurillau, Benoît Piranda, Julien Bourgeois, Céline Coutrix:
  Impact of the Size of Modules on Target Acquisition and Pursuit for Future Modular Shape-changing Physical User Interfaces. 297-307
- Frederik Wiehr, Anke Hirsch, Lukas Schmitz, Nina Knieriemen, Antonio Krüger, Alisa Kovtunova, Stefan Borgwardt, Ernie Chang, Vera Demberg, Marcel Steinmetz, Jörg Hoffmann:
  Why Do I Have to Take Over Control? Evaluating Safe Handovers with Advance Notice and Explanations in HAD. 308-317
Posters
- Amr Gomaa, Guillermo Reyes, Michael Feld:
  ML-PersRef: A Machine Learning-based Personalized Multimodal Fusion Approach for Referencing Outside Objects From a Moving Vehicle. 318-327
- Chathurika Jayangani Palliya Guruge, Sharon L. Oviatt, Pari Delir Haghighi, Elizabeth Pritchard:
  Advances in Multimodal Behavioral Analytics for Early Dementia Diagnosis: A Review. 328-340
- Anna Penzkofer, Philipp Müller, Felix Bühler, Sven Mayer, Andreas Bulling:
  ConAn: A Usable Tool for Multimodal Conversation Analysis. 341-351
- Shumpei Otsuchi, Yoko Ishii, Momoko Nakatani, Kazuhiro Otsuka:
  Prediction of Interlocutors' Subjective Impressions Based on Functional Head-Movement Features in Group Meetings. 352-360
- Kazuki Takeda, Kazuhiro Otsuka:
  Inflation-Deflation Networks for Recognizing Head-Movement Functions in Face-to-Face Conversations. 361-369
- Takashi Mori, Kazuhiro Otsuka:
  Deep Transfer Learning for Recognizing Functional Interactions via Head Movements in Multiparty Conversations. 370-378
- Jamie Iona Ferguson, Euan Freeman, Stephen A. Brewster:
  Investigating the Effect of Polarity in Auditory and Vibrotactile Displays Under Cognitive Load. 379-386
- Shaun Alexander Macdonald, Euan Freeman, Stephen A. Brewster, Frank E. Pollick:
  User Preferences for Calming Affective Haptic Stimuli in Social Settings. 387-396
- Jicheng Li, Anjana Bhat, Roghayeh Barmaki:
  Improving the Movement Synchrony Estimation with Action Quality Assessment in Children Play Therapy. 397-406
- Beibin Li, Nicholas Nuechterlein, Erin Barney, Claire E. Foster, Minah Kim, Monique Mahony, Adham Atyabi, Li Feng, Quan Wang, Pamela Ventola, Linda G. Shapiro, Frederick Shic:
  Learning Oculomotor Behaviors from Scanpath. 407-415
- Kapotaksha Das, Salem Sharak, Kais Riani, Mohamed Abouelenien, Mihai Burzo, Michalis Papakostas:
  Multimodal Detection of Drivers Drowsiness and Distraction. 416-424
- Weichen Wang, Jialing Wu, Subigya Kumar Nepal, Alex daSilva, Elin Hedlund, Eilis Murphy, Courtney Rogers, Jeremy F. Huckins:
  On the Transition of Social Interaction from In-Person to Online: Predicting Changes in Social Media Usage of College Students during the COVID-19 Pandemic based on Pre-COVID-19 On-Campus Colocation. 425-434
- Surbhi Madan, Monika Gahalawat, Tanaya Guha, Ramanathan Subramanian:
  Head Matters: Explainable Human-centered Trait Prediction from Head Motion Dynamics. 435-443
- Zhang Guo, Kangsoo Kim, Anjana Bhat, Roghayeh Barmaki:
  An Automated Mutual Gaze Detection Framework for Social Behavior Assessment in Therapy for Children with Autism. 444-452
- Diego Monteiro, Hai-Ning Liang, Xian Wang, Wenge Xu, Huawei Tu:
  Design and Development of a Low-cost Device for Weight and Center of Gravity Simulation in Virtual Reality. 453-460
- Farkhandah Komal Aziz, Chris Creed, Maite Frutos-Pascual, Ian Williams:
  Inclusive Voice Interaction Techniques for Creative Object Positioning. 461-469
- May Jorella S. Lazaro, Sung Ho Kim, Jaeyong Lee, Jaemin Chun, Myung Hwan Yun:
  Interaction Modalities for Notification Signals in Augmented Reality. 470-477
- Carlos Bermejo Fernandez, Lik Hang Lee, Petteri Nurmi, Pan Hui:
  PARA: Privacy Management and Control in Emerging IoT Ecosystems using Augmented Reality. 478-486
- Faye McCabe, Christopher Baber:
  Feature Perception in Broadband Sonar Analysis - Using the Repertory Grid to Elicit Interface Designs to Support Human-Autonomy Teaming. 487-493
- Pieter Wolfert, Jeffrey M. Girard, Taras Kucherenko, Tony Belpaeme:
  To Rate or Not To Rate: Investigating Evaluation Methods for Generated Co-Speech Gestures. 494-502
- Ahmed Hussen Abdelaziz, Anushree Prasanna Kumar, Chloe Seivwright, Gabriele Fanelli, Justin Binder, Yannis Stylianou, Sachin Kajareker:
  Audiovisual Speech Synthesis using Tacotron2. 503-511
- Jaewook Lee, Sebastian S. Rodriguez, Raahul Natarrajan, Jacqueline Chen, Harsh Deep, Alex Kirlik:
  What's This? A Voice and Touch Multimodal Approach for Ambiguity Resolution in Voice Assistants. 512-520
- Jianfeng Wu, Sijie Mai, Haifeng Hu:
  Graph Capsule Aggregation for Unaligned Multimodal Sequences. 521-529
- Xinmeng Chen, Xuchen Gong, Ming Cheng, Qi Deng, Ming Li:
  Cross-modal Assisted Training for Abnormal Event Recognition in Elevators. 530-538
- Filip Bendevski, Jumana Ibrahim, Tina Krulec, Theodore Waters, Nizar Habash, Hanan Salam, Himadri Mukherjee, Christin Camia:
  Towards Automatic Narrative Coherence Prediction. 539-547
- Qinpei Zhao, Xiongbaixue Yan, Yinjia Zhang, Weixiong Rao, Jiangfeng Li, Chao Mi, Jessie Chen:
  TaxoVec: Taxonomy Based Representation for Web User Profiling. 548-556
- Esaú Villatoro-Tello, Gabriela Ramírez-de-la-Rosa, Daniel Gática-Pérez, Mathew Magimai-Doss, Héctor Jiménez-Salazar:
  Approximating the Mental Lexicon from Clinical Interviews as a Support Tool for Depression Detection. 557-566
- Matthew Rueben, Mohammad Syed, Emily London, Mark Camarena, Eunsook Shin, Yulun Zhang, Timothy S. Wang, Thomas R. Groechel, Rhianna Lee, Maja J. Mataric:
  Long-Term, in-the-Wild Study of Feedback about Speech Intelligibility for K-12 Students Attending Class via a Telepresence Robot. 567-576
- Andy Kong, Karan Ahuja, Mayank Goel, Chris Harrison:
  EyeMU Interactions: Gaze + IMU Gestures on Mobile Devices. 577-585
- Wenqing Wei, Sixia Li, Shogo Okada, Kazunori Komatani:
  Multimodal User Satisfaction Recognition for Non-task Oriented Dialogue Systems. 586-594
- Jayaprakash Akula, Abhishek Sharma, Rishabh Dabral, Preethi Jyothi, Ganesh Ramakrishnan:
  Cross Lingual Video and Text Retrieval: A New Benchmark Dataset and Algorithm. 595-603
- Carl-Philipp Hellmuth, Miroslav Bachinski, Jörg Müller:
  Interaction Techniques for 3D-positioning Objects in Mobile Augmented Reality. 604-612
- Öykü Zeynep Bayramoglu, Engin Erzin, Tevfik Metin Sezgin, Yücel Yemez:
  Engagement Rewarded Actor-Critic with Conservative Q-Learning for Speech-Driven Laughter Backchannel Generation. 613-618
- Hao Wu, Gareth James Francis Jones, François Pitié:
  Knowing Where and What to Write in Automated Live Video Comments: A Unified Multi-Task Approach. 619-627
- Marissa A. Thompson, Lynette Tan, Cecilia Soto, Jaitra Dixit, Mounia Ziat:
  Tomato Dice: A Multimodal Device to Encourage Breaks During Work. 628-635
- Chiara Mazzocconi, Vladislav Maraev, Vidya Somashekarappa, Christine Howes:
  Looking for Laughs: Gaze Interaction with Laughter Pragmatics and Coordination. 636-644
- Sarala Padi, Seyed Omid Sadjadi, Ram D. Sriram, Dinesh Manocha:
  Improved Speech Emotion Recognition using Transfer Learning and Spectrogram Augmentation. 645-652
- Nils Heitmann, Thomas Rosner, Samarjit Chakraborty:
  Mass-deployable Smartphone-based Objective Hearing Screening with Otoacoustic Emissions. 653-661
- Arshad Nasser, Kexin Zheng, Kening Zhu:
  ThermEarhook: Investigating Spatial Thermal Haptic Feedback on the Auricular Skin Area. 662-672
- Özge Alaçam, Ganeshan Malhotra, Eugen Ruppert, Chris Biemann:
  Gaze-based Multimodal Meaning Recovery for Noisy / Complex Environments. 673-681
- Lisai Zhang, Qingcai Chen, Joanna Siebert, Buzhou Tang:
  Semi-supervised Visual Feature Integration for Language Models through Sentence Visualization. 682-686
- Ya Zhao, Cheng Ma, Zunlei Feng, Mingli Song:
  Speech Guided Disentangled Visual Representation Learning for Lip Reading. 687-691
- Euan Freeman:
  Enhancing Ultrasound Haptics with Parametric Audio Effects. 692-696
- Euan Freeman, Graham A. Wilson:
  Perception of Ultrasound Haptic Focal Point Motion. 697-701
- Yante Li, Guoying Zhao:
  Intra- and Inter-Contrastive Learning for Micro-expression Action Unit Detection. 702-706
- Patrik Jonell, Youngwoo Yoon, Pieter Wolfert, Taras Kucherenko, Gustav Eje Henter:
  HEMVIP: Human Evaluation of Multiple Videos in Parallel. 707-711
- Ehsanul Haque Nirjhar, Amir H. Behzadan, Theodora Chaspari:
  Knowledge- and Data-Driven Models of Multimodal Trajectories of Public Speaking Anxiety in Real and Virtual Settings. 712-716
- Sanket Kumar Thakur, Cigdem Beyan, Pietro Morerio, Alessio Del Bue:
  Predicting Gaze from Egocentric Social Interaction Videos and IMU Data. 717-722
- Tanvi Deshpande, Nitya Mani:
  An Interpretable Approach to Hateful Meme Detection. 723-727
- Torsten Wörtwein, Lisa B. Sheeber, Nicholas B. Allen, Jeffrey F. Cohn, Louis-Philippe Morency:
  Human-Guided Modality Informativeness for Affective States. 728-734
- Georgiana Cristina Dobre, Marco Gillies, Patrick Falk, Jamie A. Ward, Antonia F. de C. Hamilton, Xueni Pan:
  Direct Gaze Triggers Higher Frequency of Gaze Change: An Automatic Analysis of Dyads in Unstructured Conversation. 735-739
- Katsutoshi Masai, Akemi Kobayashi, Toshitaka Kimura:
  Online Study Reveals the Multimodal Effects of Discrete Auditory Cues in Moving Target Estimation Task. 740-744
- Thuong-Khanh Tran, Quang Nhat Vo, Guoying Zhao:
  DynGeoNet: Fusion Network for Micro-expression Spotting. 745-749
- Namkyoo Kang, SeungJoon Kwon, JongChan Lee, Sang-Woo Seo:
  Earthquake Response Drill Simulator based on a 3-DOF Motion base in Augmented Reality. 750-752
- Benedikt Hosp, Myat Su Yin, Peter Haddawy, Ratthaphum Watcharopas, Paphon Sa-Ngasoongsong, Enkelejda Kasneci:
  States of Confusion: Eye and Head Tracking Reveal Surgeons' Confusion during Arthroscopic Surgery. 753-757
- Daisuke Kamisaka, Yuichi Ishikawa:
  Personality Prediction with Cross-Modality Feature Projection. 758-762
- Kosmas Kritsis, Aggelos Gkiokas, Aggelos Pikrakis, Vassilis Katsouros:
  Attention-based Multimodal Feature Fusion for Dance Motion Generation. 763-767
- Yashish M. Siriwardena, Carol Y. Espy-Wilson, Chris Kitchen, Deanna L. Kelly:
  Multimodal Approach for Assessing Neuromotor Coordination in Schizophrenia Using Convolutional Neural Networks. 768-772
- Dushyant Singh Chauhan, Gopendra Vikram Singh, Navonil Majumder, Amir Zadeh, Asif Ekbal, Pushpak Bhattacharyya, Louis-Philippe Morency, Soujanya Poria:
  M2H2: A Multimodal Multiparty Hindi Dataset For Humor Recognition in Conversations. 773-777
Blue Sky Papers
- Alex Pentland:
  Optimized Human-AI Decision Making: A Personal Perspective. 778-780
- Philippe A. Palanque, David Navarre:
  Dependability and Safety: Two Clouds in the Blue Sky of Multimodal Interaction. 781-787
- Björn W. Schuller, Tuomas Virtanen, Maria Riveiro, Georgios Rizos, Jing Han, Annamaria Mesaros, Konstantinos Drossos:
  Towards Sonification in Multimodal and User-friendly Explainable Artificial Intelligence. 788-792
Doctoral Consortium Papers
- Laduona Dai:
  Photogrammetry-based VR Interactive Pedagogical Agent for K12 Education. 793-796
- Gopika Ajaykumar:
  Assisted End-User Robot Programming. 797-801
- Christopher Acornley:
  Using Generative Adversarial Networks to Create Graphical User Interfaces for Video Games. 802-806
- Selina Meyer:
  Natural Language Stage of Change Modelling for "Motivationally-driven" Weight Loss Support. 807-811
- Patrick O'Toole:
  Understanding Personalised Auditory-Visual Associations in Multi-Modal Interactions. 812-816
- Yuanchao Li:
  Semi-Supervised Learning for Multimodal Speech and Emotion Recognition. 817-821
- Jieyeon Woo:
  Development of an Interactive Human/Agent Loop using Multimodal Recurrent Neural Networks. 822-826
- Liu Yang:
  What If I Interrupt You. 827-831
- Marianna Di Gregorio:
  Accessible Applications - Study and Design of User Interfaces to Support Users with Disabilities. 832-834
Demo and Exhibit Papers
- Fumio Nihei, Yukiko I. Nakano:
  Web-ECA: A Web-based ECA Platform. 835-836
- Ferdinand Fuhrmann, Anna Maria Weber, Stefan Ladstätter, Stefan Dietrich, Johannes Rella:
  Multimodal Interaction in the Production Line - An OPC UA-based Framework for Injection Molding Machinery. 837-838
- Antoine Weill-Duflos, Nicholas Ong, Felix Desourdy, Benjamin Delbos, Steve Ding, Colin R. Gallacher:
  Haply 2diy: An Accessible Haptic Plateform Suitable for Remote Learning. 839-840
- Nancie Gunson, Daniel Hernández García, Jose L. Part, Yanchao Yu, Weronika Sieinska, Christian Dondrup, Oliver Lemon:
  Combining Visual and Social Dialogue for Human-Robot Interaction. 841-842
- Kai-min Kevin Chang, Yueran Yuan:
  Introducing an Integrated VR Sensor Suite and Cloud Platform. 843-845
- Chee Wee Leong, Xianyang Chen, Vinay Basheerabad, Chong Min Lee, Patrick Houghton:
  NLP-guided Video Thin-slicing for Automated Scoring of Non-Cognitive, Behavioral Performance Tasks. 846-847
- Javier Mikel Olaso, Alain Vázquez, Leila Ben Letaifa, Mikel de Velasco, Aymen Mtibaa, Mohamed Amine Hmani, Dijana Petrovska-Delacrétaz, Gérard Chollet, César Montenegro, Asier López-Zorrilla, Raquel Justo, Roberto Santana, Jofre Tenorio-Laranga, Eduardo González-Fraile, Begoña Fernández-Ruanova, Gennaro Cordasco, Anna Esposito, Kristin Beck Gjellesvik, Anna Torp Johansen, Maria Stylianou Korsnes, Colin Pickard, Cornelius Glackin, Gary Cahalane, Pau Buch-Cardona, Cristina Palmero, Sergio Escalera, Olga Gordeeva, Olivier Deroo, Anaïs Fernández, Daria Kyslitska, José Antonio Lozano, María Inés Torres, Stephan Schlögl:
  The EMPATHIC Virtual Coach: a demo. 848-851
Workshop Summaries
- Zakia Hammal, Nadia Berthouze, Steffen Walter:
  Automated Assessment of Pain. 852
- Hiroki Tanaka, Satoshi Nakamura, Jean-Claude Martin, Catherine Pelachaud:
  2nd Workshop on Social Affective Multimodal Interaction for Health (SAMIH). 853-854
- Joseph A. Allen, Hayley Hung, Joann Keyton, Gabriel Murray, Catharine Oertel, Giovanna Varni:
  Insights on Group and Team Dynamics. 855-856
- Béatrice Biancardi, Eleonora Ceccaldi, Chloé Clavel, Mathieu Chollet, Tanvi Dinkar:
  CATS2021: International Workshop on Corpora And Tools for Social skills annotation. 857-859
- Dennis Küster, Felix Putze, David St-Onge, Pascal E. Fortin, Nerea Urrestilla, Tanja Schultz:
  3rd Workshop on Modeling Socio-Emotional and Cognitive Processes from Multimodal Data in the Wild. 860-861
- Saeid Safavi, Heysem Kaya, Roy S. Hessels, Maryam Najafian, Sandra Hanekamp:
  2nd ICMI Workshop on Bridging Social Sciences and AI for Understanding Child Behaviour. 862-863
- Dongyan Huang, Björn W. Schuller, Jianhua Tao, Lei Xie, Jie Yang:
  ASMMC21: The 6th International Workshop on Affective Social Multimedia Computing. 864-867
- Michal Muszynski, Edgar Roman-Rangel, Leimin Tian, Theodoros Kostoulas, Theodora Chaspari, Panos Amelidis:
  Workshop on Multimodal Affect and Aesthetic Experience. 868-869
- Cigdem Turan, Dorothea Koert, Karl David Neergaard, Rudolf Lioutikov:
  Empowering Interactive Robots by Learning Through Multimodal Feedback Channel. 870-871
- Taras Kucherenko, Patrik Jonell, Youngwoo Yoon, Pieter Wolfert, Zerrin Yumak, Gustav Eje Henter:
  GENEA Workshop 2021: The 2nd Workshop on Generation and Evaluation of Non-verbal Behaviour for Embodied Agents. 872-873
- Oya Çeliktutan, Alexandra Livia Georgescu, Nicholas Cummins:
  Socially Informed AI for Healthcare: Understanding and Generating Multimodal Nonverbal Cues. 874-876
