15th AVSP 2019: Melbourne, Australia
- Chris Davis:
15th International Conference on Auditory-Visual Speech Processing, AVSP 2019, Melbourne, Australia, August 10-11, 2019. ISCA 2019
Modality, stress and atypical processing
- Marisa Cruz, Marc Swerts, Sónia Frota:
Do visual cues to interrogativity vary between language modalities? Evidence from spoken Portuguese and Portuguese Sign Language. 1-5
- Lieke van Maastricht, Marieke Hoetjes, Ellen van Drie:
Do gestures during training facilitate L2 lexical stress acquisition by Dutch learners of Spanish? 6-10
- V. Dogu Erdener, Sefik Evren Erdener, Arzu Yordamlı:
Auditory-visual speech perception in bipolar disorder: behavioural data and physiological predictions. 11-15
Emotion 1
- Krishna D. N, Sai Sumith Reddy:
Multi-Modal Speech Emotion Recognition Using Speech Embeddings and Audio Features. 16-20
- Darshana Priyasad, Tharindu Fernando, Simon Denman, Sridha Sridharan, Clinton Fookes:
Learning Salient Features for Multimodal Emotion Recognition with Recurrent Neural Networks and Attention Based Fusion. 21-26
- Hisako W. Yamamoto, Misako Kawahara, Akihiro Tanaka:
The Development of Eye Gaze Patterns during Audiovisual Perception of Affective and Phonetic Information. 27-32
Emotion 2
- Chris Davis, Jeesun Kim:
Auditory and Visual Emotion Recognition: Investigating why some portrayals are better recognized than others. 33-37
- Jimmy Debladis, Kuzma Strelnikov, Shally Marc, Maïthé Tauber, Pascal Barone:
Unbalanced visuo-auditory interactions for gender and emotions processing. 38-42
Children/infants
- Sok Hui Jessica Tan, Denis Burnham:
Auditory-Visual Speech Segmentation in Infants. 43-46
- Rebecca Holt, Laurence Bruggeman, Katherine Demuth:
Audiovisual benefits for speech processing speed among children with hearing loss. 47-52
- Sok Hui Jessica Tan, Michael J. Crosse, Giovanni M. Di Liberto, Denis Burnham:
Four-Year-Olds' Cortical Tracking to Continuous Auditory-Visual Speech. 53-56
Visual speech processing
- Tomomi Mizuochi-Endo, Michiru Makuuchi:
Neural processing of degraded speech using speaker's mouth movement. 57-62
- April Shi Min Ching, Jeesun Kim, Chris Davis:
Auditory-Visual Integration During the Attentional Blink. 63-68
- Denis Burnham, Weicong Li, Christopher Carignan, Virginie Attina, Benjawan Kasisopa, Eric Vatikiotis-Bateson:
Visual Correlates of Thai Lexical Tone Production: Motion of the Head, Eyebrows and Larynx? 69-72
Artificial agents/smart devices
- Girija Chetty, Matthew White:
Embodied Conversational Agents and Interactive Virtual Humans for Training Simulators. 73-77
- Angelika Hönemann, Casey Bennett, Petra Wagner, Selma Sabanovic:
Audio-visual synthesized attitudes presented by the German speaking robot SMiRAE. 78-83
- Takeshi Saitoh, Michiko Kubokawa:
LiP25w: Word-level Lip Reading Web Application for Smart Device. 84-88