12th ICMI / 7th MLMI 2010: Beijing, China
- Wen Gao, Chin-Hui Lee, Jie Yang, Xilin Chen, Maxine Eskénazi, Zhengyou Zhang: Proceedings of the 12th International Conference on Multimodal Interfaces / 7th International Workshop on Machine Learning for Multimodal Interaction, ICMI-MLMI 2010, Beijing, China, November 8-12, 2010. ACM 2010, ISBN 978-1-4503-0414-6
Invited talk
- John B. Haviland: Language and thought: talking, gesturing (and signing) about space. 1:1
Multimodal systems
- Topi Kaaresoja, Stephen A. Brewster: Feedback is... late: measuring multimodal delays in mobile device touchscreen interaction. 2:1-2:8
- Iwan de Kok, Derya Ozkan, Dirk Heylen, Louis-Philippe Morency: Learning and evaluating response prediction models using parallel listener consensus. 3:1-3:8
- Hui Zhang, Damian Fricker, Thomas G. Smith, Chen Yu: Real-time adaptive behaviors in multimodal human-avatar interactions. 4:1-4:8
- Dan Bohus, Eric Horvitz: Facilitating multiparty dialog with gaze, gesture, and speech. 5:1-5:8
Gaze and interaction
- Boris Schauerte, Gernot A. Fink: Focusing computational visual attention in multi-modal human-robot interaction. 6:1-6:8
- Bruno Lepri, Subramanian Ramanathan, Kyriaki Kalimeri, Jacopo Staiano, Fabio Pianesi, Nicu Sebe: Employing social gaze and speaking activity for automatic determination of the Extraversion trait. 7:1-7:8
- Weifeng Li, Marc-Antoine Nüssli, Patrick Jermann: Gaze quality assisted automatic recognition of social contexts in collaborative Tetris. 8:1-8:8
- Nikolaus Bee, Johannes Wagner, Elisabeth André, Thurid Vogt, Fred Charles, David Pizzi, Marc Cavazza: Discovering eye gaze behavior during human-agent conversation in an interactive storytelling application. 9:1-9:8
Demo session
- Patrick Ehlen, Michael Johnston: Speak4it: multimodal interaction for local search. 10:1-10:2
- Luis Rodríguez, Ismael García-Varea, Alejandro Revuelta-Martínez, Enrique Vidal: A multimodal interactive text generation system. 11:1-11:2
- Jonathan Kilgour, Jean Carletta, Steve Renals: The Ambient Spotlight: personal multimodal search without query. 12:1-12:2
- Chunhui Zhang, Min Wang, Richard Harper: Cloud mouse: a new way to interact with the cloud. 13:1-13:2
Invited talk
- Richard Ashley: Musical performance as multimodal communication: drummers, musical collaborators, and listeners. 14:1
Gesture and accessibility
- Ying Yin, Randall Davis: Toward natural interaction in the real world: real-time gesture recognition. 15:1-15:8
- Julie Rico, Stephen A. Brewster: Gesture and voice prototyping for early evaluations of social acceptability in multimodal interfaces. 16:1-16:9
- Yun Li, Xiang Chen, Jianxun Tian, Xu Zhang, Kongqiao Wang, Jihai Yang: Automatic recognition of sign language subwords based on portable accelerometer and EMG sensors. 17:1-17:7
- Francisco Oliveira, Heidi Cowan, Bing Fang, Francis K. H. Quek: Enabling multimodal discourse for the blind. 18:1-18:8
Multimodal interfaces
- Koji Kamei, Kazuhiko Shinozawa, Tetsushi Ikeda, Akira Utsumi, Takahiro Miyashita, Norihiro Hagita: Recommendation from robots in a real-world retail shop. 19:1-19:8
- Marco Blumendorf, Dirk Roscher, Sahin Albayrak: Dynamic user interface distribution for flexible multimodal interaction. 20:1-20:8
- Johan Kildal: 3D-press: haptic illusion of compliance when pressing on a rigid surface. 21:1-21:8
Human-centered HCI
- Abdallah El Ali, Frank Nack, Lynda Hardman: Understanding contextual factors in location-aware multimedia messaging. 22:1-22:8
- Qiong Liu, Chunyuan Liao, Lynn Wilcox, Anthony Dunnigan: Embedded media barcode links: optimally blended barcode overlay on paper for linking to associated media. 23:1-23:8
- Wenchang Xu, Xin Yang, Yuanchun Shi: Enhancing browsing experience of table and image elements in web pages. 24:1-24:8
- Ya-Xi Chen, Michael Reiter, Andreas Butz: PhotoMagnets: supporting flexible browsing and searching in photo collections. 25:1-25:8
- Peng-Wen Chen, Snehal Kumar Chennuru, Senaka Buthpitiya, Ying Zhang: A language-based approach to indexing heterogeneous multimedia lifelog. 26:1-26:8
- Kaiming Li, Lei Guo, Carlos Faraco, Dajiang Zhu, Fan Deng, Tuo Zhang, Xi Jiang, Degang Zhang, Hanbo Chen, Xintao Hu, L. Stephen Miller, Tianming Liu: Human-centered attention models for video summarization. 27:1-27:8
Invited talk
- James A. Landay: Activity-based Ubicomp: a new research basis for the future of human-computer interaction. 28:1
Speech and language
- Salil Deena, Shaobo Hou, Aphrodite Galata: Visual speech synthesis by modelling coarticulation dynamics using a non-parametric switching state-space model. 29:1-29:8
- Luis Rodríguez, Ismael García-Varea, Enrique Vidal: Multi-modal computer assisted speech transcription. 30:1-30:7
- Stefanie Tellex, Thomas Kollar, George Shaw, Nicholas Roy, Deb Roy: Grounding spatial language for video search. 31:1-31:8
- Patrick Ehlen, Michael Johnston: Location grounding in multimodal local search. 32:1-32:4
Poster session
- Kazutaka Kurihara, Toshio Mochizuki, Hiroki Oura, Mio Tsubakimoto, Toshihisa Nishimori, Jun Nakahara: Linearity and synchrony: quantitative metrics for slide-based presentation methodology. 33:1-33:4
- Myunghee Lee, Gerard J. Kim: Empathetic video experience through timely multimodal interaction. 34:1-34:4
- Toni Pakkanen, Roope Raisamo, Katri Salminen, Veikko Surakka: Haptic numbers: three haptic representation models for numbers on a touch screen phone. 35:1-35:4
- Juan Cheng, Xiang Chen, Zhiyuan Lu, Kongqiao Wang, Minfen Shen: Key-press gestures recognition and interaction based on SEMG signals. 36:1-36:4
- Kaihui Mu, Jianhua Tao, Jianfeng Che, Minghao Yang: Mood avatar: automatic text-driven head motion synthesis. 37:1-37:4
- Matthew J. Pitts, Gary E. Burnett, Mark A. Williams, Tom Wellings: Does haptic feedback change the way we view touchscreens in cars? 38:1-38:4
- Dairazalia Sanchez-Cortes, Oya Aran, Marianne Schmid Mast, Daniel Gatica-Perez: Identifying emergent leadership in small groups using nonverbal communicative cues. 39:1-39:4
- Wen Dong, Alex Pentland: Quantifying group problem solving with stochastic analysis. 40:1-40:4
- Natalie Ruiz, Qian Qian Feng, Ronnie Taib, Tara Handke, Fang Chen: Cognitive skills learning: pen input patterns in computer-based athlete training. 41:1-41:4
- Koray Tahiroglu, Teemu Tuomas Ahmaniemi: Vocal sketching: a prototype tool for designing multimodal interaction. 42:1-42:4
- Masahiro Tada, Haruo Noma, Kazumi Renge: Evidence-based automated traffic hazard zone mapping using wearable sensors. 43:1-43:4
- Yasuyuki Sumi, Masaharu Yano, Toyoaki Nishida: Analysis environment of conversational structure with nonverbal multimodal data. 44:1-44:4
- Rongrong Wang, Francis K. H. Quek, James Keng Soon Teh, Adrian David Cheok, Sep Riang Lai: Design and evaluation of a wearable remote social touch device. 45:1-45:4
- Vicent Alabau, Daniel Ortiz-Martínez, Alberto Sanchís, Francisco Casacuberta: Multimodal interactive machine translation. 46:1-46:4
- Jean-Yves Lionel Lawson, Mathieu Coterot, Cyril Carincotte, Benoît Macq: Component-based high fidelity interactive prototyping of post-WIMP interactions. 47:1-47:4
- Nicolás Serrano, Adrià Giménez, Alberto Sanchís, Alfons Juan: Active learning strategies for handwritten text transcription. 48:1-48:4
- Yuting Chen, Adeel Naveed, Robert Porzel: Behavior and preference in minimal personality: a study on embodied conversational agents. 49:1-49:4
- Joan-Isaac Biel, Daniel Gatica-Perez: Vlogcast yourself: nonverbal behavior and attention in social media. 50:1-50:4
Human-human interactions
- Michael Voit, Rainer Stiefelhagen: 3D user-perspective, voxel-based estimation of visual focus of attention in dynamic meeting scenarios. 51:1-51:8
- Sergio Escalera, Petia Radeva, Jordi Vitrià, Xavier Baró, Bogdan Raducanu: Modelling and analyzing multimodal dyadic interactions using social networks. 52:1-52:8
- Shohei Hidaka, Chen Yu: Analyzing multimodal time series as dynamical systems. 53:1-53:8
- Sebastian Gorga, Kazuhiro Otsuka: Conversation scene analysis based on dynamic Bayesian network and image-based gaze detection. 54:1-54:8
