14th ICMI 2012: Santa Monica, CA, USA
- Louis-Philippe Morency, Dan Bohus, Hamid K. Aghajan, Justine Cassell, Anton Nijholt, Julien Epps: International Conference on Multimodal Interaction, ICMI '12, Santa Monica, CA, USA, October 22-26, 2012. ACM 2012, ISBN 978-1-4503-1467-1
Keynote 1
- Charles Goodwin: The co-operative, transformative organization of human action and knowledge. 1-2
Nonverbal / behaviour
- Mary Ellen Foster, Andre Gaschler, Manuel Giuliani, Amy Isard, Maria Pateraki, Ronald P. A. Petrick: Two people walk into a bar: dynamic multi-party social interaction with a robot agent. 3-10
- Daniel Schulman, Timothy W. Bickmore: Changes in verbal and nonverbal conversational behavior in long-term interaction. 11-18
- Sunghyun Park, Jonathan Gratch, Louis-Philippe Morency: I already know your answer: using nonverbal behaviors to predict immediate outcomes in a dyadic negotiation. 19-22
- Kyriaki Kalimeri, Bruno Lepri, Oya Aran, Dinesh Babu Jayagopi, Daniel Gatica-Perez, Fabio Pianesi: Modeling dominance effects on nonverbal behaviors using granger causality. 23-26
- Yale Song, Louis-Philippe Morency, Randall Davis: Multimodal human behavior analysis: learning correlation and interaction across modalities. 27-30
Affect
- Sidney K. D'Mello, Jacqueline M. Kory: Consistent but modest: a meta-analysis on unimodal and multimodal affect detection accuracies from 30 studies. 31-38
- Ligia Maria Batrinca, Bruno Lepri, Nadia Mana, Fabio Pianesi: Multimodal recognition of personality traits in human-computer collaborative tasks. 39-46
- Zakia Hammal, Jeffrey F. Cohn: Automatic detection of pain intensity. 47-52
- Joan-Isaac Biel, Lucia Teijeiro-Mosquera, Daniel Gatica-Perez: FaceTube: predicting personality from facial expressions of emotion in online conversational video. 53-56
Demo session 1
- Pulkit Budhiraja, Sriganesh Madhvanath: The blue one to the left: enabling expressive user interaction in a multimodal interface for object selection in virtual 3d environments. 57-58
- Ramadevi Vennelakanti, Sriganesh Madhvanath, Anbumani Subramanian, Ajith Sowndararajan, Arun David, Prasenjit Dey: Pixene: creating memories while sharing photos. 59-60
- Sriganesh Madhvanath, Ramadevi Vennelakanti, Anbumani Subramanian, Ankit Shekhawat, Prasenjit Dey, Amit Ranjan: Designing multiuser multimodal gestural interactions for the living room. 61-62
- Florian Nothdurft, Frank Honold, Peter Kurzok: Using explanations for runtime dialogue adaptation. 63-64
- Seshadri Sridharan, Yun-Nung Chen, Kai-min Chang, Alexander I. Rudnicky: NeuroDialog: an EEG-enabled spoken dialog interface. 65-66
- Frank Honold, Felix Schüssel, Florian Nothdurft, Peter Kurzok: Companion technology for multimodal interaction. 67-68
Poster session
- Gabriel Skantze, Samer Al Moubayed: IrisTK: a statechart-based toolkit for multi-party face-to-face interaction. 69-76
- Yukiko Nakano, Yuki Fukuhara: Estimating conversational dominance in multiparty interaction. 77-84
- Melih Kandemir, Samuel Kaski: Learning relevance from natural eye movements in pervasive interfaces. 85-92
- Abdallah El Ali, Johan Kildal, Vuokko Lantz: Fishing or a Z?: investigating the effects of error on mimetic and alphabet device-based gesture interaction. 93-100
- Chreston A. Miller, Louis-Philippe Morency, Francis K. H. Quek: Structural and temporal inference search (STIS): pattern identification in multimodal data. 101-108
- Rui Fang, Changsong Liu, Joyce Yue Chai: Integrating word acquisition and referential grounding towards physical world interaction. 109-116
- Adam Faeth, Chris Harding: Effects of modality on virtual button motion and performance. 117-124
- Gregor Ulrich Mehlmann, Elisabeth André: Modeling multimodal integration with event logic charts. 125-132
- Christian Schönauer, Kenichiro Fukushi, Alex Olwal, Hannes Kaufmann, Ramesh Raskar: Multimodal motion guidance: techniques for adaptive and dynamic feedback. 133-140
- Bo Xiao, Panayiotis G. Georgiou, Brian R. Baucom, Shrikanth S. Narayanan: Multimodal detection of salient behaviors of approach-avoidance in dyadic interactions. 141-144
- Joseph F. Grafsgaard, Robert M. Fulton, Kristy Elizabeth Boyer, Eric N. Wiebe, James C. Lester: Multimodal analysis of the implicit affective channel in computer-mediated textual communication. 145-152
- Mihai Burzo, Daniel McDuff, Rada Mihalcea, Louis-Philippe Morency, Alexis Narvaez, Verónica Pérez-Rosas: Towards sensing the influence of visual narratives on human affect. 153-160
- Kris Cuppens, Chih-Wei Chen, Kevin Bing-Yung Wong, Anouk Van de Vel, Lieven Lagae, Berten Ceulemans, Tinne Tuytelaars, Sabine Van Huffel, Bart Vanrumste, Hamid K. Aghajan: Integrating video and accelerometer signals for nocturnal epileptic seizure detection. 161-164
- Ioannis Giannopoulos, Peter Kiefer, Martin Raubal: GeoGazemarks: providing gaze history for the orientation on small display maps. 165-172
- Anders Bouwer, Frank Nack, Abdallah El Ali: Lost in navigation: evaluating a mobile map app for a fair. 173-180
- Dale Cox, Justin Wolford, Carlos Jensen, Dedrie Beardsley: An evaluation of game controllers and tablets as controllers for interactive tv applications. 181-188
- Rada Mihalcea, Mihai Burzo: Towards multimodal deception detection - step 1: building a collection of deceptive videos. 189-192
- Soroush Vosoughi, Matthew S. Goodwin, Bill Washabaugh, Deb Roy: A portable audio/video recorder for longitudinal study of child development. 193-200
- Bernhard Andreas Brüning, Christian Schnier, Karola Pitsch, Sven Wachsmuth: Integrating PAMOCAT in the research cycle: linking motion capturing and conversation analysis. 201-208
Vision
- Tianyu Huang, Haiying Liu, Gangyi Ding: Motion retrieval based on kinetic features in large motion database. 209-216
- Alexander Schick, Daniel Morlock, Christoph Amma, Tanja Schultz, Rainer Stiefelhagen: Vision-based handwriting recognition for unrestricted text input in mid-air. 217-220
- Samira Sheikhi, Jean-Marc Odobez: Investigating the midline effect for visual focus of attention recognition. 221-224
- Jun Wei, Adrian David Cheok, Ryohei Nakatsu: Let's have dinner together: evaluate the mediated co-dining experience. 225-228
Keynote 2
- Ivan Poupyrev: Infusing the physical world into user interfaces. 229-230
Special session: child-computer interaction
- Anton Nijholt: Child-computer interaction: ICMI 2012 special session. 231-232
- Alissa Nicole Antle: Knowledge gaps in hands-on tangible interaction research. 233-240
- Janet C. Read: Evaluating artefacts with children: age and technology effects in the reporting of expected and experienced fun. 241-248
- Elisabeth M. A. G. van Dijk, Andreas Lingnau, Hub Kockelkorn: Measuring enjoyment of an interactive museum experience. 249-256
- Paulo Blikstein: Bifocal modeling: a study on the learning outcomes of comparing physical and computational models linked in real time. 257-264
- Yasmin B. Kafai, Deborah A. Fields: Connecting play: understanding multimodal participation in virtual worlds. 265-272
Gestures
- Radu-Daniel Vatavu, Lisa Anthony, Jacob O. Wobbrock: Gestures as point clouds: a $P recognizer for user interface prototypes. 273-280
- Magdalena Lis: Influencing gestural representation of eventualities: insights from ontology. 281-288
- Laurent Son Nguyen, Jean-Marc Odobez, Daniel Gatica-Perez: Using self-context for multimodal detection of head nods in face-to-face interactions. 289-292
Demo session 2
- Samer Al Moubayed, Gabriel Skantze, Jonas Beskow, Kalin Stefanov, Joakim Gustafson: Multimodal multiparty social interaction with the furhat head. 293-294
- Helmut Lang, Florian Nothdurft: An avatar-based help system for a grid computing web portal. 295-296
- Guillaume Chanel, Kalogianni Konstantina, Thierry Pun: GamEMO: how physiological signals show your emotions and enhance your game experience. 297-298
- Dragos Datcu, Thomas Swart, Stephan G. Lukosch, Zoltán Rusák: Multimodal collaboration for crime scene investigation in mediated reality. 299-300
- Bernhard Andreas Brüning, Christian Schnier: PAMOCAT: linking motion capturing and conversation analysis. 301-302
- Patrick Ehlen, Michael Johnston: Multimodal dialogue in mobile local search. 303-304
Doctoral spotlight session
- Mohammad Q. Azhar: Toward an argumentation-based dialogue framework for human-robot collaboration. 305-308
- Crystal Chao: Timing multimodal turn-taking for human-robot cooperation. 309-312
- Mohammed E. Hoque: My automated conversation helper (MACH): helping people improve social skills. 313-316
- Gijs Huisman: A touch of affect: mediated social touch and affect. 317-320
- Jyoti Joshi: Depression analysis: a multimodal approach. 321-324
- Katrin Wolf: Design space for finger gestures with hand-held tablets. 325-328
- Christopher McMurrough: Multi-modal interfaces for control of assistive robotic devices. 329-332
- Ross Mead: Space, speech, and gesture in human-robot interaction. 333-336
- Maria F. O'Connor: Machine analysis and recognition of social contexts. 337-340
- Hae Won Park: Task-learning policies for collaborative task solving in human-robot interaction. 341-344
- Daniele Ruscio: Simulating real danger?: validation of driving simulator test and psychological factors in brake response time to danger. 345-348
- Raghavi Sakpal: Virtual patients to teach cultural competency. 349-352
- Marcelo Worsley: Multimodal learning analytics: enabling the future of learning through multimodal data analysis and interfaces. 353-356
- Ying Yin: A hierarchical approach to continuous gesture analysis for natural multi-modal interaction. 357-360
Grand challenge overview
- Björn W. Schuller, Michel François Valstar, Roddy Cowie, Maja Pantic: AVEC 2012: the continuous audio/visual emotion challenge - an introduction. 361-362
- Khe Chai Sim, Shengdong Zhao, Kai Yu, Hank Liao: ICMI'12 grand challenge: haptic voice recognition. 363-370
- Jordi Sanchez-Riera, Xavier Alameda-Pineda, Radu Horaud: Audio-visual robot command recognition: D-META'12 grand challenge. 371-378
- Mannes Poel, Femke Nijboer, Egon L. van den Broek, Stephen H. Fairclough, Anton Nijholt: Brain computer interfaces as intelligent sensors for enhancing human-computer interaction. 379-382
Keynote 3
- Roberta L. Klatzky: Using psychophysical techniques to design and evaluate multimodal interfaces: psychophysics and interface design. 383-384
Touch / taste
- Hendrik Richter, Doris Hausen, Sven Osterwald, Andreas Butz: Reproducing materials of virtual elements on touchscreens using supplemental thermal feedback. 385-392
- Johan Kildal, Graham A. Wilson: Feeling it: the roles of stiffness, deformation range and feedback in the control of deformable ui. 393-400
- Yasmine N. El-Glaly, Francis K. H. Quek, Tonya L. Smith-Jackson, Gurjot Dhillon: Audible rendering of text documents controlled by multi-touch interaction. 401-408
- Nimesha Ranasinghe, Adrian David Cheok, Ryohei Nakatsu: Taste/IP: the sensation of taste for digital communication. 409-416
Multimodal interaction
- Oriol Vinyals, Dan Bohus, Rich Caruana: Learning speaker, addressee and overlap detection models from multimodal streams. 417-424
- Shogo Okada, Yusaku Sato, Yuki Kamiya, Keiji Yamada, Katsumi Nitta: Analysis of the correlation between the regularity of work behavior and stress indices based on longitudinal behavioral data. 425-432
- Dinesh Babu Jayagopi, Dairazalia Sanchez-Cortes, Kazuhiro Otsuka, Junji Yamato, Daniel Gatica-Perez: Linking speaking and looking behavior patterns with group composition, perception, and performance. 433-440
- Dominik Ertl, Hermann Kaindl: Semi-automatic generation of multimodal user interfaces for dialogue-based interactive systems. 441-444
- Julie R. Williamson, Marilyn Rose McGee-Lennon, Stephen A. Brewster: Designing multimodal reminders for the home: pairing content with presentation. 445-448
Challenge 1: 2nd international audio/visual emotion challenge and workshop - AVEC 2012
- Björn W. Schuller, Michel F. Valstar, Florian Eyben, Roddy Cowie, Maja Pantic: AVEC 2012: the continuous audio/visual emotion challenge. 449-456
- Albert C. Cruz, Bir Bhanu, Ninad Thakoor: Facial emotion recognition with expression energy. 457-464
- Michael Glodek, Martin Schels, Günther Palm, Friedhelm Schwenker: Multiple classifier combination using reject options and markov fusion networks. 465-472
- Laurens van der Maaten: Audio-visual emotion challenge 2012: a simple approach. 473-476
- Derya Ozkan, Stefan Scherer, Louis-Philippe Morency: Step-wise emotion recognition using concatenated-HMM. 477-484
- Arman Savran, Houwei Cao, Miraj Shah, Ani Nenkova, Ragini Verma: Combining video, audio and lexical indicators of affect in spontaneous conversation via particle filtering. 485-492
- Catherine Soladié, Hanan Salam, Catherine Pelachaud, Nicolas Stoiber, Renaud Séguier: A multimodal fuzzy inference system using a continuous facial expression representation for emotion detection. 493-500
- Jérémie Nicolle, Vincent Rapp, Kevin Bailly, Lionel Prevost, Mohamed Chetouani: Robust continuous prediction of human emotions using multiscale dynamic cues. 501-508
- Pouria Fewzee, Fakhri Karray: Elastic net for paralinguistic speech recognition. 509-516
- Florian Eyben, Björn W. Schuller, Gerhard Rigoll: Improving generalisation and robustness of acoustic affect recognition. 517-522
- Wenjing Han, Haifeng Li, Florian Eyben, Lin Ma, Jiayin Sun, Björn W. Schuller: Preserving actual dynamic trend of emotion in dimensional speech emotion recognition. 523-528
- Serdar Baltaci, Didem Gökçay: Negative sentiment in scenarios elicit pupil dilation response: an auditory study. 529-532
Challenge 2: haptic voice recognition grand challenge
- Seungwhan Moon, Khe Chai Sim: Design and implementation of the note-taking style haptic voice recognition for mobile devices. 533-538
- Hainan Xu, Yuchen Fan, Kai Yu: Development of the 2012 SJTU HVR system. 539-544
- Guangsen Wang, Bo Li, Shilin Liu, Xuancong Wang, Xiaoxuan Wang, Khe Chai Sim: Improving mandarin predictive text input by augmenting pinyin initials with speech and tonal information. 545-550
- Maryam Azh, Shengdong Zhao: LUI: lip in multimodal mobile GUI interaction. 551-554
- Khe Chai Sim: Speak-as-you-swipe (SAYS): a multimodal interface combining speech and gesture keyboard synchronously for continuous mobile text entry. 555-560
Challenge 3: BCI grand challenge: brain-computer interfaces as intelligent sensors for enhancing human-computer interaction
- Alan T. Pope, Chad L. Stephens: Interpersonal biocybernetics: connecting through social psychophysiology. 561-566
- Olexiy Kyrgyzov, Antoine Souloumiac: Adaptive EEG artifact rejection for cognitive games. 567-570
- Stephen H. Fairclough, Kiel Mark Gilleade: Construction of the biocybernetic loop: a case study. 571-578
- Virginia R. de Sa: An interactive control strategy is more robust to non-optimal classification boundaries. 579-586
- Danny Plass-Oude Bos, Hayrettin Gürkök, Boris Reuderink, Mannes Poel: Improving BCI performance after classification. 587-594
- Matthew Weiden, Deepak Khosla, Matthew Keegan: Electroencephalographic detection of visual saliency of motion towards a practical brain-computer interface for video analysis. 601-606
Workshop overview
- Ross Mead, Maha Salem: Workshop on speech and gesture production in virtually and physically embodied conversational agents. 607-608
- Stefan Scherer, Marcelo Worsley, Louis-Philippe Morency: 1st international workshop on multimodal learning analytics: extended abstract. 609-610
- Yukiko I. Nakano, Kristiina Jokinen, Hung-Hsuan Huang: 4th workshop on eye gaze in intelligent human machine interaction: eye gaze and multimodality. 611-612
- Antonio Camurri, Donald Glowinski, Maurizio Mancini, Giovanna Varni, Gualtiero Volpe: The 3rd international workshop on social behaviour in music: SBM2012. 613-614
- Anton Nijholt, Leonardo Giusti, Andrea Minuto, Patrizia Marti: Smart material interfaces: a material step to the future. 615-616