


14th HCI 2011: Orlando, Florida, USA
- Julie A. Jacko (ed.): Human-Computer Interaction. Interaction Techniques and Environments - 14th International Conference, HCI International 2011, Orlando, FL, USA, July 9-14, 2011, Proceedings, Part II. Lecture Notes in Computer Science 6762, Springer 2011, ISBN 978-3-642-21604-6
Touch-Based and Haptic Interaction
- Katsuhito Akahane, Takeo Hamada, Takehiko Yamaguchi, Makoto Sato: Development of a High Definition Haptic Rendering for Stability and Fidelity. 3-12
- Onur Asan, Mark Omernick, Dain Peer, Enid N. H. Montague: Designing a Better Morning: A Study on Large Scale Touch Interface Design. 13-22
- Andreas Haslbeck, Severina Popova, Michael Krause, Katrina Pecot, Jürgen Mayer, Klaus Bengler: Experimental Evaluations of Touch Interaction Considering Automotive Requirements. 23-32
- Rachelle Kristof Hippler, Dale S. Klopfer, Laura M. Leventhal, G. Michael Poor, Brandi A. Klein, Samuel D. Jaffee: More than Speed? An Empirical Study of Touchscreens and Body Awareness on an Object Manipulation Task. 33-42
- Chih-Pin Hsiao, Brian R. Johnson: TiMBA - Tangible User Interface for Model Building and Analysis. 43-52
- Heng Jiang, Teng-Wen Chang, Cha-Lin Liu: Musical Skin: A Dynamic Interface for Musical Performance. 53-61
- Steven L. Johnson, Yueqing Li, Chang Soo Nam, Takehiko Yamaguchi: Analyzing User Behavior within a Haptic System. 62-70
- Markus Jokisch, Thomas Bartoschek, Angela Schwering: Usability Testing of the Interaction of Novices with a Multi-touch Table in Semi Public Space. 71-80
- Gimpei Kimioka, Buntarou Shizuki, Jiro Tanaka: Niboshi for Slate Devices: A Japanese Input Method Using Multi-touch for Slate Devices. 81-89
- Karsten Nebe, Tobias Müller, Florian Klompmaker: An Investigation on Requirements for Co-located Group-Work Using Multitouch-, Pen-Based- and Tangible-Interaction. 90-99
- Karsten Nebe, Florian Klompmaker, Helge Jung, Holger Fischer: Exploiting New Interaction Techniques for Disaster Control Management Using Multitouch-, Tangible- and Pen-Based-Interaction. 100-109
- Eckard Riedenklau, Thomas Hermann, Helge J. Ritter: Saving and Restoring Mechanisms for Tangible User Interfaces through Tangible Active Objects. 110-118
- Seungjae Shin, Wanjoo Park, Hyunchul Cho, Se Hyung Park, Laehyun Kim: Needle Insertion Simulator with Haptic Feedback. 119-124
- Roland Spies, Andreas Blattner, Christian Lange, Martin Wohlfarter, Klaus Bengler, Werner Hamberger: Measurement of Driver's Distraction for an Early Prove of Concepts in Automotive Industry at the Example of the Development of a Haptic Touchpad. 125-132
- Hiroshi Takeda, Hidetoshi Miyao, Minoru Maruyama, David K. Asano: A Tabletop-Based Real-World-Oriented Interface. 133-139
- Sehat Ullah, Xianging Liu, Samir Otmane, Paul Richard, Malik Mallem: What You Feel Is What I Do: A Study of Dynamic Haptic Interaction in Distributed Collaborative Virtual Environment. 140-147
- Andy Wu, Jayraj Jog, Sam Mendenhall, Ali Mazalek: A Framework Interweaving Tangible Objects, Surfaces and Spaces. 148-157
- Takehiko Yamaguchi, Damien Chamaret, Paul Richard: The Effect of Haptic Cues on Working Memory in 3D Menu Selection. 158-166
Gaze and Gesture-Based Interaction
- Eimad E. A. Abusham, Housam K. Bashir: Face Recognition Using Local Graph Structure (LGS). 169-175
- Kiyohiko Abe, Shoichi Ohi, Minoru Ohyama: Eye-gaze Detection by Image Analysis under Natural Light. 176-184
- Leonardo Angelini, Maurizio Caon, Stefano Carrino, Omar Abou Khaled, Elena Mugellini: Multi-user Pointing and Gesture Interaction for Large Screen Using Infrared Emitters and Accelerometers. 185-193
- Ryosuke Aoki, Yutaka Karatsu, Masayuki Ihara, Atsuhiko Maeda, Minoru Kobayashi, Shingo Kagami: Gesture Identification Based on Zone Entry and Axis Crossing. 194-203
- Florin Barbuceanu, Csaba Antonya, Mihai Duguleana, Zoltán Rusák: Attentive User Interface for Interaction within Virtual Reality Environments Based on Gaze Analysis. 204-213
- Mohamed-Ikbel Boulabiar, Thomas Burger, Franck Poirier, Gilles Coppin: A Low-Cost Natural User Interaction Based on a Camera Hand-Gestures Recognizer. 214-221
- Francesco Carrino, Julien Tscherrig, Elena Mugellini, Omar Abou Khaled, Rolf Ingold: Head-Computer Interface: A Multimodal Approach to Navigate through Real and Virtual Worlds. 222-230
- Seung-Hwan Choi, Ji-Hyeong Han, Jong-Hwan Kim: 3D-Position Estimation for Hand Gesture Interface Using a Single Camera. 231-237
- Shaowei Chu, Jiro Tanaka: Hand Gesture for Taking Self Portrait. 238-247
- Chin-Shyurng Fahn, Keng-Yu Chu: Hidden-Markov-Model-Based Hand Gesture Recognition Techniques Used for a Human-Robot Interaction System. 248-258
- Masashi Inoue, Toshio Irino, Nobuhiro Furuyama, Ryoko Hanada, Takako Ichinomiya, Hiroyasu Massaki: Manual and Accelerometer Analysis of Head Nodding Patterns in Goal-oriented Dialogues. 259-267
- Jun-Sung Lee, Chi-Min Oh, Chil-Woo Lee: Facial Expression Recognition Using AAMICPF. 268-274
- Jui-Feng Lin, Colin G. Drury: Verification of Two Models of Ballistic Movements. 275-284
- Wei Lun Ng, Ng Chee Kyun, Nor Kamariah Noordin, Borhanuddin Mohd Ali: Gesture Based Automating Household Appliances. 285-293
- Chi-Min Oh, Md Zahidul Islam, Jun-Sung Lee, Chil-Woo Lee, In-So Kweon: Upper Body Gesture Recognition for Human-Robot Interaction. 294-303
- Gie-seo Park, Jong-gil Ahn, Gerard Jounghyun Kim: Gaze-Directed Hands-Free Interface for Mobile Interaction. 304-313
- Yuzo Takahashi, Shoko Koshi: Eye-Movement-Based Instantaneous Cognition Model for Non-verbal Smooth Closed Figures. 314-322
Voice, Natural Language and Dialogue
- David Byer, Colin Depradine: VOSS - A Voice Operated Suite for the Barbadian Vernacular. 325-330
- Darius Dadgari, Wolfgang Stuerzlinger: New Techniques for Merging Text Versions. 331-340
- Iris K. Howley, Carolyn Penstein Rosé: Modeling the Rhetoric of Human-Computer Interaction. 341-350
- Itaru Kuramoto, Atsushi Yasuda, Mitsuru Minakuchi, Yoshihiro Tsujino: Recommendation System Based on Interaction with Multiple Agents for Users with Vague Intention. 351-357
- Florian Metze, Alan W. Black, Tim Polzehl: A Review of Personality in Voice-Based Man Machine Interaction. 358-367
- Mai Miyabe, Takashi Yoshino: Can Indicating Translation Accuracy Encourage People to Rectify Inaccurate Translations? 368-377
- Shun Ozaki, Takuo Matsunobe, Takashi Yoshino, Aguri Shigeno: Design of a Face-to-Face Multilingual Communication System for a Handheld Device in the Medical Field. 378-386
- Sven Schmeier, Matthias Rebel, Renlong Ai: Computer Assistance in Bilingual Task-Oriented Human-Human Dialogues. 387-395
- Xian Zhang, Rico Andrich, Dietmar F. Rösner: Developing and Exploiting a Multilingual Grammar for Human-Computer Interaction. 396-405
Novel Interaction Techniques and Devices
- Sheng-Han Chen, Teng-Wen Chang, Sheng-Cheng Shih: Dancing Skin: An Interactive Device for Motion. 409-416
- Günter Edlinger, Clemens Holzner, Christoph Guger: A Hybrid Brain-Computer Interface for Smart Home Control. 417-426
- Tor-Morten Grønli, Jarle Hansen, Gheorghita Ghinea: Integrated Context-Aware and Cloud-Based Adaptive Home Screens for Android Phones. 427-435
- Shinichi Ike, Saya Yokoyama, Yuya Yamanishi, Naohisa Matsuuchi, Kazunori Shimamura, Takumi Yamaguchi, Haruya Shiba: Evaluation of User Support of a Hemispherical Sub-display with GUI Pointing Functions. 436-445
- Srinivasan Jayaraman, Venkatesh Balasubramanian: Uni-model Human System Interface Using sEMG. 446-453
- Alexey Karpov, Andrey Ronzhin, Irina S. Kipyatkova: An Assistive Bi-modal User Interface Integrating Multi-channel Speech Recognition and Computer Vision. 454-463
- Dong-Kyu Kim, Yong-Wan Roh, Kwang-Seok Hong: A Method of Multiple Odors Detection and Recognition. 464-473
- Jacquelyn Ford Morie, Eric Chance, J. Galen Buckwalter: Report on a Preliminary Study Using Breath Control and a Virtual Jogging Scenario as Biofeedback for Resilience Training. 474-480
- Shrishail Patki, Bernard Grundlehner, Toru Nakada, Julien Penders: Low Power Wireless EEG Headset for BCI Applications. 481-490
- Sheng Kai Tang, Wen Chieh Tseng, Wei Wen Luo, Kuo Chung Chiu, Sheng Ta Lin, Yen Ping Liu: Virtual Mouse: A Low Cost Proximity-Based Gestural Pointing Device. 491-499
- Yun Zhou, Bertrand David, René Chalon: Innovative User Interfaces for Wearable Computers in Real Augmented Environment. 500-509
Avatars and Embodied Interaction
- Yugo Hayashi, Victor V. Kryssanov, Kazuhisa Miwa, Hitoshi Ogawa: Influence of Prior Knowledge and Embodiment on Human-Agent Interaction. 513-522
- Myounghoon Jeon, Infantdani A. Rayan: The Effect of Physical Embodiment of an Animal Robot on Affective Prosody Recognition. 523-532
- Wi-Suk Kwon, Veena Chattaraman, Soo In Shim, Hanan Alnizami, Juan E. Gilbert: Older User-Computer Interaction on the Internet: How Conversational Agents Can Help. 533-536
- Helmut Lang, Christian Mosch, Bastian Boegel, David Michel Benoit, Wolfgang Minker: An Avatar-Based Help System for Web-Portals. 537-546
- Szu-Chia Lu, Nicole Blackwell, Ellen Yi-Luen Do: mediRobbi: An Interactive Companion for Pediatric Patients during Hospital Visit. 547-556
- Yuichi Murata, Kazutaka Kurihara, Toshio Mochizuki, Buntarou Shizuki, Jiro Tanaka: Design of Shadows on the OHP Metaphor-Based Presentation Interface Which Visualizes a Presenter's Actions. 557-564
- Toshiya Naka, Toru Ishida: Web-Based Nonverbal Communication Interface Using 3D Agents with Natural Gestures. 565-574
- Pim Nauts, Willem A. van Doesburg, Emiel Krahmer, Anita H. M. Cremers: Taking Turns in Flying with a Virtual Wingman. 575-584
- Kentaro Okamoto, Michiya Yamamoto, Tomio Watanabe: A Configuration Method of Visual Media by Using Characters of Audiences for Embodied Sport Cheering. 585-592
- G. Michael Poor, Robert J. K. Jacob: Introducing Animatronics to HCI: Extending Reality-Based Interaction. 593-602
- Yuya Takao, Michiya Yamamoto, Tomio Watanabe: Development of Embodied Visual Effects Which Expand the Presentation Motion of Emphasis and Indication. 603-612
- Kaori Tanaka, Tatsunori Matsui, Kazuaki Kojima: Experimental Study on Appropriate Reality of Agents as a Multi-modal Interface for Human-Computer Interaction. 613-622
