Search dblp
Full-text search
- case-insensitive prefix search: default
  e.g., sig matches "SIGIR" as well as "signal"
- exact word search: append dollar sign ($) to word
  e.g., graph$ matches "graph", but not "graphics"
- boolean and: separate words by space
  e.g., codd model
- boolean or: connect words by pipe symbol (|)
  e.g., graph|network
Update May 7, 2017: Please note that we had to disable the phrase search operator (.) and the boolean not operator (-) due to technical problems. For the time being, phrase search queries will yield regular prefix search results, and search terms preceded by a minus will be interpreted as regular (positive) search terms.
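The matching rules above (prefix by default, `$` for exact word, space for AND, `|` for OR) can be sketched as a small matcher. This is an illustrative reimplementation of the documented query semantics, not dblp's actual search engine; the function names are our own.

```python
import re

def term_matches(term: str, word: str) -> bool:
    """Match one query term against one document word, case-insensitively.

    A trailing '$' requests an exact word match; otherwise the term is
    treated as a prefix, per the search-syntax rules above.
    """
    word = word.lower()
    if term.endswith("$"):
        return word == term[:-1].lower()
    return word.startswith(term.lower())

def query_matches(query: str, text: str) -> bool:
    """Space-separated terms are AND-ed; '|' inside a term means OR."""
    words = re.findall(r"\w+", text)
    for conjunct in query.split():
        alternatives = conjunct.split("|")
        if not any(term_matches(alt, w) for alt in alternatives for w in words):
            return False
    return True
```

For example, `query_matches("sig", "SIGIR and signal")` succeeds via prefix matching, while `query_matches("graph$", "computer graphics")` fails because `$` demands the exact word "graph".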
Author search results
no matches
Venue search results
no matches
Refine list
- refine by author: no options (temporarily not available)
- refine by venue: no options (temporarily not available)
- refine by type: no options (temporarily not available)
- refine by access: no options (temporarily not available)
- refine by year: no options (temporarily not available)
Publication search results
found 334 matches
- 2024
- Zhe Chen, Dongjie Yun, Jia Zhou:
  Handwriting on Touch Screens through Fingers and Stylus: Investigating the Optimal Size of the Input Box for Younger and Older Adults. Int. J. Hum. Comput. Interact. 40(14): 3597-3606 (2024)
- Tao Jin, Jiamin He, Wenrui Wang, Zhengxin Wu, Haoran Gu:
  How Mobile Touch Devices Foster Cognitive Offloading in the Elderly: The Effects of Input and Feedback. Int. J. Hum. Comput. Interact. 40(7): 1658-1668 (2024)
- Chentao Li, Jinyang Yu, Ke He, Jianjiang Feng, Jie Zhou:
  SwivelTouch: Boosting Touchscreen Input with 3D Finger Rotation Gesture. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 8(2): 53:1-53:30 (2024)
- Takahiro Kawabe, Yusuke Ujitoko:
  Softness Perception of Visual Objects Controlled by Touchless Inputs: The Role of Effective Distance of Hand Movements. IEEE Trans. Vis. Comput. Graph. 30(7): 4154-4169 (2024)
- Juyoung Lee, Minju Baeck, Hui-Shyong Yeo, Thad Starner, Woontack Woo:
  GestureMark: Shortcut Input Technique using Smartwatch Touch Gestures for XR Glasses. AHs 2024: 63-71
- Paige S. DeVries, Nina Tran, Keith Delk, Melanie Miga, Richard Carlisle Taulbee, Pranav Pidathala, Abraham Glasser, Raja S. Kushalnagar, Christian Vogler:
  Sign Language-Based versus Touch-Based Input for Deaf Users with Interactive Personal Assistants in Simulated Kitchen Environments. CHI Extended Abstracts 2024: 290:1-290:9
- Camille Dupré, Caroline Appert, Stéphanie Rey, Houssem Saidi, Emmanuel Pietriga:
  TriPad: Touch Input in AR on Ordinary Surfaces with Hand Tracking Only. CHI 2024: 754:1-754:18
- Mihail Terenti, Maria Casado-Palacios, Monica Gori, Radu-Daniel Vatavu:
  What Is the User Experience of Eyes-Free Touch Input with Vibrotactile Feedback Decoupled from the Touchscreen? CHI Extended Abstracts 2024: 372:1-372:8
- Mihail Terenti, Matthieu Rupin, Baptiste Reynal, Laurent Grisoni, Radu-Daniel Vatavu:
  The Eclectic User Experience of Combined On-Screen and On-Wrist Vibrotactile Feedback in Touchscreen Input. CHI Extended Abstracts 2024: 315:1-315:7
- Nina Tran, Paige S. DeVries, Matthew Seita, Raja S. Kushalnagar, Abraham Glasser, Christian Vogler:
  Assessment of Sign Language-Based versus Touch-Based Input for Deaf Users Interacting with Intelligent Personal Assistants. CHI 2024: 53:1-53:15
- Jean Vanderdonckt, Radu-Daniel Vatavu, Arthur Sluÿters:
  Engineering Touchscreen Input for 3-Way Displays: Taxonomy, Datasets, and Classification. EICS (Companion) 2024: 57-65
- Nina Tran, Paige S. DeVries, Matthew Seita, Raja S. Kushalnagar, Abraham Glasser, Christian Vogler:
  Assessment of Sign Language-Based versus Touch-Based Input for Deaf Users Interacting with Intelligent Personal Assistants. CoRR abs/2404.14605 (2024)
- Paige S. DeVries, Nina Tran, Keith Delk, Melanie Miga, Richard Carlisle Taulbee, Pranav Pidathala, Abraham Glasser, Raja S. Kushalnagar, Christian Vogler:
  Sign Language-Based versus Touch-Based Input for Deaf Users with Interactive Personal Assistants in Simulated Kitchen Environments. CoRR abs/2404.14610 (2024)
- 2023
- Alana Grant, Vilma Kankaanpää, Ilyena Hirskyj-Douglas:
  Hum-ble Beginnings: Developing Touch- and Proximity-Input-Based Interfaces for Zoo-Housed Giraffes' Audio Enrichment. Proc. ACM Hum. Comput. Interact. 7(ISS): 175-197 (2023)
- Eva Mackamul, Géry Casiez, Sylvain Malacria:
  Exploring Visual Signifier Characteristics to Improve the Perception of Affordances of In-Place Touch Inputs. Proc. ACM Hum. Comput. Interact. 7(MHCI): 1-32 (2023)
- Mohamed Kari, Christian Holz:
  HandyCast: Phone-based Bimanual Input for Virtual Reality in Mobile and Space-Constrained Settings via Pose-and-Touch Transfer. CHI 2023: 528:1-528:15
- Peter Khoa Duc Tran, Purna Valli Anusha Gadepalli, Jaeyeon Lee, Aditya Shekhar Nittala:
  Augmenting On-Body Touch Input with Tactile Feedback Through Fingernail Haptics. CHI 2023: 79:1-79:13
- Johannes Hartwig, Pascal Ruppert, Dominik Henrich:
  Input and Editing of Force Profiles of In-Contact Robot Motions via a Touch Graphical User Interface. IRC 2023: 182-189
- Koki Iguma, Kazuya Murao, Hiroki Watanabe:
  Input Interface with Touch and Non-touch Interactions using Atmospheric Pressure for Hearable Devices. ISWC 2023: 1-5
- Panos Markopoulos, Alireza Khanshan, Sven Bormans, Gabriella Tisza, Ling Kang, Pieter Van Gorp:
  Comparative Evaluation of Touch-Based Input Techniques for Experience Sampling on Smartwatches. MUM 2023: 68-80
- Kieran Waugh, Mark McGill, Euan Freeman:
  Demonstrating Proxemic Cursor Input for Touchless Displays. SUI 2023: 37:1-37:2
- Kaori Ikematsu, Kunihiro Kato:
  ShiftTouch: Extending Touchscreens with Passive Interfaces using Small Occluded Area for Discrete Touch Input. TEI 2023: 11:1-11:15
- Tatsuya Kawasaki, Hiroyuki Manabe:
  LensTouch: Touch Input on Lens Surfaces of Smart Glasses. UIST (Adjunct Volume) 2023: 56:1-56:3
- Kyunghwan Kim, Geehyuk Lee:
  Virtual Rolling Temple: Expanding the Vertical Input Space of a Smart Glasses Touchpad. UIST (Adjunct Volume) 2023: 68:1-68:3
- 2022
- Mohamed Khamis, Karola Marky, Andreas Bulling, Florian Alt:
  User-centred multimodal authentication: securing handheld mobile devices using gaze and touch input. Behav. Inf. Technol. 41(10): 2061-2083 (2022)
- Fengyi Fang, Hongwei Zhang, Lishuang Zhan, Shihui Guo, Minying Zhang, Juncong Lin, Yipeng Qin, Hongbo Fu:
  Handwriting Velcro: Endowing AR Glasses with Personalized and Posture-adaptive Text Input Using Flexible Touch Sensor. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 6(4): 163:1-163:31 (2022)
- Juxiao Zhang, Xiaoqin Zeng:
  Multi-touch gesture recognition of Braille input based on Petri Net and RBF Net. Multim. Tools Appl. 81(14): 19395-19413 (2022)
- Yuhan Luo, Bongshin Lee, Young-Ho Kim, Eun Kyoung Choe:
  NoteWordy: Investigating Touch and Speech Input on Smartphones for Personal Data Capture. Proc. ACM Hum. Comput. Interact. 6(ISS): 568-591 (2022)
- Mihail Terenti, Radu-Daniel Vatavu:
  Measuring the User Experience of Vibrotactile Feedback on the Finger, Wrist, and Forearm for Touch Input on Large Displays. CHI Extended Abstracts 2022: 286:1-286:7
- Wen-Chin Li, Yung-Hsiang Liang, Wojciech Tomasz Korek, John J. H. Lin:
  Assessments on Human-Computer Interaction Using Touchscreen as Control Inputs in Flight Operations. HCI (6) 2022: 326-338
skipping 304 more matches
manage site settings
To protect your privacy, all features that rely on external API calls from your browser are turned off by default. You need to opt in for them to become active. All settings here will be stored as cookies in your web browser. For more information, see our F.A.Q.
Unpaywalled article links
Add open access links from unpaywall.org to the list of external document links (if available).
Privacy notice: By enabling the option above, your browser will contact the API of unpaywall.org to load hyperlinks to open access articles. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the Unpaywall privacy policy.
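As a sketch of what such an opt-in integration does under the hood, the snippet below looks up a DOI against the Unpaywall v2 REST API and pulls out an open-access link. The endpoint path (`/v2/{doi}?email=...`) and the `best_oa_location` response field follow Unpaywall's public API documentation; this is illustrative, not dblp's actual client code.

```python
import json
from typing import Optional
from urllib.request import urlopen

def extract_oa_url(record: dict) -> Optional[str]:
    """Pull the open-access URL out of an Unpaywall DOI record, if any.

    Per the API docs, `best_oa_location` is null when no open-access
    copy of the article is known.
    """
    location = record.get("best_oa_location")
    return location.get("url") if location else None

def best_oa_link(doi: str, email: str) -> Optional[str]:
    """Query the Unpaywall v2 API for one DOI.

    The email query parameter is required by the API for rate-limiting
    and contact purposes.
    """
    url = f"https://api.unpaywall.org/v2/{doi}?email={email}"
    with urlopen(url) as resp:
        return extract_oa_url(json.load(resp))
```

Splitting the response parsing into `extract_oa_url` keeps the network call separate from the logic, so the latter can be exercised against canned JSON.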
Archived links via Wayback Machine
For web pages which are no longer available, try to retrieve content from the Wayback Machine of the Internet Archive (if available).
Privacy notice: By enabling the option above, your browser will contact the API of archive.org to check for archived content of web pages that are no longer available. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the Internet Archive privacy policy.
Reference lists
Add a list of references from crossref.org, opencitations.net, and semanticscholar.org to record detail pages.
load references from crossref.org and opencitations.net
Privacy notice: By enabling the option above, your browser will contact the APIs of crossref.org, opencitations.net, and semanticscholar.org to load article reference information. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the Crossref privacy policy and the OpenCitations privacy policy, as well as the AI2 Privacy Policy covering Semantic Scholar.
Citation data
Add a list of citing articles from opencitations.net and semanticscholar.org to record detail pages.
load citations from opencitations.net
Privacy notice: By enabling the option above, your browser will contact the API of opencitations.net and semanticscholar.org to load citation information. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the OpenCitations privacy policy as well as the AI2 Privacy Policy covering Semantic Scholar.
OpenAlex data
Load additional information about publications from OpenAlex.
Privacy notice: By enabling the option above, your browser will contact the API of openalex.org to load additional information. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the information given by OpenAlex.
retrieved on 2024-09-25 23:41 CEST from data curated by the dblp team
all metadata released as open data under CC0 1.0 license
see also: Terms of Use | Privacy Policy | Imprint