Search dblp
Full-text search
- case-insensitive prefix search: default
  e.g., sig matches "SIGIR" as well as "signal"
- exact word search: append dollar sign ($) to word
  e.g., graph$ matches "graph", but not "graphics"
- boolean and: separate words by space
  e.g., codd model
- boolean or: connect words by pipe symbol (|)
  e.g., graph|network
Update May 7, 2017: Please note that we had to disable the phrase search operator (.) and the boolean not operator (-) due to technical problems. For the time being, phrase search queries will yield regular prefix search results, and search terms preceded by a minus will be interpreted as regular (positive) search terms.
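The query semantics above (prefix match by default, $ for exact words, space as AND, | as OR) can be sketched as a toy matcher. This is an illustrative re-implementation of the stated rules only, not dblp's actual search engine; the function names are our own.

```python
def term_matches(term: str, word: str) -> bool:
    """Match one query term against one word, case-insensitively."""
    term, word = term.lower(), word.lower()
    if term.endswith("$"):           # exact word search: graph$ != graphics
        return word == term[:-1]
    return word.startswith(term)     # prefix search is the default

def query_matches(query: str, text: str) -> bool:
    """Space-separated groups are AND-ed; '|' within a group is OR."""
    words = text.split()
    return all(
        any(term_matches(alt, w) for alt in group.split("|") for w in words)
        for group in query.split()
    )
```

For example, `query_matches("graph$", "graphics hardware")` is false, while `query_matches("graph|network", "neural network models")` is true.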
Author search results
no matches
Venue search results
no matches
Refine list
- refine by author / venue / type / access / year: no options (temporarily not available)
Publication search results
found 440 matches
- 2024
- Béatrice Biancardi, Maurizio Mancini, Brian Ravenet, Giovanna Varni: Modelling the "transactive memory system" in multimodal multiparty interactions. J. Multimodal User Interfaces 18(1): 103-117 (2024)
- Claudia Buchner, Johannes Kraus, Linda Miller, Martin Baumann: What is good? Exploring the applicability of a one item measure as a proxy for measuring acceptance in driver-vehicle interaction studies. J. Multimodal User Interfaces 18(2): 195-208 (2024)
- Marwa Chacha, Prosper Nyaki, Ariane Cuenen, Ansar Yasar, Geert Wets: Truck drivers' views on the road safety benefits of advanced driver assistance systems and Intelligent Transport Systems in Tanzania. J. Multimodal User Interfaces 18(2): 229-237 (2024)
- Martin Dobiasch, Stefan Oppl, Michael Stöckl, Arnold Baca: Pegasos: a framework for the creation of direct mobile coaching feedback systems. J. Multimodal User Interfaces 18(1): 1-19 (2024)
- Dan García-Carrillo, Roberto García, Xabiel G. Pañeda, Filipa Mourão, David Melendi, Víctor Corcoba Magaña, Sara Paiva: Testing driver warning systems for off-road industrial vehicles using a cyber-physical simulator. J. Multimodal User Interfaces 18(2): 179-194 (2024)
- Pär Gustavsson, Mikael Ljung Aust: In-vehicle nudging for increased Adaptive Cruise Control use: a field study. J. Multimodal User Interfaces 18(2): 257-271 (2024)
- Weitao Jiang, Bingxin Zhang, Ruiqi Sun, Dong Zhang, Shan Hu: A study on the attention of people with low vision to accessibility guidance signs. J. Multimodal User Interfaces 18(1): 87-101 (2024)
- Xuan Liu, Jiachen Ma, Qiang Wang: A social robot as your reading companion: exploring the relationships between gaze patterns and knowledge gains. J. Multimodal User Interfaces 18(1): 21-41 (2024)
- Zahra J. Muhsin, Rami Qahwaji, Faruque Ghanchi, Majid A. Al-Taee: Review of substitutive assistive tools and technologies for people with visual impairments: recent advancements and prospects. J. Multimodal User Interfaces 18(1): 135-156 (2024)
- Martha Papadogianni, Mehmet Ercan Altinsoy, Areti Andreopoulou: Multimodal exploration in elementary music classroom. J. Multimodal User Interfaces 18(1): 55-68 (2024)
- Ankit R. Patel, Philipp Wintersberger, Dustin J. Souders, Tiziana C. Callari, Tanja Stoll: Special issue on "User-centered advanced driver assistance systems (UCADAS)". J. Multimodal User Interfaces 18(2): 157-158 (2024)
- Subin Raj, L. R. D. Murthy, Thanikai Adhithiyan Shanmugam, Gyanig Kumar, Amaresh Chakrabarti, Pradipta Biswas: Augmented reality and deep learning based system for assisting assembly process. J. Multimodal User Interfaces 18(1): 119-133 (2024)
- Suprakas Saren, Abhishek Mukhopadhyay, Debasish Ghose, Pradipta Biswas: Comparing alternative modalities in the context of multimodal human-robot interaction. J. Multimodal User Interfaces 18(1): 69-85 (2024)
- Martina Schuß, Luca Pizzoni, Andreas Riener: Human or robot? Exploring different avatar appearances to increase perceived security in shared automated vehicles. J. Multimodal User Interfaces 18(2): 209-228 (2024)
- Dungar Singh, Pritikana Das, Indrajit Ghosh: Prediction of pedestrian crossing behaviour at unsignalized intersections using machine learning algorithms: analysis and comparison. J. Multimodal User Interfaces 18(2): 239-256 (2024)
- Moustafa Tabbarah, Yusheng Cao, Ziming Fang, Lingyu Li, Myounghoon Jeon: Sonically-enhanced in-vehicle air gesture interactions: evaluation of different spearcon compression rates. J. Multimodal User Interfaces 18(2): 159-177 (2024)
- Luca Turchet, Simone Luiten, Tjebbe Treub, Marloes van der Burgt, Costanza Siani, Alberto Boem: Hearing loss prevention at loud music events via real-time visuo-haptic feedback. J. Multimodal User Interfaces 18(1): 43-53 (2024)
- 2023
- Ali Abdulrazzaq Alsamarei, Bahar Sener: Remote social touch framework: a way to communicate physical interactions across long distances. J. Multimodal User Interfaces 17(2): 79-104 (2023)
- Haram Choi, Joung-Huem Kwon, Sanghun Nam: Research on the application of gaze visualization interface on virtual reality training systems. J. Multimodal User Interfaces 17(3): 203-211 (2023)
- Sophie Dewil, Shterna Kuptchik, Mingxiao Liu, Sean Sanford, Troy Bradbury, Elena Davis, Amanda Clemente, Raviraj Nataraj: The cognitive basis for virtual reality rehabilitation of upper-extremity motor function after neurotraumas. J. Multimodal User Interfaces 17(3): 105-120 (2023)
- Elias Elmquist, Alexander Bock, Jonas Lundberg, Anders Ynnerman, Niklas Rönnberg: SonAir: the design of a sonification of radar data for air traffic control. J. Multimodal User Interfaces 17(3): 137-149 (2023)
- Joe Fitzpatrick, Flaithrí Neff: Perceptually congruent sonification of auditory line charts. J. Multimodal User Interfaces 17(4): 285-300 (2023)
- Miao Huang, Chien-Hsiung Chen: The effects of olfactory cues as interface notifications on a mobile phone. J. Multimodal User Interfaces 17(1): 21-32 (2023)
- Adrian Benigno Latupeirissa, Roberto Bresin: PepperOSC: enabling interactive sonification of a robot's expressive movement. J. Multimodal User Interfaces 17(4): 231-239 (2023)
- Adrian Benigno Latupeirissa, Roberto Bresin: Correction to: PepperOSC: enabling interactive sonification of a robot's expressive movement. J. Multimodal User Interfaces 17(4): 241 (2023)
- Simon Linke, Rolf Bader, Robert Mores: Model-based sonification based on the impulse pattern formulation. J. Multimodal User Interfaces 17(4): 243-251 (2023)
- Candy Olivia Mawalim, Shogo Okada, Yukiko I. Nakano, Masashi Unoki: Personality trait estimation in group discussions using multimodal analysis and speaker embedding. J. Multimodal User Interfaces 17(2): 47-63 (2023)
- Guoxuan Ning, Brianna Grant, Bill Kapralos, Alvaro J. Uribe-Quevedo, K. C. Collins, Kamen Kanev, Adam Dubrowski: Understanding virtual drilling perception using sound, and kinesthetic cues obtained with a mouse and keyboard. J. Multimodal User Interfaces 17(3): 151-163 (2023)
- Guoxuan Ning, Brianna Grant, Bill Kapralos, Alvaro Uribe-Quevedo, K. C. Collins, Kamen Kanev, Adam Dubrowski: Correction to: Understanding virtual drilling perception using sound, and kinesthetic cues obtained with a mouse and keyboard. J. Multimodal User Interfaces 17(3): 165 (2023)
- Lucas El Raghibi, Ange Pascal Muhoza, Jeanne Evrard, Hugo Ghazi, Grégoire van Oldeneel tot Oldenzeel, Victorien Sonneville, Benoît Macq, Renaud Ronsse: Virtual reality can mediate the learning phase of upper limb prostheses supporting a better-informed selection process. J. Multimodal User Interfaces 17(1): 33-46 (2023)
410 more matches not shown
retrieved on 2024-10-05 09:42 CEST from data curated by the dblp team
all metadata released as open data under CC0 1.0 license
see also: Terms of Use | Privacy Policy | Imprint