Search dblp
Full-text search
Please enter a search query
- case-insensitive prefix search: default
  e.g., sig matches "SIGIR" as well as "signal"
- exact word search: append dollar sign ($) to word
  e.g., graph$ matches "graph", but not "graphics"
- boolean and: separate words by space
  e.g., codd model
- boolean or: connect words by pipe symbol (|)
  e.g., graph|network
Update May 7, 2017: Please note that we had to disable the phrase search operator (.) and the boolean not operator (-) due to technical problems. For the time being, phrase search queries will yield regular prefix search results, and search terms preceded by a minus will be interpreted as regular (positive) search terms.
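The operators above compose mechanically: space means "and", a pipe means "or", and a trailing dollar sign forces an exact word match. As an illustrative, unofficial sketch (the helper name `dblp_query` is hypothetical, not part of dblp), a query string for the search box can be assembled like this:

```python
def dblp_query(and_terms, exact=(), or_groups=()):
    """Assemble a dblp full-text query string.

    and_terms: prefix-search words, ANDed together (separated by spaces).
    exact: words that must match exactly (each gets a trailing '$').
    or_groups: iterables of alternatives, joined with '|'.
    """
    parts = list(and_terms)
    parts += [word + "$" for word in exact]            # exact word search
    parts += ["|".join(group) for group in or_groups]  # boolean or
    return " ".join(parts)                             # space = boolean and

# e.g., prefix-match "codd", exact-match "graph", and "graph" OR "network":
print(dblp_query(["codd"], exact=["graph"], or_groups=[["graph", "network"]]))
# -> codd graph$ graph|network
```

The resulting string can be pasted directly into the search box; it carries no special quoting, since the syntax uses only spaces, `$`, and `|`.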
Author search results
no matches
Venue search results
no matches
Publication search results
found 48 matches
- 2005
- Meghan Allen, Jennifer Gluck, Karon E. MacLean, Erwin Tang: An initial usability assessment for symbolic haptic rendering of music parameters. ICMI 2005: 244-251
- Lynne Baillie, Raimund Schatz: Exploring multimodality in the laboratory and the field. ICMI 2005: 100-107
- Koray Balci: XfaceEd: authoring tool for embodied conversational agents. ICMI 2005: 208-213
- Melanie Baljko: The contrastive evaluation of unimodal and multimodal interfaces for voice output communication aids. ICMI 2005: 301-308
- Paulo Barthelmess, Edward C. Kaiser, Xiao Huang, David Demirdjian: Distributed pointing for multimodal collaboration over sketched diagrams. ICMI 2005: 10-17
- Alberto Battocchi, Fabio Pianesi, Dina Goren-Bar: A first evaluation study of a database of kinetic facial expressions (DaFEx). ICMI 2005: 214-221
- Silvia Berti, Fabio Paternò: Migratory MultiModal interfaces in MultiDevice environments. ICMI 2005: 92-99
- Oliver Brdiczka, Jérôme Maisonnasse, Patrick Reignier: Automatic detection of interaction groups. ICMI 2005: 32-36
- Fang Chen, Eric H. C. Choi, Julien Epps, Serge Lichman, Natalie Ruiz, Yu (David) Shi, Ronnie Taib, Mike Wu: A study of manual gesture-based selection for the PEMMI multimodal transport management interface. ICMI 2005: 274-281
- Maria Danninger, G. Flaherty, Keni Bernardin, Hazim Kemal Ekenel, Thilo Köhler, Robert G. Malkin, Rainer Stiefelhagen, Alex Waibel: The connector: facilitating context-aware communication. ICMI 2005: 69-75
- Marc O. Ernst: The "puzzle" of sensory perception: putting together multisensory information. ICMI 2005: 1
- Daniel Gatica-Perez, Guillaume Lathoud, Jean-Marc Odobez, Iain McCowan: Multimodal multispeaker probabilistic tracking in meetings. ICMI 2005: 183-190
- Umberto Giraudo, Monica Bordegoni: Using observations of real designers at work to inform the development of a novel haptic modeling system. ICMI 2005: 230-235
- Peter Gorniak, Deb Roy: Probabilistic grounding of situated speech using plan recognition and reference resolution. ICMI 2005: 138-143
- Marc Hanheide, Christian Bauckhage, Gerhard Sagerer: Combining environmental cues & head gestures to interact with wearable devices. ICMI 2005: 25-31
- Jose L. Hernandez-Rebollar: Gesture-driven American sign language phraselator. ICMI 2005: 288-292
- Md. Altab Hossain, Rahmadi Kurnia, Akio Nakamura, Yoshinori Kuno: Interactive vision to detect target objects for helper robots. ICMI 2005: 293-300
- Giancarlo Iannizzotto, Carlo Costanzo, Francesco La Rosa, Pietro Lanzafame: A multimodal perceptual user interface for video-surveillance environments. ICMI 2005: 45-52
- Hiroshi Ishiguro: Interactive humanoids and androids as ideal interfaces for humans. ICMI 2005: 137
- Marc Erich Latoschik: A user interface framework for multimodal VR interactions. ICMI 2005: 76-83
- Bee-Wah Lee, Alvin W. Yeo: Integrating sketch and speech inputs using spatial information. ICMI 2005: 2-9
- Shuyin Li, Axel Haasch, Britta Wrede, Jannik Fritsch, Gerhard Sagerer: Human-style interaction with a robot for cooperative learning of scene objects. ICMI 2005: 151-158
- Rebecca Lunsford, Sharon L. Oviatt, Rachel Coulston: Audio-visual cues distinguishing self- from system-directed speech in younger and older adults. ICMI 2005: 167-174
- Louis-Philippe Morency, Candace L. Sidner, Christopher Lee, Trevor Darrell: Contextual recognition of head gestures. ICMI 2005: 18-24
- Tomoyuki Morita, Yasushi Hirano, Yasuyuki Sumi, Shoji Kajita, Kenji Mase: A pattern mining method for interpretation of interaction. ICMI 2005: 267-273
- Kai Nickel, Tobias Gehrig, Rainer Stiefelhagen, John W. McDonough: A joint particle filter for audio-visual speaker tracking. ICMI 2005: 61-68
- Elena Not, Koray Balci, Fabio Pianesi, Massimo Zancanaro: Synthetic characters as multichannel interfaces. ICMI 2005: 200-207
- Kazuhiro Otsuka, Yoshinao Takemae, Junji Yamato: A probabilistic inference of multiparty-conversation structure based on Markov-switching models of gaze patterns, head directions, and utterances. ICMI 2005: 191-198
- Jiazhi Ou, Lui Min Oh, Susan R. Fussell, Tal Blum, Jie Yang: Analyzing and predicting focus of attention in remote collaborative tasks. ICMI 2005: 116-123
- Alex Pentland: Socially aware computation and communication. ICMI 2005: 199
skipping 18 more matches
manage site settings
To protect your privacy, all features that rely on external API calls from your browser are turned off by default. You need to opt-in for them to become active. All settings here will be stored as cookies with your web browser. For more information see our F.A.Q.
Unpaywalled article links
Add open access links from unpaywall.org to the list of external document links (if available).
Privacy notice: By enabling the option above, your browser will contact the API of unpaywall.org to load hyperlinks to open access articles. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the Unpaywall privacy policy.
Archived links via Wayback Machine
For web pages which are no longer available, try to retrieve content from the Wayback Machine of the Internet Archive (if available).
Privacy notice: By enabling the option above, your browser will contact the API of archive.org to check for archived content of web pages that are no longer available. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the Internet Archive privacy policy.
Reference lists
Add a list of references from crossref.org, opencitations.net, and semanticscholar.org to record detail pages.
load references from crossref.org and opencitations.net
Privacy notice: By enabling the option above, your browser will contact the APIs of crossref.org, opencitations.net, and semanticscholar.org to load article reference information. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the Crossref privacy policy and the OpenCitations privacy policy, as well as the AI2 Privacy Policy covering Semantic Scholar.
Citation data
Add a list of citing articles from opencitations.net and semanticscholar.org to record detail pages.
load citations from opencitations.net
Privacy notice: By enabling the option above, your browser will contact the API of opencitations.net and semanticscholar.org to load citation information. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the OpenCitations privacy policy as well as the AI2 Privacy Policy covering Semantic Scholar.
OpenAlex data
Load additional information about publications from openalex.org.
Privacy notice: By enabling the option above, your browser will contact the API of openalex.org to load additional information. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the information given by OpenAlex.
retrieved on 2024-05-11 09:41 CEST from data curated by the dblp team
all metadata released as open data under CC0 1.0 license
see also: Terms of Use | Privacy Policy | Imprint