Search dblp
Full-text search
- case-insensitive prefix search (default): e.g., sig matches "SIGIR" as well as "signal"
- exact word search: append a dollar sign ($) to the word; e.g., graph$ matches "graph", but not "graphics"
- boolean and: separate words by a space; e.g., codd model
- boolean or: connect words by a pipe symbol (|); e.g., graph|network
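The query semantics above can be sketched in a few lines of Python. This is only an illustrative approximation of the described syntax, not dblp's actual implementation; the function names `term_matches` and `query_matches` are invented for this example.

```python
import re

def term_matches(term: str, word: str) -> bool:
    """Match one query term against one document word.

    A term ending in '$' requires an exact, case-insensitive word match;
    any other term is treated as a case-insensitive prefix.
    """
    if term.endswith("$"):
        return word.lower() == term[:-1].lower()
    return word.lower().startswith(term.lower())

def query_matches(query: str, text: str) -> bool:
    """Evaluate a query against a text: space means AND, '|' means OR."""
    words = re.findall(r"\w+", text.lower())
    for conjunct in query.split():          # every space-separated part must match
        alternatives = conjunct.split("|")  # any pipe-separated alternative suffices
        if not any(term_matches(t, w) for t in alternatives for w in words):
            return False
    return True
```

For example, under these rules `query_matches("graph$", "graphics pipeline")` is false, while `query_matches("graph|network", "social network analysis")` is true.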
Update May 7, 2017: Please note that we had to disable the phrase search operator (.) and the boolean not operator (-) due to technical problems. For the time being, phrase search queries will yield regular prefix search results, and search terms preceded by a minus will be interpreted as regular (positive) search terms.
Author search results
no matches
Venue search results
no matches
Publication search results
found 70 matches
- 2004
- Pyush Agrawal, Ingmar Rauschert, Keerati Inochanon, Levent Bolelli, Sven Fuhrmann, Isaac Brewer, Guoray Cai, Alan M. MacEachren, Rajeev Sharma: Multimodal interface platform for geographical information systems (GeoMIP) in crisis management. ICMI 2004: 339-340
- Paulo Barthelmess, Clarence A. Ellis: The ThreadMill architecture for stream-oriented human communication analysis applications. ICMI 2004: 61-68
- Rémi Bastide, David Navarre, Philippe A. Palanque, Amélie Schyn, Pierre Dragicevic: A model-based approach for real-time embedded multimodal systems in military aircrafts. ICMI 2004: 243-250
- Emily Bennett: Projection augmented models: the effect of haptic feedback on subjective and objective human factors. ICMI 2004: 347
- Niels Ole Bernsen, Laila Dybkjær: Evaluation of spoken multimodal conversation. ICMI 2004: 38-45
- Péter Pál Boda: A maximum entropy based approach for multimodal integration. ICMI 2004: 337-338
- Adam Bodnar, Richard Corbett, Dmitry Nekrasovski: AROMA: ambient awareness through olfaction in a messaging application. ICMI 2004: 183-190
- Levent Bolelli: Multimodal response generation in GIS. ICMI 2004: 355
- Levent Bolelli, Guoray Cai, Hongmei Wang, Bita Mortazavi, Ingmar Rauschert, Sven Fuhrmann, Rajeev Sharma, Alan M. MacEachren: Multimodal interaction for distributed collaboration. ICMI 2004: 327-328
- Jullien Bouchet, Laurence Nigay, Thierry Ganille: ICARE software components for rapidly developing multimodal interfaces. ICMI 2004: 251-258
- Laroussi Bouguila, Florian Evéquoz, Michèle Courant, Béat Hirsbrunner: Walking-pad: a step-in-place locomotion interface for virtual environments. ICMI 2004: 77-81
- Carlos Busso, Zhigang Deng, Serdar Yildirim, Murtaza Bulut, Chul Min Lee, Abe Kazemzadeh, Sungbok Lee, Ulrich Neumann, Shrikanth S. Narayanan: Analysis of emotion recognition using facial expressions, speech and multimodal information. ICMI 2004: 205-211
- Rajesh Chandrasekaran: Using language structure for adaptive multimodal language acquisition. ICMI 2004: 345
- Songsak Channarukul: Adaptations of multimodal content in dialog systems targeting heterogeneous devices. ICMI 2004: 341
- Songsak Channarukul, Susan Weber McRoy, Syed S. Ali: MULTIFACE: multimodal content adaptations for heterogeneous devices. ICMI 2004: 319-320
- Lei Chen: Utilizing gestures to better understand dynamic structure of human communication. ICMI 2004: 342
- Datong Chen, Robert G. Malkin, Jie Yang: Multimodal detection of human interaction events in a nursing home environment. ICMI 2004: 82-89
- Joseph M. Dalton, Ali Ahmad, Kay M. Stanney: Command and control resource performance predictor (C2RP2). ICMI 2004: 321-322
- David Demirdjian, Kevin W. Wilson, Michael Siracusa, Trevor Darrell: Real-time audio-visual tracking for meeting analysis. ICMI 2004: 331-332
- Pierre Dragicevic, Jean-Daniel Fekete: Support for input adaptability in the ICON toolkit. ICMI 2004: 212-219
- Jacob Eisenstein: Gestural cues for speech understanding. ICMI 2004: 344
- Jacob Eisenstein, Randall Davis: Visual and linguistic information in gesture classification. ICMI 2004: 113-120
- Myra P. van Esch-Bussemakers, Anita H. M. Cremers: User walkthrough of multimodal access to multidimensional databases. ICMI 2004: 220-226
- Brian F. Goldiez, Glenn A. Martin, Jason Daly, Donald Washburn, Todd Lazarus: Software infrastructure for multi-modal virtual environments. ICMI 2004: 303-308
- Sébastien Grange, Terrence Fong, Charles Baur: M/ORIS: a medical/operating room interaction system. ICMI 2004: 159-166
- Curry I. Guinn, Robert C. Hubal: An evaluation of virtual human technology in informational kiosks. ICMI 2004: 297-302
- Eric R. Hamilton: Agent and library augmented shared knowledge areas (ALASKA). ICMI 2004: 317-318
- Mary P. Harper, Elizabeth Shriberg: Multimodal model integration for sentence unit detection. ICMI 2004: 121-128
- Timothy J. Hazen, Kate Saenko, Chia-Hao La, James R. Glass: A segment-based audio-visual speech recognizer: data collection, development, and initial experiments. ICMI 2004: 235-242
- Gunther Heidemann, Ingo Bax, Holger Bekel: Multimodal interaction in an augmented reality scenario. ICMI 2004: 53-60
skipping 40 more matches
retrieved on 2024-05-30 05:28 CEST from data curated by the dblp team
all metadata released as open data under CC0 1.0 license