Search dblp
Full-text search
- case-insensitive prefix search (default): e.g., sig matches "SIGIR" as well as "signal"
- exact word search: append a dollar sign ($) to the word; e.g., graph$ matches "graph", but not "graphics"
- boolean and: separate words by a space; e.g., codd model
- boolean or: connect words by a pipe symbol (|); e.g., graph|network
Update May 7, 2017: Please note that we had to disable the phrase search operator (.) and the boolean not operator (-) due to technical problems. For the time being, phrase search queries will yield regular prefix search results, and search terms preceded by a minus sign will be interpreted as regular (positive) search terms.
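As an illustration only (this is not dblp's actual implementation, and the function names are assumptions), the four operators above can be sketched as a small matcher over the words of a record:

```python
def term_matches(term: str, word: str) -> bool:
    """Match one search term against one word."""
    word = word.lower()
    if term.endswith("$"):                    # exact word search: graph$
        return word == term[:-1].lower()
    return word.startswith(term.lower())      # case-insensitive prefix search

def query_matches(query: str, words: list[str]) -> bool:
    """Space-separated terms = boolean and; pipe (|) = boolean or."""
    for conjunct in query.split():            # every conjunct must match
        alternatives = conjunct.split("|")    # any alternative may match
        if not any(term_matches(t, w) for t in alternatives for w in words):
            return False
    return True

# "sig" matches "SIGIR" as well as "signal"
assert term_matches("sig", "SIGIR") and term_matches("sig", "signal")
# "graph$" matches "graph", but not "graphics"
assert term_matches("graph$", "graph") and not term_matches("graph$", "graphics")
# "codd model": both terms must occur somewhere in the record
assert query_matches("codd model", ["codd", "relational", "model"])
```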
Author search results
no matches
Venue search results
no matches
Publication search results
found 1,768 matches
2024

- Tobias Rieger, Dietrich Manzey: Understanding the Impact of Time Pressure and Automation Support in a Visual Search Task. Hum. Factors 66(3): 770-786 (2024)
- Meirav Taieb-Maimon, Eden Ya'akobi, Nevo Itzhak, Yossi Zaltsman: Comparing Visual Encodings for the Task of Anomaly Detection. Int. J. Hum. Comput. Interact. 40(2): 357-375 (2024)
- Alessandro Suglia, Ioannis Konstas, Oliver Lemon: Visually Grounded Language Learning: A Review of Language Games, Datasets, Tasks, and Models. J. Artif. Intell. Res. 79: 173-239 (2024)
- K. Saranya, Murugesa Pandiyan Paulraj, C. R. Hema, S. Nithya: Fractal based feature extraction technique for classifying EEG signal for color visualization tasks. J. Intell. Fuzzy Syst. 46(2): 4315-4324 (2024)
- Shiri Bar-Or, Thomas J. Baumgarten, Biyu J. He: Neural Mechanisms Determining the Duration of Task-free, Self-paced Visual Perception. J. Cogn. Neurosci. 36(5): 756-775 (2024)
- Cihan Acar, Kuluhan Binici, Alp Tekirdag, Yan Wu: Visual-Policy Learning Through Multi-Camera View to Single-Camera View Knowledge Distillation for Robot Manipulation Tasks. IEEE Robotics Autom. Lett. 9(1): 691-698 (2024)
- Piaopiao Jin, Bidan Huang, Wang Wei Lee, Tiefeng Li, Wei Yang: Visual-Force-Tactile Fusion for Gentle Intricate Insertion Tasks. IEEE Robotics Autom. Lett. 9(5): 4830-4837 (2024)
- Mirko Nava, Nicholas Carlotti, Luca Crupi, Daniele Palossi, Alessandro Giusti: Self-Supervised Learning of Visual Robot Localization Using LED State Prediction as a Pretext Task. IEEE Robotics Autom. Lett. 9(4): 3363-3370 (2024)
- Yun Wu, Zhongshi Zhang, Yao Zhang, Bin Zheng, Farzad Aghazadeh: Pupil Response in Visual Tracking Tasks: The Impacts of Task Load, Familiarity, and Gaze Position. Sensors 24(8): 2545 (2024)
- Jingdong Zhao, Zhaomin Wang, Liangliang Zhao, Hong Liu: A Learning-Based Two-Stage Method for Submillimeter Insertion Tasks With Only Visual Inputs. IEEE Trans. Ind. Electron. 71(7): 7381-7390 (2024)
- Guangzheng Zhang, Shuting Wang, Yuanlong Xie, Sheng Quan Xie, Yiming Hu, Tifan Xiong: A Task-Oriented Grasping Framework Guided by Visual Semantics for Mobile Manipulators. IEEE Trans. Instrum. Meas. 73: 1-13 (2024)
- Chao Liu, Edric John Cruz Nacpil, Wenbin Hou, Yaling Qin, Rencheng Zheng: Evaluation of Visual Risk Perception of Automated Driving Tasks by Analyzing Gaze Pattern Dispersion. IEEE Trans. Intell. Veh. 9(1): 775-786 (2024)
- Sonia Castelo, João Rulff, Erin McGowan, Bea Steers, Guande Wu, Shaoyu Chen, Irán R. Román, Roque Lopez, Ethan Brewer, Chen Zhao, Jing Qian, Kyunghyun Cho, He He, Qi Sun, Huy T. Vo, Juan Pablo Bello, Michael Krone, Cláudio T. Silva: Visualization of AI-Assisted Task Guidance in AR. IEEE Trans. Vis. Comput. Graph. 30(1): 1313-1323 (2024)
- Weiwei Gu, Anant Sah, Nakul Gopalan: Interactive Visual Task Learning for Robots. AAAI 2024: 10297-10305
- Weiwei Gu, Anant Sah, Nakul Gopalan: Interactive Visual Task Learning for Robots. AAAI 2024: 23793-23795
- Ruiqian Nai, Zixin Wen, Ji Li, Yuanzhi Li, Yang Gao: Revisiting Disentanglement in Downstream Tasks: A Study on Its Necessity for Abstract Visual Reasoning. AAAI 2024: 14405-14413
- Theresa Prinz, Klaus Bengler: A Human-Centered Evaluation of Visualization Techniques for Teleoperated Assembly Tasks for Non-Expert Users. HRI (Companion) 2024: 842-846
- Alvitta Ottley: The Dance of Logic and Unpredictability: Examining the Predictability of User Behavior on Visual Analytics Tasks. VISIGRAPP 2024: 11-20
- Stefano Stradiotti, Nicolas Emiliani, Emanuela Marcelli, Laura Cercenelli: Understanding How Different Visual Aids for Augmented Reality Influence Tool-Patient Alignment in Surgical Tasks: A Preliminary Study. VISIGRAPP (1): GRAPP, HUCAPP, IVAPP 2024: 616-622
- Nelusa Pathmanathan, Tobias Rau, Xiliu Yang, Aimée Sousa Calepso, Felix Amtsberg, Achim Menges, Michael Sedlmair, Kuno Kurzhals: Eyes on the Task: Gaze Analysis of Situated Visualization for Collaborative Tasks. VR 2024: 785-795
- Haonan Guo, Xin Su, Chen Wu, Bo Du, Liangpei Zhang, Deren Li: Remote Sensing ChatGPT: Solving Remote Sensing Tasks with ChatGPT and Visual Models. CoRR abs/2401.09083 (2024)
- Amir M. Mansourian, Vladimir Somers, Christophe De Vleeschouwer, Shohreh Kasaei: Multi-task Learning for Joint Re-identification, Team Affiliation, and Role Classification for Sports Visual Tracking. CoRR abs/2401.09942 (2024)
- Julie Tores, Lucile Sassatelli, Hui-Yin Wu, Clement Bergman, Lea Andolfi, Victor Ecrement, Frédéric Precioso, Thierry Devars, Magali Guaresi, Virginie Julliard, Sarah Lecossais: Visual Objectification in Films: Towards a New AI Task for Video Interpretation. CoRR abs/2401.13296 (2024)
- Jing Yu Koh, Robert Lo, Lawrence Jang, Vikram Duvvur, Ming Chong Lim, Po-Yu Huang, Graham Neubig, Shuyan Zhou, Ruslan Salakhutdinov, Daniel Fried: VisualWebArena: Evaluating Multimodal Agents on Realistic Visual Web Tasks. CoRR abs/2401.13649 (2024)
- Tianhe Ren, Shilong Liu, Ailing Zeng, Jing Lin, Kunchang Li, He Cao, Jiayu Chen, Xinyu Huang, Yukang Chen, Feng Yan, Zhaoyang Zeng, Hao Zhang, Feng Li, Jie Yang, Hongyang Li, Qing Jiang, Lei Zhang: Grounded SAM: Assembling Open-World Models for Diverse Visual Tasks. CoRR abs/2401.14159 (2024)
- Pierre Marza, Laëtitia Matignon, Olivier Simonin, Christian Wolf: Task-conditioned adaptation of visual features in multi-task policy learning. CoRR abs/2402.07739 (2024)
- Mirko Nava, Nicholas Carlotti, Luca Crupi, Daniele Palossi, Alessandro Giusti: Self-Supervised Learning of Visual Robot Localization Using LED State Prediction as a Pretext Task. CoRR abs/2402.09886 (2024)
- Zhiyang Xu, Chao Feng, Rulin Shao, Trevor Ashby, Ying Shen, Di Jin, Yu Cheng, Qifan Wang, Lifu Huang: Vision-Flan: Scaling Human-Labeled Tasks in Visual Instruction Tuning. CoRR abs/2402.11690 (2024)
- Moritz Lange, Raphael C. Engelhardt, Wolfgang Konen, Laurenz Wiskott: Interpretable Brain-Inspired Representations Improve RL Performance on Visual Navigation Tasks. CoRR abs/2402.12067 (2024)
- Truong Thanh Hung Nguyen, Tobias Clement, Phuc Truong Loc Nguyen, Nils Kemmerzell, Van Binh Truong, Vo Thanh Khang Nguyen, Mohamed Abdelaal, Hung Cao: LangXAI: Integrating Large Vision Models for Generating Textual Explanations to Enhance Explainability in Visual Perception Tasks. CoRR abs/2402.12525 (2024)
skipping 1,738 more matches
retrieved on 2024-05-07 02:51 CEST from data curated by the dblp team
all metadata released as open data under CC0 1.0 license