- 2019
- Visually Grounded Interaction and Language (ViGIL), NeurIPS 2019 Workshop, Vancouver, Canada, December 13, 2019.
- Catalina Cangea, Eugene Belilovsky, Pietro Liò, Aaron C. Courville: VideoNavQA: Bridging the Gap between Visual and Embodied Question Answering. ViGIL@NeurIPS 2019
- Guan-Lin Chao, Abhinav Rastogi, Semih Yavuz, Dilek Hakkani-Tür, Jindong Chen, Ian R. Lane: Learning Question-Guided Video Representation for Multi-Turn Video Question Answering. ViGIL@NeurIPS 2019
- Geoffrey Cideron, Mathieu Seurin, Florian Strub, Olivier Pietquin: Self-Educated Language Agent with Hindsight Experience Replay for Instruction Following. ViGIL@NeurIPS 2019
- Shabnam Daghaghi, Anshumali Shrivastava, Tharun Medini: Cross-Modal Mapping for Generalized Zero-Shot Learning by Soft-Labeling. ViGIL@NeurIPS 2019
- Jean-Benoit Delbrouck: Modulated Self-attention Convolutional Network for VQA. ViGIL@NeurIPS 2019
- Jean-Benoit Delbrouck: Can adversarial training learn image captioning? ViGIL@NeurIPS 2019
- Tsu-Jui Fu, Yuta Tsuboi, Sosuke Kobayashi, Yuta Kikuchi: Learning from Observation-Only Demonstration for Task-Oriented Language Grounding via Self-Examination. ViGIL@NeurIPS 2019
- Chihiro Fujiyama, Ichiro Kobayashi: A Comprehensive Analysis of Semantic Compositionality in Text-to-Image Generation. ViGIL@NeurIPS 2019
- Alba Maria Hererra-Palacio, Carles Ventura, Carina Silberer, Ionut-Teodor Sorodoc, Gemma Boleda, Xavier Giró-i-Nieto: Recurrent Instance Segmentation using Sequences of Referring Expressions. ViGIL@NeurIPS 2019
- Gabriel Ilharco, Vihan Jain, Alexander Ku, Eugene Ie, Jason Baldridge: General Evaluation for Instruction Conditioned Navigation using Dynamic Time Warping. ViGIL@NeurIPS 2019
- T. S. Jayram, Vincent Albouy, Tomasz Kornuta, Emre Sevgen, Ahmet S. Ozcan: Visually Grounded Video Reasoning in Selective Attention Memory. ViGIL@NeurIPS 2019
- Douwe Kiela, Suvrat Bhooshan, Hamed Firooz, Davide Testuggine: Supervised Multimodal Bitransformers for Classifying Images and Text. ViGIL@NeurIPS 2019
- Olga Kovaleva, Chaitanya Shivade, Satyananda Kashyap, Karina Kanjaria, Adam Coy, Deddeh Ballah, Yufan Guo, Joy T. Wu, Alexandros Karargyris, David Beymer, Anna Rumshisky, Vandana V. Mukherjee: Visual Dialog for Radiology: Data Curation and First Steps. ViGIL@NeurIPS 2019
- Nikhil Krishnaswamy, James Pustejovsky: Situated Grounding Facilitates Multimodal Concept Learning for AI. ViGIL@NeurIPS 2019
- Alexander Kuhnle, Ann A. Copestake: What is needed for simple spatial language capabilities in VQA? ViGIL@NeurIPS 2019
- Shachi H. Kumar, Eda Okur, Saurav Sahay, Jonathan Huang, Lama Nachman: Leveraging Topics and Audio Features with Multimodal Attention for Audio Visual Scene-Aware Dialog. ViGIL@NeurIPS 2019
- Yen-Ling Kuo, Boris Katz, Andrei Barbu: Deep compositional robotic planners that follow natural language commands. ViGIL@NeurIPS 2019
- Farley Lai, Ning Xie, Derek Doran, Asim Kadav: Contextual Grounding of Natural Language Entities in Images. ViGIL@NeurIPS 2019
- Nicolas Lair, Cédric Colas, Rémy Portelas, Jean-Michel Dussoux, Peter F. Dominey, Pierre-Yves Oudeyer: Language Grounding through Social Interactions and Curiosity-Driven Multi-Goal Learning. ViGIL@NeurIPS 2019
- Angeliki Lazaridou, Anna Potapenko, Olivier Tieleman: Structural and functional learning for learning language use. ViGIL@NeurIPS 2019
- Jingxiang Lin, Unnat Jain, Alexander G. Schwing: A Simple Baseline for Visual Commonsense Reasoning. ViGIL@NeurIPS 2019
- Yassine Mrabet, Dina Demner-Fushman: On Agreements in Visual Understanding. ViGIL@NeurIPS 2019
- Jesse Mu, Percy Liang, Noah D. Goodman: Shaping Visual Representations with Language for Few-shot Classification. ViGIL@NeurIPS 2019
- Khanh Nguyen, Hal Daumé III: Help, Anna! Visual Navigation with Natural Multimodal Assistance via Retrospective Curiosity-Encouraging Imitation Learning. ViGIL@NeurIPS 2019
- Candace Ross, Cheahuychou Mao, Boris Katz, Andrei Barbu: Learning Language from Vision. ViGIL@NeurIPS 2019
- Homagni Saha, Vijay Venkataraman, Alberto Speranzon, Soumik Sarkar: A perspective on multi-agent communication for information fusion. ViGIL@NeurIPS 2019
- Vasu Sharma, Ankita Kalra, Louis-Philippe Morency: Induced Attention Invariance: Defending VQA Models against Adversarial Attacks. ViGIL@NeurIPS 2019
- Sanjay Subramanian, Sameer Singh, Matt Gardner: Analyzing Compositionality in Visual Question Answering. ViGIL@NeurIPS 2019
- Thomas M. Sutter, Imant Daunhawer, Julia E. Vogt: Multimodal Generative Learning Utilizing Jensen-Shannon-Divergence. ViGIL@NeurIPS 2019