Yash Kant
2020 – today

2024
- [c10] Yash Kant, Aliaksandr Siarohin, Ziyi Wu, Michael Vasilkovsky, Guocheng Qian, Jian Ren, Riza Alp Güler, Bernard Ghanem, Sergey Tulyakov, Igor Gilitschenski: SPAD: Spatially Aware Multi-View Diffusers. CVPR 2024: 10026-10038
- [c9] Akash Karthikeyan, Robert Ren, Yash Kant, Igor Gilitschenski: AvatarOne: Monocular 3D Human Animation. WACV 2024: 3635-3645
- [i14] Guocheng Qian, Junli Cao, Aliaksandr Siarohin, Yash Kant, Chaoyang Wang, Michael Vasilkovsky, Hsin-Ying Lee, Yuwei Fang, Ivan Skorokhodov, Peiye Zhuang, Igor Gilitschenski, Jian Ren, Bernard Ghanem, Kfir Aberman, Sergey Tulyakov: AToM: Amortized Text-to-Mesh using 2D Diffusion. CoRR abs/2402.00867 (2024)
- [i13] Yash Kant, Ziyi Wu, Michael Vasilkovsky, Guocheng Qian, Jian Ren, Riza Alp Güler, Bernard Ghanem, Sergey Tulyakov, Igor Gilitschenski, Aliaksandr Siarohin: SPAD: Spatially Aware Multiview Diffusers. CoRR abs/2402.05235 (2024)
- [i12] Zhenggang Tang, Peiye Zhuang, Chaoyang Wang, Aliaksandr Siarohin, Yash Kant, Alexander G. Schwing, Sergey Tulyakov, Hsin-Ying Lee: Pixel-Aligned Multi-View Generation with Depth Guided Decoder. CoRR abs/2408.14016 (2024)
- [i11] Derek Tam, Yash Kant, Brian Lester, Igor Gilitschenski, Colin Raffel: Realistic Evaluation of Model Merging for Compositional Generalization. CoRR abs/2409.18314 (2024)

2023
- [c8] Tianshu Kuai, Akash Karthikeyan, Yash Kant, Ashkan Mirzaei, Igor Gilitschenski: CAMM: Building Category-Agnostic and Animatable 3D Models from Monocular Videos. CVPR Workshops 2023: 6587-6597
- [c7] Yash Kant, Aliaksandr Siarohin, Riza Alp Güler, Menglei Chai, Jian Ren, Sergey Tulyakov, Igor Gilitschenski: Invertible Neural Skinning. CVPR 2023: 8715-8725
- [c6] Yash Kant, Aliaksandr Siarohin, Michael Vasilkovsky, Riza Alp Güler, Jian Ren, Sergey Tulyakov, Igor Gilitschenski: Repurposing Diffusion Inpainters for Novel View Synthesis. SIGGRAPH Asia 2023: 16:1-16:12
- [i10] Aniket Agarwal, Alex Zhang, Karthik Narasimhan, Igor Gilitschenski, Vishvak Murahari, Yash Kant: Building Scalable Video Understanding Benchmarks through Sports. CoRR abs/2301.06866 (2023)
- [i9] Yash Kant, Aliaksandr Siarohin, Riza Alp Güler, Menglei Chai, Jian Ren, Sergey Tulyakov, Igor Gilitschenski: Invertible Neural Skinning. CoRR abs/2302.09227 (2023)
- [i8] Tianshu Kuai, Akash Karthikeyan, Yash Kant, Ashkan Mirzaei, Igor Gilitschenski: CAMM: Building Category-Agnostic and Animatable 3D Models from Monocular Videos. CoRR abs/2304.06937 (2023)
- [i7] Yash Kant, Aliaksandr Siarohin, Michael Vasilkovsky, Riza Alp Güler, Jian Ren, Sergey Tulyakov, Igor Gilitschenski: iNVS: Repurposing Diffusion Inpainters for Novel View Synthesis. CoRR abs/2310.16167 (2023)
- [i6] Yen-Chi Cheng, Chieh Hubert Lin, Chaoyang Wang, Yash Kant, Sergey Tulyakov, Alexander G. Schwing, Liangyan Gui, Hsin-Ying Lee: Virtual Pets: Animatable Animal Generation in 3D Scenes. CoRR abs/2312.14154 (2023)

2022
- [c5] Ashkan Mirzaei, Yash Kant, Jonathan Kelly, Igor Gilitschenski: LaTeRF: Label and Text Driven Object Radiance Fields. ECCV (3) 2022: 20-36
- [c4] Yash Kant, Arun Ramachandran, Sriram Yenamandra, Igor Gilitschenski, Dhruv Batra, Andrew Szot, Harsh Agrawal: Housekeep: Tidying Virtual Households Using Commonsense Reasoning. ECCV (39) 2022: 355-373
- [i5] Yash Kant, Arun Ramachandran, Sriram Yenamandra, Igor Gilitschenski, Dhruv Batra, Andrew Szot, Harsh Agrawal: Housekeep: Tidying Virtual Households using Commonsense Reasoning. CoRR abs/2205.10712 (2022)
- [i4] Ashkan Mirzaei, Yash Kant, Jonathan Kelly, Igor Gilitschenski: LaTeRF: Label and Text Driven Object Radiance Fields. CoRR abs/2207.01583 (2022)

2021
- [c3] Aditya Bodi, Pooyan Fazli, Shasta Ihorn, Yue-Ting Siu, Andrew T. Scott, Lothar Narins, Yash Kant, Abhishek Das, Ilmi Yoon: Automated Video Description for Blind and Low Vision Users. CHI Extended Abstracts 2021: 230:1-230:7
- [c2] Yash Kant, Abhinav Moudgil, Dhruv Batra, Devi Parikh, Harsh Agrawal: Contrast and Classify: Training Robust VQA Models. ICCV 2021: 1584-1593

2020
- [c1] Yash Kant, Dhruv Batra, Peter Anderson, Alexander G. Schwing, Devi Parikh, Jiasen Lu, Harsh Agrawal: Spatially Aware Multimodal Transformers for TextVQA. ECCV (9) 2020: 715-732
- [i3] Yash Kant, Dhruv Batra, Peter Anderson, Alexander G. Schwing, Devi Parikh, Jiasen Lu, Harsh Agrawal: Spatially Aware Multimodal Transformers for TextVQA. CoRR abs/2007.12146 (2020)
- [i2] Yash Kant, Abhinav Moudgil, Dhruv Batra, Devi Parikh, Harsh Agrawal: Contrast and Classify: Alternate Training for Robust VQA. CoRR abs/2010.06087 (2020)

2010 – 2019

2019
- [i1] Harshal Mittal, Kartikey Pandey, Yash Kant: ICLR Reproducibility Challenge Report (Padam: Closing the Generalization Gap of Adaptive Gradient Methods in Training Deep Neural Networks). CoRR abs/1901.09517 (2019)
last updated on 2024-10-31 21:11 CET by the dblp team
all metadata released as open data under CC0 1.0 license