Arsha Nagrani
2020 – today
- 2024
- [j2] Jaesung Huh, Joon Son Chung, Arsha Nagrani, Andrew Brown, Jee-weon Jung, Daniel Garcia-Romero, Andrew Zisserman: The VoxCeleb Speaker Recognition Challenge: A Retrospective. IEEE ACM Trans. Audio Speech Lang. Process. 32: 3850-3866 (2024)
- [c44] Juhong Min, Shyamal Buch, Arsha Nagrani, Minsu Cho, Cordelia Schmid: MoReVQA: Exploring Modular Reasoning Models for Video Question Answering. CVPR 2024: 13235-13245
- [c43] Xi Chen, Josip Djolonga, Piotr Padlewski, Basil Mustafa, Soravit Changpinyo, Jialin Wu, Carlos Riquelme Ruiz, Sebastian Goodman, Xiao Wang, Yi Tay, Siamak Shakeri, Mostafa Dehghani, Daniel Salz, Mario Lucic, Michael Tschannen, Arsha Nagrani, Hexiang Hu, Mandar Joshi, Bo Pang, Ceslee Montgomery, Paulina Pietrzyk, Marvin Ritter, A. J. Piergiovanni, Matthias Minderer, Filip Pavetic, Austin Waters, Gang Li, Ibrahim Alabdulmohsin, Lucas Beyer, Julien Amelot, Kenton Lee, Andreas Peter Steiner, Yang Li, Daniel Keysers, Anurag Arnab, Yuanzhong Xu, Keran Rong, Alexander Kolesnikov, Mojtaba Seyedhosseini, Anelia Angelova, Xiaohua Zhai, Neil Houlsby, Radu Soricut: On Scaling Up a Multilingual Vision and Language Model. CVPR 2024: 14432-14444
- [c42] Tengda Han, Max Bain, Arsha Nagrani, Gül Varol, Weidi Xie, Andrew Zisserman: AutoAD III: The Prequel - Back to the Pixels. CVPR 2024: 18164-18174
- [c41] Xingyi Zhou, Anurag Arnab, Shyamal Buch, Shen Yan, Austin Myers, Xuehan Xiong, Arsha Nagrani, Cordelia Schmid: Streaming Dense Video Captioning. CVPR 2024: 18243-18252
- [c40] Kumara Kahatapitiya, Anurag Arnab, Arsha Nagrani, Michael S. Ryoo: VicTR: Video-conditioned Text Representations for Activity Recognition. CVPR 2024: 18547-18558
- [i58] Xingyi Zhou, Anurag Arnab, Shyamal Buch, Shen Yan, Austin Myers, Xuehan Xiong, Arsha Nagrani, Cordelia Schmid: Streaming Dense Video Captioning. CoRR abs/2404.01297 (2024)
- [i57] Juhong Min, Shyamal Buch, Arsha Nagrani, Minsu Cho, Cordelia Schmid: MoReVQA: Exploring Modular Reasoning Models for Video Question Answering. CoRR abs/2404.06511 (2024)
- [i56] Tengda Han, Max Bain, Arsha Nagrani, Gül Varol, Weidi Xie, Andrew Zisserman: AutoAD III: The Prequel - Back to the Pixels. CoRR abs/2404.14412 (2024)
- [i55] Junyu Xie, Tengda Han, Max Bain, Arsha Nagrani, Gül Varol, Weidi Xie, Andrew Zisserman: AutoAD-Zero: A Training-Free Framework for Zero-Shot Audio Description. CoRR abs/2407.15850 (2024)
- [i54] Gagan Jain, Nidhi Hegde, Aditya Kusupati, Arsha Nagrani, Shyamal Buch, Prateek Jain, Anurag Arnab, Sujoy Paul: Mixture of Nested Experts: Adaptive Processing of Visual Tokens. CoRR abs/2407.19985 (2024)
- [i53] Jaesung Huh, Joon Son Chung, Arsha Nagrani, Andrew Brown, Jee-weon Jung, Daniel Garcia-Romero, Andrew Zisserman: The VoxCeleb Speaker Recognition Challenge: A Retrospective. CoRR abs/2408.14886 (2024)
- 2023
- [c39] Sanjay Subramanian, Medhini Narasimhan, Kushal Khangaonkar, Kevin Yang, Arsha Nagrani, Cordelia Schmid, Andy Zeng, Trevor Darrell, Dan Klein: Modular Visual Question Answering via Code Generation. ACL (2) 2023: 747-761
- [c38] Antoine Yang, Arsha Nagrani, Paul Hongsuck Seo, Antoine Miech, Jordi Pont-Tuset, Ivan Laptev, Josef Sivic, Cordelia Schmid: Vid2Seq: Large-Scale Pretraining of a Visual Language Model for Dense Video Captioning. CVPR 2023: 10714-10726
- [c37] Tengda Han, Max Bain, Arsha Nagrani, Gül Varol, Weidi Xie, Andrew Zisserman: AutoAD: Movie Description in Context. CVPR 2023: 18930-18940
- [c36] Paul Hongsuck Seo, Arsha Nagrani, Cordelia Schmid: AVFormer: Injecting Vision into Frozen Speech Models for Zero-Shot AV-ASR. CVPR 2023: 22922-22931
- [c35] Shen Yan, Xuehan Xiong, Arsha Nagrani, Anurag Arnab, Zhonghao Wang, Weina Ge, David Ross, Cordelia Schmid: UnLoc: A Unified Framework for Video Localization Tasks. ICCV 2023: 13577-13587
- [c34] Tengda Han, Max Bain, Arsha Nagrani, Gül Varol, Weidi Xie, Andrew Zisserman: AutoAD II: The Sequel - Who, When, and What in Movie Audio Description. ICCV 2023: 13599-13609
- [c33] Liliane Momeni, Mathilde Caron, Arsha Nagrani, Andrew Zisserman, Cordelia Schmid: Verbs in Action: Improving verb understanding in video-language models. ICCV 2023: 15533-15545
- [c32] Taesik Gong, Josh Belanich, Krishna Somandepalli, Arsha Nagrani, Brian Eoff, Brendan Jou: LanSER: Language-Model Supported Speech Emotion Recognition. INTERSPEECH 2023: 2408-2412
- [c31] Antoine Yang, Arsha Nagrani, Ivan Laptev, Josef Sivic, Cordelia Schmid: VidChapters-7M: Video Chapters at Scale. NeurIPS 2023
- [i52] Jaesung Huh, Andrew Brown, Jee-weon Jung, Joon Son Chung, Arsha Nagrani, Daniel Garcia-Romero, Andrew Zisserman: VoxSRC 2022: The Fourth VoxCeleb Speaker Recognition Challenge. CoRR abs/2302.10248 (2023)
- [i51] Antoine Yang, Arsha Nagrani, Paul Hongsuck Seo, Antoine Miech, Jordi Pont-Tuset, Ivan Laptev, Josef Sivic, Cordelia Schmid: Vid2Seq: Large-Scale Pretraining of a Visual Language Model for Dense Video Captioning. CoRR abs/2302.14115 (2023)
- [i50] Paul Hongsuck Seo, Arsha Nagrani, Cordelia Schmid: AVFormer: Injecting Vision into Frozen Speech Models for Zero-Shot AV-ASR. CoRR abs/2303.16501 (2023)
- [i49] Tengda Han, Max Bain, Arsha Nagrani, Gül Varol, Weidi Xie, Andrew Zisserman: AutoAD: Movie Description in Context. CoRR abs/2303.16899 (2023)
- [i48] Kumara Kahatapitiya, Anurag Arnab, Arsha Nagrani, Michael S. Ryoo: VicTR: Video-conditioned Text Representations for Activity Recognition. CoRR abs/2304.02560 (2023)
- [i47] Liliane Momeni, Mathilde Caron, Arsha Nagrani, Andrew Zisserman, Cordelia Schmid: Verbs in Action: Improving verb understanding in video-language models. CoRR abs/2304.06708 (2023)
- [i46] Xi Chen, Josip Djolonga, Piotr Padlewski, Basil Mustafa, Soravit Changpinyo, Jialin Wu, Carlos Riquelme Ruiz, Sebastian Goodman, Xiao Wang, Yi Tay, Siamak Shakeri, Mostafa Dehghani, Daniel Salz, Mario Lucic, Michael Tschannen, Arsha Nagrani, Hexiang Hu, Mandar Joshi, Bo Pang, Ceslee Montgomery, Paulina Pietrzyk, Marvin Ritter, A. J. Piergiovanni, Matthias Minderer, Filip Pavetic, Austin Waters, Gang Li, Ibrahim Alabdulmohsin, Lucas Beyer, Julien Amelot, Kenton Lee, Andreas Peter Steiner, Yang Li, Daniel Keysers, Anurag Arnab, Yuanzhong Xu, Keran Rong, Alexander Kolesnikov, Mojtaba Seyedhosseini, Anelia Angelova, Xiaohua Zhai, Neil Houlsby, Radu Soricut: PaLI-X: On Scaling up a Multilingual Vision and Language Model. CoRR abs/2305.18565 (2023)
- [i45] Sanjay Subramanian, Medhini Narasimhan, Kushal Khangaonkar, Kevin Yang, Arsha Nagrani, Cordelia Schmid, Andy Zeng, Trevor Darrell, Dan Klein: Modular Visual Question Answering via Code Generation. CoRR abs/2306.05392 (2023)
- [i44] Shen Yan, Xuehan Xiong, Arsha Nagrani, Anurag Arnab, Zhonghao Wang, Weina Ge, David Ross, Cordelia Schmid: UnLoc: A Unified Framework for Video Localization Tasks. CoRR abs/2308.11062 (2023)
- [i43] Taesik Gong, Josh Belanich, Krishna Somandepalli, Arsha Nagrani, Brian Eoff, Brendan Jou: LanSER: Language-Model Supported Speech Emotion Recognition. CoRR abs/2309.03978 (2023)
- [i42] Antoine Yang, Arsha Nagrani, Ivan Laptev, Josef Sivic, Cordelia Schmid: VidChapters-7M: Video Chapters at Scale. CoRR abs/2309.13952 (2023)
- [i41] Tengda Han, Max Bain, Arsha Nagrani, Gül Varol, Weidi Xie, Andrew Zisserman: AutoAD II: The Sequel - Who, When, and What in Movie Audio Description. CoRR abs/2310.06838 (2023)
- [i40] Hammad A. Ayyubi, Tianqi Liu, Arsha Nagrani, Xudong Lin, Mingda Zhang, Anurag Arnab, Feng Han, Yukun Zhu, Jialu Liu, Shih-Fu Chang: Video Summarization: Towards Entity-Aware Captions. CoRR abs/2312.02188 (2023)
- 2022
- [c30] Paul Hongsuck Seo, Arsha Nagrani, Anurag Arnab, Cordelia Schmid: End-to-end Generative Pretraining for Multimodal Video Captioning. CVPR 2022: 17938-17947
- [c29] Arsha Nagrani, Paul Hongsuck Seo, Bryan Seybold, Anja Hauth, Santiago Manen, Chen Sun, Cordelia Schmid: Learning Audio-Video Modalities from Image Captions. ECCV (14) 2022: 407-426
- [c28] Medhini Narasimhan, Arsha Nagrani, Chen Sun, Michael Rubinstein, Trevor Darrell, Anna Rohrbach, Cordelia Schmid: TL;DW? Summarizing Instructional Videos with Task Relevance and Cross-Modal Saliency. ECCV (34) 2022: 540-557
- [c27] Valentin Gabeur, Paul Hongsuck Seo, Arsha Nagrani, Chen Sun, Karteek Alahari, Cordelia Schmid: AVATAR: Unconstrained Audiovisual Speech Recognition. INTERSPEECH 2022: 2818-2822
- [c26] Valentin Gabeur, Arsha Nagrani, Chen Sun, Karteek Alahari, Cordelia Schmid: Masking Modalities for Cross-modal Video Retrieval. WACV 2022: 2111-2120
- [i39] Andrew Brown, Jaesung Huh, Joon Son Chung, Arsha Nagrani, Andrew Zisserman: VoxSRC 2021: The Third VoxCeleb Speaker Recognition Challenge. CoRR abs/2201.04583 (2022)
- [i38] Paul Hongsuck Seo, Arsha Nagrani, Anurag Arnab, Cordelia Schmid: End-to-end Generative Pretraining for Multimodal Video Captioning. CoRR abs/2201.08264 (2022)
- [i37] Arsha Nagrani, Paul Hongsuck Seo, Bryan Seybold, Anja Hauth, Santiago Manen, Chen Sun, Cordelia Schmid: Learning Audio-Video Modalities from Image Captions. CoRR abs/2204.00679 (2022)
- [i36] Max Bain, Arsha Nagrani, Gül Varol, Andrew Zisserman: A CLIP-Hitchhiker's Guide to Long Video Retrieval. CoRR abs/2205.08508 (2022)
- [i35] Valentin Gabeur, Paul Hongsuck Seo, Arsha Nagrani, Chen Sun, Karteek Alahari, Cordelia Schmid: AVATAR: Unconstrained Audiovisual Speech Recognition. CoRR abs/2206.07684 (2022)
- [i34] Xuehan Xiong, Anurag Arnab, Arsha Nagrani, Cordelia Schmid: M&M Mix: A Multimodal Multiview Transformer Ensemble. CoRR abs/2206.09852 (2022)
- [i33] Medhini Narasimhan, Arsha Nagrani, Chen Sun, Michael Rubinstein, Trevor Darrell, Anna Rohrbach, Cordelia Schmid: TL;DW? Summarizing Instructional Videos with Task Relevance & Cross-Modal Saliency. CoRR abs/2208.06773 (2022)
- [i32] Paul Hongsuck Seo, Arsha Nagrani, Cordelia Schmid: AVATAR submission to the Ego4D AV Transcription Challenge. CoRR abs/2211.09966 (2022)
- 2021
- [c25] Triantafyllos Afouras, Honglie Chen, Weidi Xie, Arsha Nagrani, Andrea Vedaldi, Andrew Zisserman: Audio-Visual Synchronisation in the wild. BMVC 2021: 261
- [c24] Evangelos Kazakos, Jaesung Huh, Arsha Nagrani, Andrew Zisserman, Dima Damen: With a Little Help from my Temporal Context: Multimodal Egocentric Action Recognition. BMVC 2021: 268
- [c23] Honglie Chen, Weidi Xie, Triantafyllos Afouras, Arsha Nagrani, Andrea Vedaldi, Andrew Zisserman: Localizing Visual Sounds the Hard Way. CVPR 2021: 16867-16876
- [c22] Paul Hongsuck Seo, Arsha Nagrani, Cordelia Schmid: Look Before You Speak: Visually Contextualized Utterances. CVPR 2021: 16877-16887
- [c21] Evangelos Kazakos, Arsha Nagrani, Andrew Zisserman, Dima Damen: Slow-Fast Auditory Streams for Audio Recognition. ICASSP 2021: 855-859
- [c20] Andrew Brown, Jaesung Huh, Arsha Nagrani, Joon Son Chung, Andrew Zisserman: Playing a Part: Speaker Verification at the movies. ICASSP 2021: 6174-6178
- [c19] Max Bain, Arsha Nagrani, Gül Varol, Andrew Zisserman: Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval. ICCV 2021: 1708-1718
- [c18] Chen Sun, Arsha Nagrani, Yonglong Tian, Cordelia Schmid: Composable Augmentation Encoding for Video Representation Learning. ICCV 2021: 8814-8824
- [c17] Arsha Nagrani, Shan Yang, Anurag Arnab, Aren Jansen, Cordelia Schmid, Chen Sun: Attention Bottlenecks for Multimodal Fusion. NeurIPS 2021: 14200-14213
- [i31] Hazel Doughty, Nour Karessli, Kathryn Leonard, Boyi Li, Carianne Martinez, Azadeh Mobasher, Arsha Nagrani, Srishti Yadav: WiCV 2020: The Seventh Women In Computer Vision Workshop. CoRR abs/2101.03787 (2021)
- [i30] Evangelos Kazakos, Arsha Nagrani, Andrew Zisserman, Dima Damen: Slow-Fast Auditory Streams For Audio Recognition. CoRR abs/2103.03516 (2021)
- [i29] Chen Sun, Arsha Nagrani, Yonglong Tian, Cordelia Schmid: Composable Augmentation Encoding for Video Representation Learning. CoRR abs/2104.00616 (2021)
- [i28] Max Bain, Arsha Nagrani, Gül Varol, Andrew Zisserman: Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval. CoRR abs/2104.00650 (2021)
- [i27] Honglie Chen, Weidi Xie, Triantafyllos Afouras, Arsha Nagrani, Andrea Vedaldi, Andrew Zisserman: Localizing Visual Sounds the Hard Way. CoRR abs/2104.02691 (2021)
- [i26] Arsha Nagrani, Shan Yang, Anurag Arnab, Aren Jansen, Cordelia Schmid, Chen Sun: Attention Bottlenecks for Multimodal Fusion. CoRR abs/2107.00135 (2021)
- [i25] Evangelos Kazakos, Jaesung Huh, Arsha Nagrani, Andrew Zisserman, Dima Damen: With a Little Help from my Temporal Context: Multimodal Egocentric Action Recognition. CoRR abs/2111.01024 (2021)
- [i24] Valentin Gabeur, Arsha Nagrani, Chen Sun, Karteek Alahari, Cordelia Schmid: Masking Modalities for Cross-modal Video Retrieval. CoRR abs/2111.01300 (2021)
- [i23] Honglie Chen, Weidi Xie, Triantafyllos Afouras, Arsha Nagrani, Andrea Vedaldi, Andrew Zisserman: Audio-Visual Synchronisation in the wild. CoRR abs/2112.04432 (2021)
- 2020
- [b1] Arsha Nagrani: Video understanding using multimodal deep learning. University of Oxford, UK, 2020
- [j1] Arsha Nagrani, Joon Son Chung, Weidi Xie, Andrew Zisserman: Voxceleb: Large-scale speaker verification in the wild. Comput. Speech Lang. 60 (2020)
- [c16] Max Bain, Arsha Nagrani, Andrew Brown, Andrew Zisserman: Condensed Movies: Story Based Retrieval with Contextual Embeddings. ACCV (5) 2020: 460-479
- [c15] Arsha Nagrani, Chen Sun, David Ross, Rahul Sukthankar, Cordelia Schmid, Andrew Zisserman: Speech2Action: Cross-Modal Supervision for Action Recognition. CVPR 2020: 10314-10323
- [c14] Anurag Arnab, Chen Sun, Arsha Nagrani, Cordelia Schmid: Uncertainty-Aware Weakly Supervised Action Detection from Untrimmed Videos. ECCV (10) 2020: 751-768
- [c13] Arsha Nagrani, Joon Son Chung, Samuel Albanie, Andrew Zisserman: Disentangled Speech Embeddings Using Cross-Modal Self-Supervision. ICASSP 2020: 6829-6833
- [c12] Joon Son Chung, Jaesung Huh, Arsha Nagrani, Triantafyllos Afouras, Andrew Zisserman: Spot the Conversation: Speaker Diarisation in the Wild. INTERSPEECH 2020: 299-303
- [i22] Arsha Nagrani, Joon Son Chung, Samuel Albanie, Andrew Zisserman: Disentangled Speech Embeddings using Cross-modal Self-supervision. CoRR abs/2002.08742 (2020)
- [i21] Arsha Nagrani, Chen Sun, David Ross, Rahul Sukthankar, Cordelia Schmid, Andrew Zisserman: Speech2Action: Cross-modal Supervision for Action Recognition. CoRR abs/2003.13594 (2020)
- [i20] Max Bain, Arsha Nagrani, Andrew Brown, Andrew Zisserman: Condensed Movies: Story Based Retrieval with Contextual Embeddings. CoRR abs/2005.04208 (2020)
- [i19] Joon Son Chung, Jaesung Huh, Arsha Nagrani, Triantafyllos Afouras, Andrew Zisserman: Spot the conversation: speaker diarisation in the wild. CoRR abs/2007.01216 (2020)
- [i18] Anurag Arnab, Chen Sun, Arsha Nagrani, Cordelia Schmid: Uncertainty-Aware Weakly Supervised Action Detection from Untrimmed Videos. CoRR abs/2007.10703 (2020)
- [i17] Samuel Albanie, Yang Liu, Arsha Nagrani, Antoine Miech, Ernesto Coto, Ivan Laptev, Rahul Sukthankar, Bernard Ghanem, Andrew Zisserman, Valentin Gabeur, Chen Sun, Karteek Alahari, Cordelia Schmid, Shizhe Chen, Yida Zhao, Qin Jin, Kaixu Cui, Hui Liu, Chen Wang, Yudong Jiang, Xiaoshuai Hao: The End-of-End-to-End: A Video Understanding Pentathlon Challenge (2020). CoRR abs/2008.00744 (2020)
- [i16] Piyush Bagad, Aman Dalmia, Jigar Doshi, Arsha Nagrani, Parag Bhamare, Amrita Mahale, Saurabh Rane, Neeraj Agarwal, Rahul Panicker: Cough Against COVID: Evidence of COVID-19 Signature in Cough Sounds. CoRR abs/2009.08790 (2020)
- [i15] Andrew Brown, Jaesung Huh, Arsha Nagrani, Joon Son Chung, Andrew Zisserman: Playing a Part: Speaker Verification at the Movies. CoRR abs/2010.15716 (2020)
- [i14] Paul Hongsuck Seo, Arsha Nagrani, Cordelia Schmid: Look Before you Speak: Visually Contextualized Utterances. CoRR abs/2012.05710 (2020)
- [i13] Arsha Nagrani, Joon Son Chung, Jaesung Huh, Andrew Brown, Ernesto Coto, Weidi Xie, Mitchell McLaren, Douglas A. Reynolds, Andrew Zisserman: VoxSRC 2020: The Second VoxCeleb Speaker Recognition Challenge. CoRR abs/2012.06867 (2020)
2010 – 2019
- 2019
- [c11] Yang Liu, Samuel Albanie, Arsha Nagrani, Andrew Zisserman: Use What You Have: Video retrieval using representations from collaborative experts. BMVC 2019: 279
- [c10] Irene Amerini, Elena Balashova, Sayna Ebrahimi, Kathryn Leonard, Arsha Nagrani, Amaia Salvador: WiCV 2019: The Sixth Women In Computer Vision Workshop. CVPR Workshops 2019: 469-471
- [c9] Weidi Xie, Arsha Nagrani, Joon Son Chung, Andrew Zisserman: Utterance-level Aggregation for Speaker Recognition in the Wild. ICASSP 2019: 5791-5795
- [c8] Evangelos Kazakos, Arsha Nagrani, Andrew Zisserman, Dima Damen: EPIC-Fusion: Audio-Visual Temporal Binding for Egocentric Action Recognition. ICCV 2019: 5491-5500
- [c7] Max Bain, Arsha Nagrani, Daniel Schofield, Andrew Zisserman: Count, Crop and Recognise: Fine-Grained Recognition in the Wild. ICCV Workshops 2019: 236-246
- [i12] Weidi Xie, Arsha Nagrani, Joon Son Chung, Andrew Zisserman: Utterance-level Aggregation For Speaker Recognition In The Wild. CoRR abs/1902.10107 (2019)
- [i11] Yang Liu, Samuel Albanie, Arsha Nagrani, Andrew Zisserman: Use What You Have: Video Retrieval Using Representations From Collaborative Experts. CoRR abs/1907.13487 (2019)
- [i10] Evangelos Kazakos, Arsha Nagrani, Andrew Zisserman, Dima Damen: EPIC-Fusion: Audio-Visual Temporal Binding for Egocentric Action Recognition. CoRR abs/1908.08498 (2019)
- [i9] Max Bain, Arsha Nagrani, Daniel Schofield, Andrew Zisserman: Count, Crop and Recognise: Fine-Grained Recognition in the Wild. CoRR abs/1909.08950 (2019)
- [i8] Irene Amerini, Elena Balashova, Sayna Ebrahimi, Kathryn Leonard, Arsha Nagrani, Amaia Salvador: WiCV 2019: The Sixth Women In Computer Vision Workshop. CoRR abs/1909.10225 (2019)
- [i7] Joon Son Chung, Arsha Nagrani, Ernesto Coto, Weidi Xie, Mitchell McLaren, Douglas A. Reynolds, Andrew Zisserman: VoxSRC 2019: The first VoxCeleb Speaker Recognition Challenge. CoRR abs/1912.02522 (2019)
- 2018
- [c6] Arsha Nagrani, Samuel Albanie, Andrew Zisserman: Seeing Voices and Hearing Faces: Cross-Modal Biometric Matching. CVPR 2018: 8427-8436
- [c5] Arsha Nagrani, Samuel Albanie, Andrew Zisserman: Learnable PINs: Cross-modal Embeddings for Person Identity. ECCV (13) 2018: 73-89
- [c4] Joon Son Chung, Arsha Nagrani, Andrew Zisserman: VoxCeleb2: Deep Speaker Recognition. INTERSPEECH 2018: 1086-1090
- [c3] Samuel Albanie, Arsha Nagrani, Andrea Vedaldi, Andrew Zisserman: Emotion Recognition in Speech using Cross-Modal Transfer in the Wild. ACM Multimedia 2018: 292-301
- [i6] Arsha Nagrani, Andrew Zisserman: From Benedict Cumberbatch to Sherlock Holmes: Character Identification in TV series without a Script. CoRR abs/1801.10442 (2018)
- [i5] Arsha Nagrani, Samuel Albanie, Andrew Zisserman: Seeing Voices and Hearing Faces: Cross-modal biometric matching. CoRR abs/1804.00326 (2018)
- [i4] Arsha Nagrani, Samuel Albanie, Andrew Zisserman: Learnable PINs: Cross-Modal Embeddings for Person Identity. CoRR abs/1805.00833 (2018)
- [i3] Joon Son Chung, Arsha Nagrani, Andrew Zisserman: VoxCeleb2: Deep Speaker Recognition. CoRR abs/1806.05622 (2018)
- [i2] Samuel Albanie, Arsha Nagrani, Andrea Vedaldi, Andrew Zisserman: Emotion Recognition in Speech using Cross-Modal Transfer in the Wild. CoRR abs/1808.05561 (2018)
- 2017
- [c2] Arsha Nagrani, Andrew Zisserman: From Benedict Cumberbatch to Sherlock Holmes: Character Identification in TV series without a Script. BMVC 2017
- [c1] Arsha Nagrani, Joon Son Chung, Andrew Zisserman: VoxCeleb: A Large-Scale Speaker Identification Dataset. INTERSPEECH 2017: 2616-2620
- [i1] Arsha Nagrani, Joon Son Chung, Andrew Zisserman: VoxCeleb: a large-scale speaker identification dataset. CoRR abs/1706.08612 (2017)