


Triantafyllos Afouras
2020 – today

- 2022
- [j2] Gül Varol, Liliane Momeni, Samuel Albanie, Triantafyllos Afouras, Andrew Zisserman: Scaling Up Sign Spotting Through Sign Language Dictionaries. Int. J. Comput. Vis. 130(6): 1416-1439 (2022)
- [j1] Triantafyllos Afouras, Joon Son Chung, Andrew W. Senior, Oriol Vinyals, Andrew Zisserman: Deep Audio-Visual Speech Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 44(12): 8717-8727 (2022)
- [c22] K. R. Prajwal, Triantafyllos Afouras, Andrew Zisserman: Sub-word Level Lip Reading With Visual Attention. CVPR 2022: 5152-5162
- [c21] Akam Rahimi, Triantafyllos Afouras, Andrew Zisserman: Reading to Listen at the Cocktail Party: Multi-Modal Speech Separation. CVPR 2022: 10483-10492
- [c20] Triantafyllos Afouras, Yuki M. Asano, Francois Fagan, Andrea Vedaldi, Florian Metze: Self-supervised object detection from audio-visual correspondence. CVPR 2022: 10565-10576
- [i21] Gül Varol, Liliane Momeni, Samuel Albanie, Triantafyllos Afouras, Andrew Zisserman: Scaling up sign spotting through sign language dictionaries. CoRR abs/2205.04152 (2022)
- 2021
- [c19] Triantafyllos Afouras, Honglie Chen, Weidi Xie, Arsha Nagrani, Andrea Vedaldi, Andrew Zisserman: Audio-Visual Synchronisation in the wild. BMVC 2021: 261
- [c18] K. R. Prajwal, Liliane Momeni, Triantafyllos Afouras, Andrew Zisserman: Visual Keyword Spotting with Attention. BMVC 2021: 380
- [c17] Gül Varol, Liliane Momeni, Samuel Albanie, Triantafyllos Afouras, Andrew Zisserman: Read and Attend: Temporal Localisation in Sign Language Videos. CVPR 2021: 16857-16866
- [c16] Honglie Chen, Weidi Xie, Triantafyllos Afouras, Arsha Nagrani, Andrea Vedaldi, Andrew Zisserman: Localizing Visual Sounds the Hard Way. CVPR 2021: 16867-16876
- [c15] Samuel Albanie, Gül Varol, Liliane Momeni, Triantafyllos Afouras, Andrew Brown, Chuhan Zhang, Ernesto Coto, Necati Cihan Camgöz, Ben Saunders, Abhishek Dutta, Neil Fox, Richard Bowden, Bencie Woll, Andrew Zisserman: SeeHear: Signer Diarisation and a New Dataset. ICASSP 2021: 2280-2284
- [c14] Hannah Bull, Triantafyllos Afouras, Gül Varol, Samuel Albanie, Liliane Momeni, Andrew Zisserman: Aligning Subtitles in Sign Language Videos. ICCV 2021: 11532-11541
- [i20] Gül Varol, Liliane Momeni, Samuel Albanie, Triantafyllos Afouras, Andrew Zisserman: Read and Attend: Temporal Localisation in Sign Language Videos. CoRR abs/2103.16481 (2021)
- [i19] Honglie Chen, Weidi Xie, Triantafyllos Afouras, Arsha Nagrani, Andrea Vedaldi, Andrew Zisserman: Localizing Visual Sounds the Hard Way. CoRR abs/2104.02691 (2021)
- [i18] Triantafyllos Afouras, Yuki Markus Asano, Francois Fagan, Andrea Vedaldi, Florian Metze: Self-supervised object detection from audio-visual correspondence. CoRR abs/2104.06401 (2021)
- [i17] Hannah Bull, Triantafyllos Afouras, Gül Varol, Samuel Albanie, Liliane Momeni, Andrew Zisserman: Aligning Subtitles in Sign Language Videos. CoRR abs/2105.02877 (2021)
- [i16] K. R. Prajwal, Triantafyllos Afouras, Andrew Zisserman: Sub-word Level Lip Reading With Visual Attention. CoRR abs/2110.07603 (2021)
- [i15] K. R. Prajwal, Liliane Momeni, Triantafyllos Afouras, Andrew Zisserman: Visual Keyword Spotting with Attention. CoRR abs/2110.15957 (2021)
- [i14] Samuel Albanie, Gül Varol, Liliane Momeni, Hannah Bull, Triantafyllos Afouras, Himel Chowdhury, Neil Fox, Bencie Woll, Rob Cooper, Andrew McParland, Andrew Zisserman: BBC-Oxford British Sign Language Dataset. CoRR abs/2111.03635 (2021)
- [i13] Honglie Chen, Weidi Xie, Triantafyllos Afouras, Arsha Nagrani, Andrea Vedaldi, Andrew Zisserman: Audio-Visual Synchronisation in the wild. CoRR abs/2112.04432 (2021)
- 2020
- [c13] Liliane Momeni, Gül Varol, Samuel Albanie, Triantafyllos Afouras, Andrew Zisserman: Watch, Read and Lookup: Learning to Spot Signs from Multiple Supervisors. ACCV (6) 2020: 291-308
- [c12] Liliane Momeni, Triantafyllos Afouras, Themos Stafylakis, Samuel Albanie, Andrew Zisserman: Seeing wake words: Audio-visual Keyword Spotting. BMVC 2020
- [c11] Samuel Albanie, Gül Varol, Liliane Momeni, Triantafyllos Afouras, Joon Son Chung, Neil Fox, Andrew Zisserman: BSL-1K: Scaling Up Co-articulated Sign Language Recognition Using Mouthing Cues. ECCV (11) 2020: 35-53
- [c10] Triantafyllos Afouras, Andrew Owens, Joon Son Chung, Andrew Zisserman: Self-supervised Learning of Audio-Visual Objects from Video. ECCV (18) 2020: 208-224
- [c9] Triantafyllos Afouras, Joon Son Chung, Andrew Zisserman: ASR is All You Need: Cross-Modal Distillation for Lip Reading. ICASSP 2020: 2143-2147
- [c8] Joon Son Chung, Jaesung Huh, Arsha Nagrani, Triantafyllos Afouras, Andrew Zisserman: Spot the Conversation: Speaker Diarisation in the Wild. INTERSPEECH 2020: 299-303
- [c7] Triantafyllos Afouras, Joon Son Chung, Andrew Zisserman: Now You're Speaking My Language: Visual Language Identification. INTERSPEECH 2020: 2402-2406
- [i12] Joon Son Chung, Jaesung Huh, Arsha Nagrani, Triantafyllos Afouras, Andrew Zisserman: Spot the conversation: speaker diarisation in the wild. CoRR abs/2007.01216 (2020)
- [i11] Samuel Albanie, Gül Varol, Liliane Momeni, Triantafyllos Afouras, Joon Son Chung, Neil Fox, Andrew Zisserman: BSL-1K: Scaling up co-articulated sign language recognition using mouthing cues. CoRR abs/2007.12131 (2020)
- [i10] Triantafyllos Afouras, Andrew Owens, Joon Son Chung, Andrew Zisserman: Self-Supervised Learning of Audio-Visual Objects from Video. CoRR abs/2008.04237 (2020)
- [i9] Liliane Momeni, Triantafyllos Afouras, Themos Stafylakis, Samuel Albanie, Andrew Zisserman: Seeing wake words: Audio-visual Keyword Spotting. CoRR abs/2009.01225 (2020)
- [i8] Liliane Momeni, Gül Varol, Samuel Albanie, Triantafyllos Afouras, Andrew Zisserman: Watch, read and lookup: learning to spot signs from multiple supervisors. CoRR abs/2010.04002 (2020)

2010 – 2019

- 2019
- [c6] Triantafyllos Afouras, Joon Son Chung, Andrew Zisserman: My Lips Are Concealed: Audio-Visual Speech Enhancement Through Obstructions. INTERSPEECH 2019: 4295-4299
- [i7] Triantafyllos Afouras, Joon Son Chung, Andrew Zisserman: My lips are concealed: Audio-visual speech enhancement through obstructions. CoRR abs/1907.04975 (2019)
- [i6] Triantafyllos Afouras, Joon Son Chung, Andrew Zisserman: ASR is all you need: cross-modal distillation for lip reading. CoRR abs/1911.12747 (2019)
- 2018
- [c5] Jakob N. Foerster, Gregory Farquhar, Triantafyllos Afouras, Nantas Nardelli, Shimon Whiteson: Counterfactual Multi-Agent Policy Gradients. AAAI 2018: 2974-2982
- [c4] Triantafyllos Afouras, Joon Son Chung, Andrew Zisserman: The Conversation: Deep Audio-Visual Speech Enhancement. INTERSPEECH 2018: 3244-3248
- [c3] Triantafyllos Afouras, Joon Son Chung, Andrew Zisserman: Deep Lip Reading: A Comparison of Models and an Online Application. INTERSPEECH 2018: 3514-3518
- [i5] Triantafyllos Afouras, Joon Son Chung, Andrew Zisserman: The Conversation: Deep Audio-Visual Speech Enhancement. CoRR abs/1804.04121 (2018)
- [i4] Triantafyllos Afouras, Joon Son Chung, Andrew Zisserman: Deep Lip Reading: a comparison of models and an online application. CoRR abs/1806.06053 (2018)
- [i3] Triantafyllos Afouras, Joon Son Chung, Andrew Zisserman: LRS3-TED: a large-scale dataset for visual speech recognition. CoRR abs/1809.00496 (2018)
- [i2] Triantafyllos Afouras, Joon Son Chung, Andrew W. Senior, Oriol Vinyals, Andrew Zisserman: Deep Audio-Visual Speech Recognition. CoRR abs/1809.02108 (2018)
- 2017
- [c2] Jakob N. Foerster, Nantas Nardelli, Gregory Farquhar, Triantafyllos Afouras, Philip H. S. Torr, Pushmeet Kohli, Shimon Whiteson: Stabilising Experience Replay for Deep Multi-Agent Reinforcement Learning. ICML 2017: 1146-1155
- [i1] Jakob N. Foerster, Gregory Farquhar, Triantafyllos Afouras, Nantas Nardelli, Shimon Whiteson: Counterfactual Multi-Agent Policy Gradients. CoRR abs/1705.08926 (2017)
- 2015
- [c1] Matthias Thoma, Triantafyllos Afouras, Torsten Braun: An Application-Layer Restful Sleepy Nodes Implementation for Internet of Things Systems. WMNC 2015: 16-23