


Joon Son Chung

2020 – today

2023
- [i56] Jaesung Huh, Andrew Brown, Jee-weon Jung, Joon Son Chung, Arsha Nagrani, Daniel Garcia-Romero, Andrew Zisserman: VoxSRC 2022: The Fourth VoxCeleb Speaker Recognition Challenge. CoRR abs/2302.10248 (2023)
- [i55] Jiyoung Lee, Joon Son Chung, Soo-Whan Chung: Imaginary Voice: Face-styled Diffusion Model for Text-to-Speech. CoRR abs/2302.13700 (2023)
- [i54] Youngjoon Jang, Youngtaek Oh, Jae-Won Cho, Myungchul Kim, Dong-Jin Kim, In So Kweon, Joon Son Chung: Self-Sufficient Framework for Continuous Sign Language Recognition. CoRR abs/2303.11771 (2023)
- [i53] Hyeonggon Ryu, Arda Senocak, In So Kweon, Joon Son Chung: Hindi as a Second Language: Improving Visually Grounded Speech with Semantically Similar Samples. CoRR abs/2303.17517 (2023)
- [i52] Youngjoon Jang, Kyeongha Rho, Jong-Bin Woo, Hyeongkeun Lee, Jihwan Park, Youshin Lim, Byeong-Yeol Kim, Joon Son Chung: That's What I Said: Fully-Controllable Talking Face Generation. CoRR abs/2304.03275 (2023)

2022
- [j6] Jingu Kang, Jaesung Huh, Hee Soo Heo, Joon Son Chung: Augmentation Adversarial Training for Self-Supervised Speaker Representation Learning. IEEE J. Sel. Top. Signal Process. 16(6): 1253-1262 (2022)
- [j5] Triantafyllos Afouras, Joon Son Chung, Andrew W. Senior, Oriol Vinyals, Andrew Zisserman: Deep Audio-Visual Speech Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 44(12): 8717-8727 (2022)
- [c41] Youngjoon Jang, Youngtaek Oh, Jae-Won Cho, Dong-Jin Kim, Joon Son Chung, In So Kweon: Signing Outside the Studio: Benchmarking Background Robustness for Continuous Sign Language Recognition. BMVC 2022: 322
- [c40] Jee-weon Jung, Hee-Soo Heo, Hemlata Tak, Hye-jin Shim, Joon Son Chung, Bong-Jin Lee, Ha-Jin Yu, Nicholas W. D. Evans: AASIST: Audio Anti-Spoofing Using Integrated Spectro-Temporal Graph Attention Networks. ICASSP 2022: 6367-6371
- [c39] Namkyu Jung, Geonmin Kim, Joon Son Chung: Spell My Name: Keyword Boosted Speech Recognition. ICASSP 2022: 6642-6646
- [c38] Youngki Kwon, Hee-Soo Heo, Jee-Weon Jung, You Jin Kim, Bong-Jin Lee, Joon Son Chung: Multi-Scale Speaker Embedding-Based Graph Attention Networks For Speaker Diarisation. ICASSP 2022: 8367-8371
- [c37] Jee-weon Jung, You Jin Kim, Hee-Soo Heo, Bong-Jin Lee, Youngki Kwon, Joon Son Chung: Pushing the limits of raw waveform speaker recognition. INTERSPEECH 2022: 2228-2232
- [c36] Hye-jin Shim, Hemlata Tak, Xuechen Liu, Hee-Soo Heo, Jee-weon Jung, Joon Son Chung, Soo-Whan Chung, Ha-Jin Yu, Bong-Jin Lee, Massimiliano Todisco, Héctor Delgado, Kong Aik Lee, Md. Sahidullah, Tomi Kinnunen, Nicholas W. D. Evans: Baseline Systems for the First Spoofing-Aware Speaker Verification Challenge: Score and Embedding Fusion. Odyssey 2022: 330-337
- [i51] Andrew Brown, Jaesung Huh, Joon Son Chung, Arsha Nagrani, Andrew Zisserman: VoxSRC 2021: The Third VoxCeleb Speaker Recognition Challenge. CoRR abs/2201.04583 (2022)
- [i50] Jee-weon Jung, You Jin Kim, Hee-Soo Heo, Bong-Jin Lee, Youngki Kwon, Joon Son Chung: Pushing the limits of raw waveform speaker recognition. CoRR abs/2203.08488 (2022)
- [i49] Hye-jin Shim, Hemlata Tak, Xuechen Liu, Hee-Soo Heo, Jee-weon Jung, Joon Son Chung, Soo-Whan Chung, Ha-Jin Yu, Bong-Jin Lee, Massimiliano Todisco, Héctor Delgado, Kong Aik Lee, Md. Sahidullah, Tomi Kinnunen, Nicholas W. D. Evans: Baseline Systems for the First Spoofing-Aware Speaker Verification Challenge: Score and Embedding Fusion. CoRR abs/2204.09976 (2022)
- [i48] Jee-weon Jung, Hee-Soo Heo, Bong-Jin Lee, Jaesong Lee, Hye-jin Shim, Youngki Kwon, Joon Son Chung, Shinji Watanabe: Large-scale learning of generalised representations for speaker recognition. CoRR abs/2210.10985 (2022)
- [i47] Jee-weon Jung, Hee-Soo Heo, Bong-Jin Lee, Jaesung Huh, Andrew Brown, Youngki Kwon, Shinji Watanabe, Joon Son Chung: In search of strong embedding extractors for speaker diarisation. CoRR abs/2210.14682 (2022)
- [i46] Kihyun Nam, Youkyum Kim, Hee Soo Heo, Jee-weon Jung, Joon Son Chung: Disentangled representation learning for multilingual speaker recognition. CoRR abs/2211.00437 (2022)
- [i45] Jaemin Jung, Youkyum Kim, Jihwan Park, Youshin Lim, Byeong-Yeol Kim, Youngjoon Jang, Joon Son Chung: Metric Learning for User-defined Keyword Spotting. CoRR abs/2211.00439 (2022)
- [i44] Youngjoon Jang, Youngtaek Oh, Jae-Won Cho, Dong-Jin Kim, Joon Son Chung, In So Kweon: Signing Outside the Studio: Benchmarking Background Robustness for Continuous Sign Language Recognition. CoRR abs/2211.00448 (2022)
- [i43] Sooyoung Park, Arda Senocak, Joon Son Chung: MarginNCE: Robust Sound Localization with a Negative Margin. CoRR abs/2211.01966 (2022)

2021
- [c35] Yoohwan Kwon, Hee-Soo Heo, Bong-Jin Lee, Joon Son Chung: The ins and outs of speaker recognition: lessons from VoxSRC 2020. ICASSP 2021: 5809-5813
- [c34] Jee-weon Jung, Hee-Soo Heo, Ha-Jin Yu, Joon Son Chung: Graph Attention Networks for Speaker Verification. ICASSP 2021: 6149-6153
- [c33] Andrew Brown, Jaesung Huh, Arsha Nagrani, Joon Son Chung, Andrew Zisserman: Playing a Part: Speaker Verification at the movies. ICASSP 2021: 6174-6178
- [c32] Jee-weon Jung, Hee-Soo Heo, Youngki Kwon, Joon Son Chung, Bong-Jin Lee: Three-Class Overlapped Speech Detection Using a Convolutional Recurrent Neural Network. Interspeech 2021: 3086-3090
- [c31] Youngki Kwon, Jee-weon Jung, Hee-Soo Heo, You Jin Kim, Bong-Jin Lee, Joon Son Chung: Adapting Speaker Embeddings for Speaker Diarisation. Interspeech 2021: 3101-3105
- [c30] You Jin Kim, Hee-Soo Heo, Soyeon Choe, Soo-Whan Chung, Yoohwan Kwon, Bong-Jin Lee, Youngki Kwon, Joon Son Chung: Look Who's Talking: Active Speaker Detection in the Wild. Interspeech 2021: 3675-3679
- [c29] Jaesung Huh, Minjae Lee, Heesoo Heo, Seongkyu Mun, Joon Son Chung: Metric Learning for Keyword Spotting. SLT 2021: 133-140
- [c28] Seong Min Kye, Joon Son Chung, Hoirin Kim: Supervised Attention for Speaker Recognition. SLT 2021: 286-293
- [c27] Seong Min Kye, Yoohwan Kwon, Joon Son Chung: Cross Attentive Pooling for Speaker Verification. SLT 2021: 294-300
- [c26] Youngki Kwon, Hee Soo Heo, Jaesung Huh, Bong-Jin Lee, Joon Son Chung: Look Who's Not Talking. SLT 2021: 567-573
- [i42] Jee-weon Jung, Hee-Soo Heo, Youngki Kwon, Joon Son Chung, Bong-Jin Lee: Three-class Overlapped Speech Detection using a Convolutional Recurrent Neural Network. CoRR abs/2104.02878 (2021)
- [i41] Youngki Kwon, Jee-weon Jung, Hee-Soo Heo, You Jin Kim, Bong-Jin Lee, Joon Son Chung: Adapting Speaker Embeddings for Speaker Diarisation. CoRR abs/2104.02879 (2021)
- [i40] You Jin Kim, Hee-Soo Heo, Soyeon Choe, Soo-Whan Chung, Yoohwan Kwon, Bong-Jin Lee, Youngki Kwon, Joon Son Chung: Look Who's Talking: Active Speaker Detection in the Wild. CoRR abs/2108.07640 (2021)
- [i39] Jee-weon Jung, Hee-Soo Heo, Hemlata Tak, Hye-jin Shim, Joon Son Chung, Bong-Jin Lee, Ha-Jin Yu, Nicholas W. D. Evans: AASIST: Audio Anti-Spoofing using Integrated Spectro-Temporal Graph Attention Networks. CoRR abs/2110.01200 (2021)
- [i38] Namkyu Jung, Geonmin Kim, Joon Son Chung: Spell my name: keyword boosted speech recognition. CoRR abs/2110.02791 (2021)
- [i37] Youngki Kwon, Hee-Soo Heo, Jee-weon Jung, You Jin Kim, Bong-Jin Lee, Joon Son Chung: Multi-scale speaker embedding-based graph attention networks for speaker diarisation. CoRR abs/2110.03361 (2021)
- [i36] You Jin Kim, Hee-Soo Heo, Jee-weon Jung, Youngki Kwon, Bong-Jin Lee, Joon Son Chung: Disentangled dimensionality reduction for noise-robust speaker diarisation. CoRR abs/2110.03380 (2021)

2020
- [j4] Arsha Nagrani, Joon Son Chung, Weidi Xie, Andrew Zisserman: Voxceleb: Large-scale speaker verification in the wild. Comput. Speech Lang. 60 (2020)
- [j3] Soo-Whan Chung, Joon Son Chung, Hong-Goo Kang: Perfect Match: Self-Supervised Embeddings for Cross-Modal Retrieval. IEEE J. Sel. Top. Signal Process. 14(3): 568-576 (2020)
- [c25] Samuel Albanie, Gül Varol, Liliane Momeni, Triantafyllos Afouras, Joon Son Chung, Neil Fox, Andrew Zisserman: BSL-1K: Scaling Up Co-articulated Sign Language Recognition Using Mouthing Cues. ECCV (11) 2020: 35-53
- [c24] Triantafyllos Afouras, Andrew Owens, Joon Son Chung, Andrew Zisserman: Self-supervised Learning of Audio-Visual Objects from Video. ECCV (18) 2020: 208-224
- [c23] Triantafyllos Afouras, Joon Son Chung, Andrew Zisserman: ASR is All You Need: Cross-Modal Distillation for Lip Reading. ICASSP 2020: 2143-2147
- [c22] Arsha Nagrani, Joon Son Chung, Samuel Albanie, Andrew Zisserman: Disentangled Speech Embeddings Using Cross-Modal Self-Supervision. ICASSP 2020: 6829-6833
- [c21] Seongkyu Mun, Soyeon Choe, Jaesung Huh, Joon Son Chung: The Sound of My Voice: Speaker Representation Loss for Target Voice Separation. ICASSP 2020: 7289-7293
- [c20] Joon Son Chung, Jaesung Huh, Arsha Nagrani, Triantafyllos Afouras, Andrew Zisserman: Spot the Conversation: Speaker Diarisation in the Wild. INTERSPEECH 2020: 299-303
- [c19] Triantafyllos Afouras, Joon Son Chung, Andrew Zisserman: Now You're Speaking My Language: Visual Language Identification. INTERSPEECH 2020: 2402-2406
- [c18] Joon Son Chung, Jaesung Huh, Seongkyu Mun, Minjae Lee, Hee-Soo Heo, Soyeon Choe, Chiheon Ham, Sunghwan Jung, Bong-Jin Lee, Icksang Han: In Defence of Metric Learning for Speaker Recognition. INTERSPEECH 2020: 2977-2981
- [c17] Soo-Whan Chung, Soyeon Choe, Joon Son Chung, Hong-Goo Kang: FaceFilter: Audio-Visual Speech Separation Using Still Images. INTERSPEECH 2020: 3481-3485
- [c16] Soo-Whan Chung, Hong-Goo Kang, Joon Son Chung: Seeing Voices and Hearing Voices: Learning Discriminative Embeddings Using Cross-Modal Self-Supervision. INTERSPEECH 2020: 3486-3490
- [c15] Joon Son Chung, Jaesung Huh, Seongkyu Mun: Delving into VoxCeleb: Environment Invariant Speaker Recognition. Odyssey 2020: 349-356
- [i35] Arsha Nagrani, Joon Son Chung, Samuel Albanie, Andrew Zisserman: Disentangled Speech Embeddings using Cross-modal Self-supervision. CoRR abs/2002.08742 (2020)
- [i34] Joon Son Chung, Jaesung Huh, Seongkyu Mun, Minjae Lee, Hee Soo Heo, Soyeon Choe, Chiheon Ham, Sunghwan Jung, Bong-Jin Lee, Icksang Han: In defence of metric learning for speaker recognition. CoRR abs/2003.11982 (2020)
- [i33] Soo-Whan Chung, Hong-Goo Kang, Joon Son Chung: Seeing voices and hearing voices: learning discriminative embeddings using cross-modal self-supervision. CoRR abs/2004.14326 (2020)
- [i32] Soo-Whan Chung, Soyeon Choe, Joon Son Chung, Hong-Goo Kang: FaceFilter: Audio-visual speech separation using still images. CoRR abs/2005.07074 (2020)
- [i31] Jaesung Huh, Minjae Lee, Heesoo Heo, Seongkyu Mun, Joon Son Chung: Metric Learning for Keyword Spotting. CoRR abs/2005.08776 (2020)
- [i30] Joon Son Chung, Jaesung Huh, Arsha Nagrani, Triantafyllos Afouras, Andrew Zisserman: Spot the conversation: speaker diarisation in the wild. CoRR abs/2007.01216 (2020)
- [i29] Jaesung Huh, Hee Soo Heo, Jingu Kang, Shinji Watanabe, Joon Son Chung: Augmentation adversarial training for unsupervised speaker recognition. CoRR abs/2007.12085 (2020)
- [i28] Samuel Albanie, Gül Varol, Liliane Momeni, Triantafyllos Afouras, Joon Son Chung, Neil Fox, Andrew Zisserman: BSL-1K: Scaling up co-articulated sign language recognition using mouthing cues. CoRR abs/2007.12131 (2020)
- [i27] Triantafyllos Afouras, Andrew Owens, Joon Son Chung, Andrew Zisserman: Self-Supervised Learning of Audio-Visual Objects from Video. CoRR abs/2008.04237 (2020)
- [i26] Seong Min Kye, Yoohwan Kwon, Joon Son Chung: Cross attentive pooling for speaker verification. CoRR abs/2008.05983 (2020)
- [i25] Hee Soo Heo, Bong-Jin Lee, Jaesung Huh, Joon Son Chung: Clova Baseline System for the VoxCeleb Speaker Recognition Challenge 2020. CoRR abs/2009.14153 (2020)
- [i24] Jee-weon Jung, Hee-Soo Heo, Ha-Jin Yu, Joon Son Chung: Graph Attention Networks for Speaker Verification. CoRR abs/2010.11543 (2020)
- [i23] Andrew Brown, Jaesung Huh, Arsha Nagrani, Joon Son Chung, Andrew Zisserman: Playing a Part: Speaker Verification at the Movies. CoRR abs/2010.15716 (2020)
- [i22] Yoohwan Kwon, Hee-Soo Heo, Bong-Jin Lee, Joon Son Chung: The ins and outs of speaker recognition: lessons from VoxSRC 2020. CoRR abs/2010.15809 (2020)
- [i21] Seong Min Kye, Joon Son Chung, Hoirin Kim: Supervised attention for speaker recognition. CoRR abs/2011.05189 (2020)
- [i20] Youngki Kwon, Hee Soo Heo, Jaesung Huh, Bong-Jin Lee, Joon Son Chung: Look who's not talking. CoRR abs/2011.14885 (2020)
- [i19] Arsha Nagrani, Joon Son Chung, Jaesung Huh, Andrew Brown, Ernesto Coto, Weidi Xie, Mitchell McLaren, Douglas A. Reynolds, Andrew Zisserman: VoxSRC 2020: The Second VoxCeleb Speaker Recognition Challenge. CoRR abs/2012.06867 (2020)

2010 – 2019

2019
- [j2] Amir Jamaludin, Joon Son Chung, Andrew Zisserman: You Said That?: Synthesising Talking Faces from Audio. Int. J. Comput. Vis. 127(11-12): 1767-1779 (2019)
- [c14] Soo-Whan Chung, Joon Son Chung, Hong-Goo Kang: Perfect Match: Improved Cross-modal Embeddings for Audio-visual Synchronisation. ICASSP 2019: 3965-3969
- [c13] Weidi Xie, Arsha Nagrani, Joon Son Chung, Andrew Zisserman: Utterance-level Aggregation for Speaker Recognition in the Wild. ICASSP 2019: 5791-5795
- [c12] Joon Son Chung, Bong-Jin Lee, Icksang Han: Who Said That?: Audio-Visual Speaker Diarisation of Real-World Meetings. INTERSPEECH 2019: 371-375
- [c11] Triantafyllos Afouras, Joon Son Chung, Andrew Zisserman: My Lips Are Concealed: Audio-Visual Speech Enhancement Through Obstructions. INTERSPEECH 2019: 4295-4299
- [i18] Weidi Xie, Arsha Nagrani, Joon Son Chung, Andrew Zisserman: Utterance-level Aggregation For Speaker Recognition In The Wild. CoRR abs/1902.10107 (2019)
- [i17] Joon Son Chung, Bong-Jin Lee, Icksang Han: Who said that?: Audio-visual speaker diarisation of real-world meetings. CoRR abs/1906.10042 (2019)
- [i16] Joon Son Chung: Naver at ActivityNet Challenge 2019 - Task B Active Speaker Detection (AVA). CoRR abs/1906.10555 (2019)
- [i15] Triantafyllos Afouras, Joon Son Chung, Andrew Zisserman: My lips are concealed: Audio-visual speech enhancement through obstructions. CoRR abs/1907.04975 (2019)
- [i14] Joon Son Chung, Jaesung Huh, Seongkyu Mun: Delving into VoxCeleb: environment invariant speaker recognition. CoRR abs/1910.11238 (2019)
- [i13] Seongkyu Mun, Soyeon Choe, Jaesung Huh, Joon Son Chung: The sound of my voice: speaker representation loss for target voice separation. CoRR abs/1911.02411 (2019)
- [i12] Triantafyllos Afouras, Joon Son Chung, Andrew Zisserman: ASR is all you need: cross-modal distillation for lip reading. CoRR abs/1911.12747 (2019)
- [i11] Joon Son Chung, Arsha Nagrani, Ernesto Coto, Weidi Xie, Mitchell McLaren, Douglas A. Reynolds, Andrew Zisserman: VoxSRC 2019: The first VoxCeleb Speaker Recognition Challenge. CoRR abs/1912.02522 (2019)

2018
- [j1] Joon Son Chung, Andrew Zisserman: Learning to lip read words by watching videos. Comput. Vis. Image Underst. 173: 76-85 (2018)
- [c10] Joon Son Chung, Arsha Nagrani, Andrew Zisserman: VoxCeleb2: Deep Speaker Recognition. INTERSPEECH 2018: 1086-1090
- [c9] Triantafyllos Afouras, Joon Son Chung, Andrew Zisserman: The Conversation: Deep Audio-Visual Speech Enhancement. INTERSPEECH 2018: 3244-3248
- [c8] Triantafyllos Afouras, Joon Son Chung, Andrew Zisserman: Deep Lip Reading: A Comparison of Models and an Online Application. INTERSPEECH 2018: 3514-3518
- [i10] Triantafyllos Afouras, Joon Son Chung, Andrew Zisserman: The Conversation: Deep Audio-Visual Speech Enhancement. CoRR abs/1804.04121 (2018)
- [i9] Joon Son Chung, Arsha Nagrani, Andrew Zisserman: VoxCeleb2: Deep Speaker Recognition. CoRR abs/1806.05622 (2018)
- [i8] Triantafyllos Afouras, Joon Son Chung, Andrew Zisserman: Deep Lip Reading: a comparison of models and an online application. CoRR abs/1806.06053 (2018)
- [i7] Triantafyllos Afouras, Joon Son Chung, Andrew Zisserman: LRS3-TED: a large-scale dataset for visual speech recognition. CoRR abs/1809.00496 (2018)
- [i6] Triantafyllos Afouras, Joon Son Chung, Andrew W. Senior, Oriol Vinyals, Andrew Zisserman: Deep Audio-Visual Speech Recognition. CoRR abs/1809.02108 (2018)
- [i5] Soo-Whan Chung, Joon Son Chung, Hong-Goo Kang: Perfect match: Improved cross-modal embeddings for audio-visual synchronisation. CoRR abs/1809.08001 (2018)

2017
- [b1] Joon Son Chung: Visual recognition of human communication. University of Oxford, UK, 2017
- [c7] Joon Son Chung, Amir Jamaludin, Andrew Zisserman: You said that? BMVC 2017
- [c6] Joon Son Chung, Andrew Zisserman: Lip Reading in Profile. BMVC 2017
- [c5] Joon Son Chung, Andrew W. Senior, Oriol Vinyals, Andrew Zisserman: Lip Reading Sentences in the Wild. CVPR 2017: 3444-3453
- [c4] Arsha Nagrani, Joon Son Chung, Andrew Zisserman: VoxCeleb: A Large-Scale Speaker Identification Dataset. INTERSPEECH 2017: 2616-2620
- [i4] Joon Son Chung, Amir Jamaludin, Andrew Zisserman: You said that? CoRR abs/1705.02966 (2017)
- [i3] Arsha Nagrani, Joon Son Chung, Andrew Zisserman: VoxCeleb: a large-scale speaker identification dataset. CoRR abs/1706.08612 (2017)

2016
- [c3] Joon Son Chung, Andrew Zisserman: Lip Reading in the Wild. ACCV (2) 2016: 87-103
- [c2] Joon Son Chung, Andrew Zisserman: Out of Time: Automated Lip Sync in the Wild. ACCV Workshops (2) 2016: 251-263
- [i2] Joon Son Chung, Andrew Zisserman: Signs in time: Encoding human motion as a temporal image. CoRR abs/1608.02059 (2016)
- [i1] Joon Son Chung, Andrew W. Senior, Oriol Vinyals, Andrew Zisserman: Lip Reading Sentences in the Wild. CoRR abs/1611.05358 (2016)

2014
- [c1] Joon Son Chung, Relja Arandjelovic, Giles Bergel, Alexandra Franklin, Andrew Zisserman: Re-presentations of Art Collections. ECCV Workshops (1) 2014: 85-100

last updated on 2023-04-19 23:19 CEST by the dblp team
all metadata released as open data under CC0 1.0 license