Brendan Jou
2020 – today
- 2024
  - [c22] Krishna Somandepalli, Oliver Siy, Brendan Jou: Relational Affect in Dyadic Interactions. CHI Extended Abstracts 2024: 526:1-526:9
  - [i14] Gwanghyun Kim, Alonso Martinez, Yu-Chuan Su, Brendan Jou, José Lezama, Agrim Gupta, Lijun Yu, Lu Jiang, Aren Jansen, Jacob Walker, Krishna Somandepalli: A Versatile Diffusion Transformer with Mixture of Noise Levels for Audiovisual Generation. CoRR abs/2405.13762 (2024)
- 2023
  - [c21] Taesik Gong, Josh Belanich, Krishna Somandepalli, Arsha Nagrani, Brian Eoff, Brendan Jou: LanSER: Language-Model Supported Speech Emotion Recognition. INTERSPEECH 2023: 2408-2412
  - [i13] Taesik Gong, Josh Belanich, Krishna Somandepalli, Arsha Nagrani, Brian Eoff, Brendan Jou: LanSER: Language-Model Supported Speech Emotion Recognition. CoRR abs/2309.03978 (2023)
- 2022
  - [c20] Krishna Somandepalli, Hang Qi, Brian Eoff, Alan Cowen, Kartik Audhkhasi, Josh Belanich, Brendan Jou: Federated Learning for Affective Computing Tasks. ACII 2022: 1-8
  - [c19] Asma Ghandeharioun, Been Kim, Chun-Liang Li, Brendan Jou, Brian Eoff, Rosalind W. Picard: DISSECT: Disentangled Simultaneous Explanations via Concept Traversals. ICLR 2022
  - [i12] Josh Belanich, Krishna Somandepalli, Brian Eoff, Brendan Jou: Multitask vocal burst modeling with ResNets and pre-trained paralinguistic Conformers. CoRR abs/2206.12494 (2022)
- 2021
  - [c18] Mingda Zhang, Chun-Te Chu, Andrey Zhmoginov, Andrew Howard, Brendan Jou, Yukun Zhu, Li Zhang, Rebecca Hwa, Adriana Kovashka: BasisNet: Two-Stage Model Synthesis for Efficient Inference. CVPR Workshops 2021: 3081-3090
  - [i11] Mingda Zhang, Chun-Te Chu, Andrey Zhmoginov, Andrew G. Howard, Brendan Jou, Yukun Zhu, Li Zhang, Rebecca Hwa, Adriana Kovashka: BasisNet: Two-stage Model Synthesis for Efficient Inference. CoRR abs/2105.03014 (2021)
  - [i10] Asma Ghandeharioun, Been Kim, Chun-Liang Li, Brendan Jou, Brian Eoff, Rosalind W. Picard: DISSECT: Disentangled Simultaneous Explanations via Concept Traversals. CoRR abs/2105.15164 (2021)
2010 – 2019
- 2019
  - [c17] Asma Ghandeharioun, Brian Eoff, Brendan Jou, Rosalind W. Picard: Characterizing Sources of Uncertainty to Proxy Calibration and Disambiguate Annotator and Data Bias. ICCV Workshops 2019: 4202-4206
  - [i9] Asma Ghandeharioun, Brian Eoff, Brendan Jou, Rosalind W. Picard: Characterizing Sources of Uncertainty to Proxy Calibration and Disambiguate Annotator and Data Bias. CoRR abs/1909.09285 (2019)
- 2018
  - [c16] Víctor Campos, Brendan Jou, Xavier Giró-i-Nieto, Jordi Torres, Shih-Fu Chang: Skip RNN: Learning to Skip State Updates in Recurrent Neural Networks. ICLR (Poster) 2018
- 2017
  - [j3] Nikolaos Pappas, Miriam Redi, Mercan Topkara, Hongyi Liu, Brendan Jou, Tao Chen, Shih-Fu Chang: Multilingual visual sentiment concept clustering and analysis. Int. J. Multim. Inf. Retr. 6(1): 51-70 (2017)
  - [j2] Mohammad Soleymani, David García, Brendan Jou, Björn W. Schuller, Shih-Fu Chang, Maja Pantic: A survey of multimodal sentiment analysis. Image Vis. Comput. 65: 3-14 (2017)
  - [j1] Victor Campos, Brendan Jou, Xavier Giró-i-Nieto: From pixels to sentiment: Fine-tuning CNNs for visual sentiment prediction. Image Vis. Comput. 65: 15-22 (2017)
  - [c15] Delia Fernandez, Alejandro Woodward, Victor Campos, Xavier Giró-i-Nieto, Brendan Jou, Shih-Fu Chang: More Cat than Cute?: Interpretable Prediction of Adjective-Noun Pairs. MUSA2@MM 2017: 61-69
  - [i8] Delia Fernandez, Alejandro Woodward, Victor Campos, Xavier Giró-i-Nieto, Brendan Jou, Shih-Fu Chang: More cat than cute? Interpretable Prediction of Adjective-Noun Pairs. CoRR abs/1708.06039 (2017)
  - [i7] Victor Campos, Brendan Jou, Xavier Giró-i-Nieto, Jordi Torres, Shih-Fu Chang: Skip RNN: Learning to Skip State Updates in Recurrent Neural Networks. CoRR abs/1708.06834 (2017)
- 2016
  - [b1] Brendan Jou: Large-scale Affective Computing for Visual Multimedia. Columbia University, USA, 2016
  - [c14] Nikolaos Pappas, Miriam Redi, Mercan Topkara, Brendan Jou, Hongyi Liu, Tao Chen, Shih-Fu Chang: Multilingual Visual Sentiment Concept Matching. ICMR 2016: 151-158
  - [c13] Brendan Jou, Margaret Yuying Qian, Shih-Fu Chang: SentiCart: Cartography and Geo-contextualization for Multilingual Visual Sentiment. ICMR 2016: 389-392
  - [c12] Hongyi Liu, Brendan Jou, Tao Chen, Mercan Topkara, Nikolaos Pappas, Miriam Redi, Shih-Fu Chang: Complura: Exploring and Leveraging a Large-scale Multilingual Visual Sentiment Ontology. ICMR 2016: 417-420
  - [c11] Brendan Jou, Shih-Fu Chang: Deep Cross Residual Learning for Multitask Visual Recognition. ACM Multimedia 2016: 998-1007
  - [c10] Bingchen Gong, Brendan Jou, Felix X. Yu, Shih-Fu Chang: Tamp: A Library for Compact Deep Neural Networks with Structured Matrices. ACM Multimedia 2016: 1206-1209
  - [i6] Brendan Jou, Shih-Fu Chang: Deep Cross Residual Learning for Multitask Visual Recognition. CoRR abs/1604.01335 (2016)
  - [i5] Victor Campos, Brendan Jou, Xavier Giró-i-Nieto: From Pixels to Sentiment: Fine-tuning CNNs for Visual Sentiment Prediction. CoRR abs/1604.03489 (2016)
  - [i4] Brendan Jou, Shih-Fu Chang: Going Deeper for Multilingual Visual Sentiment Detection. CoRR abs/1605.09211 (2016)
  - [i3] Nikolaos Pappas, Miriam Redi, Mercan Topkara, Brendan Jou, Hongyi Liu, Tao Chen, Shih-Fu Chang: Multilingual Visual Sentiment Concept Matching. CoRR abs/1606.02276 (2016)
- 2015
  - [c9] Victor Campos, Amaia Salvador, Xavier Giró-i-Nieto, Brendan Jou: Diving Deep into Sentiment: Understanding Fine-tuned CNNs for Visual Sentiment Prediction. ASM@ACM Multimedia 2015: 57-62
  - [c8] Brendan Jou, Tao Chen, Nikolaos Pappas, Miriam Redi, Mercan Topkara, Shih-Fu Chang: Visual Affect Around the World: A Large-scale Multilingual Visual Sentiment Ontology. ACM Multimedia 2015: 159-168
  - [i2] Brendan Jou, Tao Chen, Nikolaos Pappas, Miriam Redi, Mercan Topkara, Shih-Fu Chang: Visual Affect Around the World: A Large-scale Multilingual Visual Sentiment Ontology. CoRR abs/1508.03868 (2015)
  - [i1] Victor Campos, Amaia Salvador, Brendan Jou, Xavier Giró-i-Nieto: Diving Deep into Sentiment: Understanding Fine-tuned CNNs for Visual Sentiment Prediction. CoRR abs/1508.05056 (2015)
- 2014
  - [c7] Joseph G. Ellis, Brendan Jou, Shih-Fu Chang: Why We Watch the News: A Dataset for Exploring Sentiment in Broadcast Video News. ICMI 2014: 104-111
  - [c6] Brendan Jou, Subhabrata Bhattacharya, Shih-Fu Chang: Predicting Viewer Perceived Emotions in Animated GIFs. ACM Multimedia 2014: 213-216
- 2013
  - [c5] Xin Guo, Dong Liu, Brendan Jou, Mojun Zhu, Anni Cai, Shih-Fu Chang: Robust Object Co-detection. CVPR 2013: 3206-3213
  - [c4] Brendan Jou, Hongzhi Li, Joseph G. Ellis, Daniel Morozoff-Abegauz, Shih-Fu Chang: Structured exploration of who, what, when, and where in heterogeneous multimedia news sources. ACM Multimedia 2013: 357-360
  - [c3] Hongzhi Li, Brendan Jou, Joseph G. Ellis, Daniel Morozoff, Shih-Fu Chang: News rover: exploring topical structures and serendipity in heterogeneous multimedia news. ACM Multimedia 2013: 449-450
- 2011
  - [c2] Si Ying Diana Hu, Brendan Jou, Aaron Jaech, Marios Savvides: Fusion of region-based representations for gender identification. IJCB 2011: 1-7
- 2010
  - [c1] Jameson Merkow, Brendan Jou, Marios Savvides: An exploration of gender identification using only the periocular region. BTAS 2010: 1-5