Andrew Owens
2020 – today
- 2024
- [c44] Ziyang Chen, Israel D. Gebru, Christian Richardt, Anurag Kumar, William Laney, Andrew Owens, Alexander Richard: Real Acoustic Fields: An Audio-Visual Room Acoustics Dataset and Benchmark. CVPR 2024: 21886-21896
- [c43] Daniel Geng, Inbum Park, Andrew Owens: Visual Anagrams: Generating Multi-View Optical Illusions with Diffusion Models. CVPR 2024: 24154-24163
- [c42] Fengyu Yang, Chao Feng, Ziyang Chen, Hyoungseob Park, Daniel Wang, Yiming Dou, Ziyao Zeng, Xien Chen, Rit Gangopadhyay, Andrew Owens, Alex Wong: Binding Touch to Everything: Learning Unified Multimodal Tactile Representations. CVPR 2024: 26330-26343
- [c41] Yiming Dou, Fengyu Yang, Yi Liu, Antonio Loquercio, Andrew Owens: Tactile-Augmented Radiance Fields. CVPR 2024: 26519-26529
- [c40] Zihao Wei, Zixuan Pan, Andrew Owens: Efficient Vision-Language Pre-Training by Cluster Masking. CVPR 2024: 26805-26815
- [c39] Daniel Geng, Inbum Park, Andrew Owens: Factorized Diffusion: Perceptual Illusions by Noise Decomposition. ECCV (57) 2024: 366-384
- [c38] Daniel Geng, Andrew Owens: Motion Guidance: Diffusion-Based Image Editing with Differentiable Motion Estimators. ICLR 2024
- [i44] Fengyu Yang, Chao Feng, Ziyang Chen, Hyoungseob Park, Daniel Wang, Yiming Dou, Ziyao Zeng, Xien Chen, Rit Gangopadhyay, Andrew Owens, Alex Wong: Binding Touch to Everything: Learning Unified Multimodal Tactile Representations. CoRR abs/2401.18084 (2024)
- [i43] Daniel Geng, Andrew Owens: Motion Guidance: Diffusion-Based Image Editing with Differentiable Motion Estimators. CoRR abs/2401.18085 (2024)
- [i42] Ziyang Chen, Israel D. Gebru, Christian Richardt, Anurag Kumar, William Laney, Andrew Owens, Alexander Richard: Real Acoustic Fields: An Audio-Visual Room Acoustics Dataset and Benchmark. CoRR abs/2403.18821 (2024)
- [i41] Daniel Geng, Inbum Park, Andrew Owens: Factorized Diffusion: Perceptual Illusions by Noise Decomposition. CoRR abs/2404.11615 (2024)
- [i40] Yiming Dou, Fengyu Yang, Yi Liu, Antonio Loquercio, Andrew Owens: Tactile-Augmented Radiance Fields. CoRR abs/2405.04534 (2024)
- [i39] Zihao Wei, Zixuan Pan, Andrew Owens: Efficient Vision-Language Pre-training by Cluster Masking. CoRR abs/2405.08815 (2024)
- [i38] Ziyang Chen, Daniel Geng, Andrew Owens: Images that Sound: Composing Images and Sounds on a Single Canvas. CoRR abs/2405.12221 (2024)
- [i37] Samanta Rodriguez, Yiming Dou, Miquel Oller, Andrew Owens, Nima Fazeli: Touch2Touch: Cross-Modal Tactile Generation for Object Manipulation. CoRR abs/2409.08269 (2024)
- [i36] Tingle Li, Renhao Wang, Po-Yao Huang, Andrew Owens, Gopala Anumanchipalli: Self-Supervised Audio-Visual Soundscape Stylization. CoRR abs/2409.14340 (2024)
- [i35] Sikai Li, Samanta Rodriguez, Yiming Dou, Andrew Owens, Nima Fazeli: Tactile Functasets: Neural Implicit Representations of Tactile Datasets. CoRR abs/2409.14592 (2024)
- [i34] Ayush Shrivastava, Andrew Owens: Self-Supervised Any-Point Tracking by Contrastive Random Walks. CoRR abs/2409.16288 (2024)
- 2023
- [j9] Jiatian Sun, Longxiulin Deng, Triantafyllos Afouras, Andrew Owens, Abe Davis: Eventfulness for Interactive Video Alignment. ACM Trans. Graph. 42(4): 46:1-46:10 (2023)
- [c37] Yuexi Du, Ziyang Chen, Justin Salamon, Bryan Russell, Andrew Owens: Conditional Generation of Audio from Video via Foley Analogies. CVPR 2023: 2426-2436
- [c36] Rui Guo, Jasmine Collins, Oscar de Lima, Andrew Owens: GANmouflage: 3D Object Nondetection with Texture Fields. CVPR 2023: 4702-4712
- [c35] Kim Sung-Bin, Arda Senocak, Hyunwoo Ha, Andrew Owens, Tae-Hyun Oh: Sound to Visual Scene Generation by Audio-to-Visual Latent Alignment. CVPR 2023: 6430-6440
- [c34] Chenhao Zheng, Ayush Shrivastava, Andrew Owens: EXIF as Language: Learning Cross-Modal Associations between Images and Camera Metadata. CVPR 2023: 6945-6956
- [c33] Chao Feng, Ziyang Chen, Andrew Owens: Self-Supervised Video Forensics by Audio-Visual Anomaly Detection. CVPR 2023: 10491-10503
- [c32] Ziyang Chen, Shengyi Qian, Andrew Owens: Sound Localization from Motion: Jointly Learning Sound Direction and Camera Rotation. ICCV 2023: 7863-7874
- [c31] Lukas Höllein, Ang Cao, Andrew Owens, Justin Johnson, Matthias Nießner: Text2Room: Extracting Textured 3D Meshes from 2D Text-to-Image Models. ICCV 2023: 7875-7886
- [c30] Fengyu Yang, Jiacheng Zhang, Andrew Owens: Generating Visual Scenes from Touch. ICCV 2023: 22013-22023
- [c29] Zhaoying Pan, Daniel Geng, Andrew Owens: Self-Supervised Motion Magnification by Backpropagating Through Optical Flow. NeurIPS 2023
- [i33] Chao Feng, Ziyang Chen, Andrew Owens: Self-Supervised Video Forensics by Audio-Visual Anomaly Detection. CoRR abs/2301.01767 (2023)
- [i32] Chenhao Zheng, Ayush Shrivastava, Andrew Owens: EXIF as Language: Learning Cross-Modal Associations Between Images and Camera Metadata. CoRR abs/2301.04647 (2023)
- [i31] Ziyang Chen, Shengyi Qian, Andrew Owens: Sound Localization from Motion: Jointly Learning Sound Direction and Camera Rotation. CoRR abs/2303.11329 (2023)
- [i30] Lukas Höllein, Ang Cao, Andrew Owens, Justin Johnson, Matthias Nießner: Text2Room: Extracting Textured 3D Meshes from 2D Text-to-Image Models. CoRR abs/2303.11989 (2023)
- [i29] Kim Sung-Bin, Arda Senocak, Hyunwoo Ha, Andrew Owens, Tae-Hyun Oh: Sound to Visual Scene Generation by Audio-to-Visual Latent Alignment. CoRR abs/2303.17490 (2023)
- [i28] Yuexi Du, Ziyang Chen, Justin Salamon, Bryan Russell, Andrew Owens: Conditional Generation of Audio from Video via Foley Analogies. CoRR abs/2304.08490 (2023)
- [i27] Fengyu Yang, Jiacheng Zhang, Andrew Owens: Generating Visual Scenes from Touch. CoRR abs/2309.15117 (2023)
- [i26] Zhaoying Pan, Daniel Geng, Andrew Owens: Self-Supervised Motion Magnification by Backpropagating Through Optical Flow. CoRR abs/2311.17056 (2023)
- [i25] Daniel Geng, Inbum Park, Andrew Owens: Visual Anagrams: Generating Multi-View Optical Illusions with Diffusion Models. CoRR abs/2311.17919 (2023)
- 2022
- [c28] Artem Abzaliev, Andrew Owens, Rada Mihalcea: Towards Understanding the Relation between Gestures and Language. COLING 2022: 5507-5520
- [c27] Daniel Geng, Max Hamilton, Andrew Owens: Comparing Correspondences: Video Prediction with Correspondence-wise Losses. CVPR 2022: 3355-3366
- [c26] Zhangxing Bian, Allan Jabri, Alexei A. Efros, Andrew Owens: Learning Pixel Trajectories with Multiscale Contrastive Random Walks. CVPR 2022: 6498-6509
- [c25] Xixi Hu, Ziyang Chen, Andrew Owens: Mix and Localize: Localizing Sound Sources in Mixtures. CVPR 2022: 10473-10482
- [c24] Tingle Li, Yichen Liu, Andrew Owens, Hang Zhao: Learning Visual Styles from Audio-Visual Associations. ECCV (37) 2022: 235-252
- [c23] Ziyang Chen, David F. Fouhey, Andrew Owens: Sound Localization by Self-supervised Time Delay Estimation. ECCV (26) 2022: 489-508
- [c22] Fengyu Yang, Chenyang Ma, Jiacheng Zhang, Jing Zhu, Wenzhen Yuan, Andrew Owens: Touch and Go: Learning from Human-Collected Vision and Touch. NeurIPS 2022
- [c21] Medhini Narasimhan, Shiry Ginosar, Andrew Owens, Alexei A. Efros, Trevor Darrell: Strumming to the Beat: Audio-Conditioned Contrastive Video Textures. WACV 2022: 507-516
- [i24] Rui Guo, Jasmine Collins, Oscar de Lima, Andrew Owens: GANmouflage: 3D Object Nondetection with Texture Fields. CoRR abs/2201.07202 (2022)
- [i23] Zhangxing Bian, Allan Jabri, Alexei A. Efros, Andrew Owens: Learning Pixel Trajectories with Multiscale Contrastive Random Walks. CoRR abs/2201.08379 (2022)
- [i22] Ziyang Chen, David F. Fouhey, Andrew Owens: Sound Localization by Self-Supervised Time Delay Estimation. CoRR abs/2204.12489 (2022)
- [i21] Tingle Li, Yichen Liu, Andrew Owens, Hang Zhao: Learning Visual Styles from Audio-Visual Associations. CoRR abs/2205.05072 (2022)
- [i20] Fengyu Yang, Chenyang Ma, Jiacheng Zhang, Jing Zhu, Wenzhen Yuan, Andrew Owens: Touch and Go: Learning from Human-Collected Vision and Touch. CoRR abs/2211.12498 (2022)
- [i19] Xixi Hu, Ziyang Chen, Andrew Owens: Mix and Localize: Localizing Sound Sources in Mixtures. CoRR abs/2211.15058 (2022)
- 2021
- [j8] Lee Ringham, Andrew Owens, Mikolaj Cieslak, Lawrence D. Harder, Przemyslaw Prusinkiewicz: Modeling flower pigmentation patterns. ACM Trans. Graph. 40(6): 233:1-233:14 (2021)
- [c20] Ziyang Chen, Xixi Hu, Andrew Owens: Structure from Silence: Learning Scene Structure from Ambient Sound. CoRL 2021: 760-772
- [c19] Linyi Jin, Shengyi Qian, Andrew Owens, David F. Fouhey: Planar Surface Reconstruction from Sparse Views. ICCV 2021: 12971-12980
- [i18] Linyi Jin, Shengyi Qian, Andrew Owens, David F. Fouhey: Planar Surface Reconstruction from Sparse Views. CoRR abs/2103.14644 (2021)
- [i17] Medhini Narasimhan, Shiry Ginosar, Andrew Owens, Alexei A. Efros, Trevor Darrell: Strumming to the Beat: Audio-Conditioned Contrastive Video Textures. CoRR abs/2104.02687 (2021)
- [i16] Daniel Geng, Andrew Owens: Comparing Correspondences: Video Prediction with Correspondence-wise Losses. CoRR abs/2104.09498 (2021)
- [i15] Ziyang Chen, Xixi Hu, Andrew Owens: Structure from Silence: Learning Scene Structure from Ambient Sound. CoRR abs/2111.05846 (2021)
- 2020
- [j7] Donglei Yang, Joshua Carlson, Andrew Owens, K. E. Perry, Inne Singgih, Zi-Xia Song, Fangfang Zhang, Xiaohong Zhang: Antimagic orientations of graphs with large maximum degree. Discret. Math. 343(12): 112123 (2020)
- [c18] Sheng-Yu Wang, Oliver Wang, Richard Zhang, Andrew Owens, Alexei A. Efros: CNN-Generated Images Are Surprisingly Easy to Spot... for Now. CVPR 2020: 8692-8701
- [c17] Triantafyllos Afouras, Andrew Owens, Joon Son Chung, Andrew Zisserman: Self-supervised Learning of Audio-Visual Objects from Video. ECCV (18) 2020: 208-224
- [c16] Allan Jabri, Andrew Owens, Alexei A. Efros: Space-Time Correspondence as a Contrastive Random Walk. NeurIPS 2020
- [i14] Allan Jabri, Andrew Owens, Alexei A. Efros: Space-Time Correspondence as a Contrastive Random Walk. CoRR abs/2006.14613 (2020)
- [i13] Triantafyllos Afouras, Andrew Owens, Joon Son Chung, Andrew Zisserman: Self-Supervised Learning of Audio-Visual Objects from Video. CoRR abs/2008.04237 (2020)
2010 – 2019
- 2019
- [j6] Dean Hoffman, Paul Horn, Peter D. Johnson Jr., Andrew Owens: On Rainbow-Cycle-Forbidding Edge Colorings of Finite Graphs. Graphs Comb. 35(6): 1585-1596 (2019)
- [j5] Tianfan Xue, Andrew Owens, Daniel Scharstein, Michael Goesele, Richard Szeliski: Multi-frame stereo matching with edges, planes, and superpixels. Image Vis. Comput. 91 (2019)
- [c15] Shiry Ginosar, Amir Bar, Gefen Kohavi, Caroline Chan, Andrew Owens, Jitendra Malik: Learning Individual Styles of Conversational Gesture. CVPR 2019: 3497-3506
- [c14] Sheng-Yu Wang, Oliver Wang, Richard Zhang, Andrew Owens, Alexei A. Efros: Detecting Photoshopped Faces by Scripting Photoshop. ICCV 2019: 10071-10080
- [i12] Shiry Ginosar, Amir Bar, Gefen Kohavi, Caroline Chan, Andrew Owens, Jitendra Malik: Learning Individual Styles of Conversational Gesture. CoRR abs/1906.04160 (2019)
- [i11] Sheng-Yu Wang, Oliver Wang, Andrew Owens, Richard Zhang, Alexei A. Efros: Detecting Photoshopped Faces by Scripting Photoshop. CoRR abs/1906.05856 (2019)
- [i10] Sheng-Yu Wang, Oliver Wang, Richard Zhang, Andrew Owens, Alexei A. Efros: CNN-generated images are surprisingly easy to spot... for now. CoRR abs/1912.11035 (2019)
- 2018
- [j4] Andrew Owens, Jiajun Wu, Josh H. McDermott, William T. Freeman, Antonio Torralba: Learning Sight from Sound: Ambient Sound Provides Supervision for Visual Learning. Int. J. Comput. Vis. 126(10): 1120-1137 (2018)
- [j3] Roberto Calandra, Andrew Owens, Dinesh Jayaraman, Justin Lin, Wenzhen Yuan, Jitendra Malik, Edward H. Adelson, Sergey Levine: More Than a Feeling: Learning to Grasp and Regrasp Using Vision and Touch. IEEE Robotics Autom. Lett. 3(4): 3300-3307 (2018)
- [c13] Minyoung Huh, Andrew Liu, Andrew Owens, Alexei A. Efros: Fighting Fake News: Image Splice Detection via Learned Self-Consistency. ECCV (11) 2018: 106-124
- [c12] Andrew Owens, Alexei A. Efros: Audio-Visual Scene Analysis with Self-Supervised Multisensory Features. ECCV (6) 2018: 639-658
- [c11] Xiuming Zhang, Tali Dekel, Tianfan Xue, Andrew Owens, Qiurui He, Jiajun Wu, Stefanie Mueller, William T. Freeman: MoSculp: Interactive Visualization of Shape and Time. UIST 2018: 275-285
- [i9] Andrew Owens, Alexei A. Efros: Audio-Visual Scene Analysis with Self-Supervised Multisensory Features. CoRR abs/1804.03641 (2018)
- [i8] Minyoung Huh, Andrew Liu, Andrew Owens, Alexei A. Efros: Fighting Fake News: Image Splice Detection via Learned Self-Consistency. CoRR abs/1805.04096 (2018)
- [i7] Roberto Calandra, Andrew Owens, Dinesh Jayaraman, Justin Lin, Wenzhen Yuan, Jitendra Malik, Edward H. Adelson, Sergey Levine: More Than a Feeling: Learning to Grasp and Regrasp using Vision and Touch. CoRR abs/1805.11085 (2018)
- [i6] Xiuming Zhang, Tali Dekel, Tianfan Xue, Andrew Owens, Qiurui He, Jiajun Wu, Stefanie Mueller, William T. Freeman: MoSculp: Interactive Visualization of Shape and Time. CoRR abs/1809.05491 (2018)
- 2017
- [c10] Roberto Calandra, Andrew Owens, Manu Upadhyaya, Wenzhen Yuan, Justin Lin, Edward H. Adelson, Sergey Levine: The Feeling of Success: Does Touch Sensing Help Predict Grasp Outcomes? CoRL 2017: 314-323
- [c9] Wenzhen Yuan, Chenzhuo Zhu, Andrew Owens, Mandayam A. Srinivasan, Edward H. Adelson: Shape-independent hardness estimation using deep learning and a GelSight tactile sensor. ICRA 2017: 951-958
- [i5] Wenzhen Yuan, Chenzhuo Zhu, Andrew Owens, Mandayam A. Srinivasan, Edward H. Adelson: Shape-independent Hardness Estimation Using Deep Learning and a GelSight Tactile Sensor. CoRR abs/1704.03955 (2017)
- [i4] Roberto Calandra, Andrew Owens, Manu Upadhyaya, Wenzhen Yuan, Justin Lin, Edward H. Adelson, Sergey Levine: The Feeling of Success: Does Touch Sensing Help Predict Grasp Outcomes? CoRR abs/1710.05512 (2017)
- [i3] Andrew Owens, Jiajun Wu, Josh H. McDermott, William T. Freeman, Antonio Torralba: Learning Sight from Sound: Ambient Sound Provides Supervision for Visual Learning. CoRR abs/1712.07271 (2017)
- 2016
- [b1] Andrew Owens: Learning visual models from paired audio-visual examples. Massachusetts Institute of Technology, Cambridge, USA, 2016
- [j2] Andrew Owens, Mikolaj Cieslak, Jeremy Hart, Regine Classen-Bockhoff, Przemyslaw Prusinkiewicz: Modeling dense inflorescences. ACM Trans. Graph. 35(4): 136:1-136:14 (2016)
- [c8] Abdulaziz Khiyami, Andrew Owens, Abdelkrim Doufene, Adnan Alsaati, Olivier L. de Weck: Assessment of Resilience in Desalination Infrastructure Using Semi-Markov Models. CSDM 2016: 125-140
- [c7] Andrew Owens, Phillip Isola, Josh H. McDermott, Antonio Torralba, Edward H. Adelson, William T. Freeman: Visually Indicated Sounds. CVPR 2016: 2405-2413
- [c6] Andrew Owens, Jiajun Wu, Josh H. McDermott, William T. Freeman, Antonio Torralba: Ambient Sound Provides Supervision for Visual Learning. ECCV (1) 2016: 801-816
- [i2] Andrew Owens, Jiajun Wu, Josh H. McDermott, William T. Freeman, Antonio Torralba: Ambient Sound Provides Supervision for Visual Learning. CoRR abs/1608.07017 (2016)
- 2015
- [i1] Andrew Owens, Phillip Isola, Josh H. McDermott, Antonio Torralba, Edward H. Adelson, William T. Freeman: Visually Indicated Sounds. CoRR abs/1512.08512 (2015)
- 2014
- [c5] Andrew Owens, Connelly Barnes, Alex Flint, Hanumant Singh, William T. Freeman: Camouflaging an Object from Many Viewpoints. CVPR 2014: 2782-2789
- 2013
- [j1] David J. Crandall, Andrew Owens, Noah Snavely, Daniel P. Huttenlocher: SfM with MRFs: Discrete-Continuous Optimization for Large-Scale Structure from Motion. IEEE Trans. Pattern Anal. Mach. Intell. 35(12): 2841-2853 (2013)
- [c4] Andrew Owens, Jianxiong Xiao, Antonio Torralba, William T. Freeman: Shape Anchors for Data-Driven Multi-view Reconstruction. ICCV 2013: 33-40
- [c3] Jianxiong Xiao, Andrew Owens, Antonio Torralba: SUN3D: A Database of Big Spaces Reconstructed Using SfM and Object Labels. ICCV 2013: 1625-1632
- 2011
- [c2] David J. Crandall, Andrew Owens, Noah Snavely, Dan Huttenlocher: Discrete-continuous optimization for large-scale structure from motion. CVPR 2011: 3001-3008
2000 – 2009
- 2008
- [c1] Ananda Gunawardena, John Barr, Andrew Owens: A method for analyzing reading comprehension in computer science courses. ITiCSE 2008: 348
last updated on 2024-10-23 21:22 CEST by the dblp team
all metadata released as open data under CC0 1.0 license