Yusuke Sugano
2020 – today
- 2024
- [j19] Tomoya Sato, Yusuke Sugano, Yoichi Sato: Direction-of-Arrival Estimation for Mobile Agents Utilizing the Relationship Between Agent's Trajectory and Binaural Audio. IEEE Access 12: 75508-75519 (2024)
- [j18] Atsushi Takada, Wataru Kawabe, Yusuke Sugano: Example-Based Conditioning for Text-to-Image Generative Models. IEEE Access 12: 162191-162203 (2024)
- [j17] Wataru Kawabe, Yuri Nakao, Akihisa Shitara, Yusuke Sugano: Technical Understanding from Interactive Machine Learning Experience: a Study Through a Public Event for Science Museum Visitors. Interact. Comput. 36(3): 155-171 (2024)
- [j16] Wataru Kawabe, Yusuke Sugano: Image-to-Text Translation for Interactive Image Recognition: A Comparative User Study with Non-expert Users. J. Inf. Process. 32: 358-368 (2024)
- [c49] Wataru Kawabe, Yusuke Sugano: A Multimodal LLM-based Assistant for User-Centric Interactive Machine Learning. SIGGRAPH Asia Posters 2024: 7:1-7:2
- [c48] Yoichiro Hisadome, Tianyi Wu, Jiawei Qin, Yusuke Sugano: Rotation-Constrained Cross-View Feature Fusion for Multi-View Appearance-based Gaze Estimation. WACV 2024: 5973-5982
- [e1] Mohamed Khamis, Yusuke Sugano, Ludwig Sidenmark: Proceedings of the 2024 Symposium on Eye Tracking Research and Applications, ETRA 2024, Glasgow, United Kingdom, June 4-7, 2024. ACM 2024
- [i22] Mingtao Yue, Tomomi Sayuda, Miles Pennington, Yusuke Sugano: Evaluating User Experience and Data Quality in a Gamified Data Collection for Appearance-Based Gaze Estimation. CoRR abs/2401.14095 (2024)
- 2023
- [j15] Hiroaki Minoura, Tsubasa Hirakawa, Yusuke Sugano, Takayoshi Yamashita, Hironobu Fujiyoshi: Utilizing Human Social Norms for Multimodal Trajectory Forecasting via Group-Based Forecasting Module. IEEE Trans. Intell. Veh. 8(1): 836-850 (2023)
- [i21] Wataru Kawabe, Yuri Nakao, Akihisa Shitara, Yusuke Sugano: Technical Understanding from IML Hands-on Experience: A Study through a Public Event for Science Museum Visitors. CoRR abs/2305.05846 (2023)
- [i20] Wataru Kawabe, Yusuke Sugano: Image-to-Text Translation for Interactive Image Recognition: A Comparative User Study with Non-Expert Users. CoRR abs/2305.06641 (2023)
- [i19] Yoichiro Hisadome, Tianyi Wu, Jiawei Qin, Yusuke Sugano: Rotation-Constrained Cross-View Feature Fusion for Multi-View Appearance-based Gaze Estimation. CoRR abs/2305.12704 (2023)
- [i18] Jiawei Qin, Takuru Shimoyama, Xucong Zhang, Yusuke Sugano: Domain-Adaptive Full-Face Gaze Estimation via Novel-View-Synthesis and Feature Disentanglement. CoRR abs/2305.16140 (2023)
- 2022
- [j14] Tomoya Sato, Yusuke Sugano, Yoichi Sato: Self-Supervised Learning for Audio-Visual Relationships of Videos With Stereo Sounds. IEEE Access 10: 94273-94284 (2022)
- [j13] Tianyi Liu, Yusuke Sugano: Interactive Machine Learning on Edge Devices With User-in-the-Loop Sample Recommendation. IEEE Access 10: 107346-107360 (2022)
- [j12] Hiroaki Santo, Masaki Samejima, Yusuke Sugano, Boxin Shi, Yasuyuki Matsushita: Deep Photometric Stereo Networks for Determining Surface Normal and Reflectances. IEEE Trans. Pattern Anal. Mach. Intell. 44(1): 114-128 (2022)
- [c47] Tianyi Wu, Yusuke Sugano: Learning Video-Independent Eye Contact Segmentation from In-the-Wild Videos. ACCV (4) 2022: 52-70
- [c46] Jiawei Qin, Takuru Shimoyama, Yusuke Sugano: Learning-by-Novel-View-Synthesis for Full-Face Appearance-Based 3D Gaze Estimation. CVPR Workshops 2022: 4977-4987
- [c45] Lijin Yang, Yifei Huang, Yusuke Sugano, Yoichi Sato: Interact before Align: Leveraging Cross-Modal Knowledge for Domain Adaptive Action Recognition. CVPR 2022: 14702-14712
- [c44] Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, Miguel Martin, Tushar Nagarajan, Ilija Radosavovic, Santhosh Kumar Ramakrishnan, Fiona Ryan, Jayant Sharma, Michael Wray, Mengmeng Xu, Eric Zhongcong Xu, Chen Zhao, Siddhant Bansal, Dhruv Batra, Vincent Cartillier, Sean Crane, Tien Do, Morrie Doulaty, Akshay Erapalli, Christoph Feichtenhofer, Adriano Fragomeni, Qichen Fu, Abrham Gebreselasie, Cristina González, James Hillis, Xuhua Huang, Yifei Huang, Wenqi Jia, Weslie Khoo, Jáchym Kolár, Satwik Kottur, Anurag Kumar, Federico Landini, Chao Li, Yanghao Li, Zhenqiang Li, Karttikeya Mangalam, Raghava Modhugu, Jonathan Munro, Tullie Murrell, Takumi Nishiyasu, Will Price, Paola Ruiz Puentes, Merey Ramazanova, Leda Sari, Kiran Somasundaram, Audrey Southerland, Yusuke Sugano, Ruijie Tao, Minh Vo, Yuchen Wang, Xindi Wu, Takuma Yagi, Ziwei Zhao, Yunyi Zhu, Pablo Arbeláez, David Crandall, Dima Damen, Giovanni Maria Farinella, Christian Fuegen, Bernard Ghanem, Vamsi Krishna Ithapu, C. V. Jawahar, Hanbyul Joo, Kris Kitani, Haizhou Li, Richard A. Newcombe, Aude Oliva, Hyun Soo Park, James M. Rehg, Yoichi Sato, Jianbo Shi, Mike Zheng Shou, Antonio Torralba, Lorenzo Torresani, Mingfei Yan, Jitendra Malik: Ego4D: Around the World in 3,000 Hours of Egocentric Video. CVPR 2022: 18973-18990
- [i17] Jiawei Qin, Takuru Shimoyama, Yusuke Sugano: Learning-by-Novel-View-Synthesis for Full-Face Appearance-based 3D Gaze Estimation. CoRR abs/2201.07927 (2022)
- [i16] Tianyi Wu, Yusuke Sugano: Learning Video-independent Eye Contact Segmentation from In-the-Wild Videos. CoRR abs/2210.02033 (2022)
- 2021
- [c43] Lijin Yang, Yifei Huang, Yusuke Sugano, Yoichi Sato: Stacked Temporal Attention: Improving First-person Action Recognition by Emphasizing Discriminative Clips. BMVC 2021: 240
- [c42] Bektur Ryskeldiev, Yoichi Ochiai, Koki Kusano, Jie Li, MHD Yamen Saraiji, Kai Kunze, Mark Billinghurst, Suranga Nanayakkara, Yusuke Sugano, Tatsuya Honda: Immersive Inclusivity at CHI: Design and Creation of Inclusive User Interactions Through Immersive Media. CHI Extended Abstracts 2021: 78:1-78:4
- [i15] Haruya Sakashita, Christoph Flothow, Noriko Takemura, Yusuke Sugano: DRIV100: In-The-Wild Multi-Domain Dataset and Evaluation for Real-World Domain Adaptation of Semantic Segmentation. CoRR abs/2102.00150 (2021)
- [i14] Lijin Yang, Yifei Huang, Yusuke Sugano, Yoichi Sato: EPIC-KITCHENS-100 Unsupervised Domain Adaptation Challenge for Action Recognition 2021: Team M3EM Technical Report. CoRR abs/2106.10026 (2021)
- [i13] Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, Miguel Martin, Tushar Nagarajan, Ilija Radosavovic, Santhosh Kumar Ramakrishnan, Fiona Ryan, Jayant Sharma, Michael Wray, Mengmeng Xu, Eric Zhongcong Xu, Chen Zhao, Siddhant Bansal, Dhruv Batra, Vincent Cartillier, Sean Crane, Tien Do, Morrie Doulaty, Akshay Erapalli, Christoph Feichtenhofer, Adriano Fragomeni, Qichen Fu, Christian Fuegen, Abrham Gebreselasie, Cristina González, James Hillis, Xuhua Huang, Yifei Huang, Wenqi Jia, Weslie Khoo, Jáchym Kolár, Satwik Kottur, Anurag Kumar, Federico Landini, Chao Li, Yanghao Li, Zhenqiang Li, Karttikeya Mangalam, Raghava Modhugu, Jonathan Munro, Tullie Murrell, Takumi Nishiyasu, Will Price, Paola Ruiz Puentes, Merey Ramazanova, Leda Sari, Kiran Somasundaram, Audrey Southerland, Yusuke Sugano, Ruijie Tao, Minh Vo, Yuchen Wang, Xindi Wu, Takuma Yagi, Yunyi Zhu, Pablo Arbeláez, David Crandall, Dima Damen, Giovanni Maria Farinella, Bernard Ghanem, Vamsi Krishna Ithapu, C. V. Jawahar, Hanbyul Joo, Kris Kitani, Haizhou Li, Richard A. Newcombe, Aude Oliva, Hyun Soo Park, James M. Rehg, Yoichi Sato, Jianbo Shi, Mike Zheng Shou, Antonio Torralba, Lorenzo Torresani, Mingfei Yan, Jitendra Malik: Ego4D: Around the World in 3,000 Hours of Egocentric Video. CoRR abs/2110.07058 (2021)
- [i12] Lijin Yang, Yifei Huang, Yusuke Sugano, Yoichi Sato: Stacked Temporal Attention: Improving First-person Action Recognition by Emphasizing Discriminative Clips. CoRR abs/2112.01038 (2021)
- 2020
- [j11] Hiroaki Santo, Michael Waechter, Wen-Yan Lin, Yusuke Sugano, Yasuyuki Matsushita: Light Structure from Pin Motion: Geometric Point Light Source Calibration. Int. J. Comput. Vis. 128(7): 1889-1912 (2020)
- [c41] Xucong Zhang, Yusuke Sugano, Andreas Bulling, Otmar Hilliges: Learning-based Region Selection for End-to-End Gaze Estimation. BMVC 2020
- [c40] Yifei Huang, Yusuke Sugano, Yoichi Sato: Improving Action Segmentation via Graph-Based Temporal Reasoning. CVPR 2020: 14021-14031
- [c39] Tatsuya Ishibashi, Yuri Nakao, Yusuke Sugano: Investigating audio data visualization for interactive sound recognition. IUI 2020: 67-77
- [c38] Yuri Nakao, Yusuke Sugano: Use of Machine Learning by Non-Expert DHH People: Technological Understanding and Sound Perception. NordiCHI 2020: 82:1-82:12
2010 – 2019
- 2019
- [j10] Xucong Zhang, Yusuke Sugano, Mario Fritz, Andreas Bulling: MPIIGaze: Real-World Dataset and Deep Appearance-Based Gaze Estimation. IEEE Trans. Pattern Anal. Mach. Intell. 41(1): 162-175 (2019)
- [j9] Julian Steil, Marc Tonsen, Yusuke Sugano, Andreas Bulling: InvisibleEye: Fully Embedded Mobile Eye Tracking Using Appearance-Based Gaze Estimation. GetMobile Mob. Comput. Commun. 23(2): 30-34 (2019)
- [c37] Xucong Zhang, Yusuke Sugano, Andreas Bulling: Evaluation of Appearance-Based Methods and Implications for Gaze-Based Applications. CHI 2019: 416
- [i11] Xucong Zhang, Yusuke Sugano, Andreas Bulling: Evaluation of Appearance-Based Methods and Implications for Gaze-Based Applications. CoRR abs/1901.10906 (2019)
- 2018
- [c36] Yutaro Miyauchi, Yusuke Sugano, Yasuyuki Matsushita: Shape-Conditioned Image Generation by Learning Latent Appearance Representation from Unpaired Data. ACCV (6) 2018: 438-453
- [c35] Xucong Zhang, Michael Xuelin Huang, Yusuke Sugano, Andreas Bulling: Training Person-Specific Gaze Estimators from User Interactions with Multiple Devices. CHI 2018: 624
- [c34] Hiroaki Santo, Michael Waechter, Masaki Samejima, Yusuke Sugano, Yasuyuki Matsushita: Light Structure from Pin Motion: Simple and Accurate Point Light Calibration for Physics-Based Modeling. ECCV (3) 2018: 3-19
- [c33] Xucong Zhang, Yusuke Sugano, Andreas Bulling: Revisiting data normalization for appearance-based gaze estimation. ETRA 2018: 12:1-12:9
- [c32] Keita Higuchi, Soichiro Matsuda, Rie Kamikubo, Takuya Enomoto, Yusuke Sugano, Junichi Yamamoto, Yoichi Sato: Visualizing Gaze Direction to Support Video Coding of Social Attention for Children with Autism Spectrum Disorder. IUI 2018: 571-582
- [c31] Arif Khan, Ingmar Steiner, Yusuke Sugano, Andreas Bulling, Ross G. MacDonald: A Multimodal Corpus of Expert Gaze and Behavior during Phonetic Segmentation Tasks. LREC 2018
- [c30] Julian Steil, Philipp Müller, Yusuke Sugano, Andreas Bulling: Forecasting user attention during everyday mobile interactions using device-integrated and wearable sensors. MobileHCI 2018: 1:1-1:13
- [c29] Tatsuya Ishibashi, Yusuke Sugano, Yasuyuki Matsushita: Gaze-guided Image Classification for Reflecting Perceptual Class Ambiguity. UIST (Adjunct Volume) 2018: 26-28
- [i10] Julian Steil, Philipp Müller, Yusuke Sugano, Andreas Bulling: Forecasting User Attention During Everyday Mobile Interactions Using Device-Integrated and Wearable Sensors. CoRR abs/1801.06011 (2018)
- [i9] Yutaro Miyauchi, Yusuke Sugano, Yasuyuki Matsushita: Shape-conditioned Image Generation by Learning Latent Appearance Representation from Unpaired Data. CoRR abs/1811.11991 (2018)
- 2017
- [j8] Marc Tonsen, Julian Steil, Yusuke Sugano, Andreas Bulling: InvisibleEye: Mobile Eye Tracking Using Multiple Low-Resolution Cameras and Learning-Based Gaze Estimation. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 1(3): 106:1-106:21 (2017)
- [c28] Michaela Klauck, Yusuke Sugano, Andreas Bulling: Noticeable or Distractive?: A Design Space for Gaze-Contingent User Interface Notifications. CHI Extended Abstracts 2017: 1779-1786
- [c27] Xucong Zhang, Yusuke Sugano, Mario Fritz, Andreas Bulling: It's Written All Over Your Face: Full-Face Appearance-Based Gaze Estimation. CVPR Workshops 2017: 2299-2308
- [c26] Ryohei Kuga, Asako Kanezaki, Masaki Samejima, Yusuke Sugano, Yasuyuki Matsushita: Multi-task Learning Using Multi-modal Encoder-Decoder Networks with Shared Skip Connections. ICCV Workshops 2017: 403-411
- [c25] Hiroaki Santo, Masaki Samejima, Yusuke Sugano, Boxin Shi, Yasuyuki Matsushita: Deep Photometric Stereo Network. ICCV Workshops 2017: 501-509
- [c24] Xucong Zhang, Yusuke Sugano, Andreas Bulling: Everyday Eye Contact Detection Using Unsupervised Gaze Target Discovery. UIST 2017: 193-203
- [i8] Xucong Zhang, Yusuke Sugano, Mario Fritz, Andreas Bulling: MPIIGaze: Real-World Dataset and Deep Appearance-Based Gaze Estimation. CoRR abs/1711.09017 (2017)
- [i7] Arif Khan, Ingmar Steiner, Yusuke Sugano, Andreas Bulling, Ross G. MacDonald: A Multimodal Corpus of Expert Gaze and Behavior during Phonetic Segmentation Tasks. CoRR abs/1712.04798 (2017)
- 2016
- [c23] Pingmei Xu, Yusuke Sugano, Andreas Bulling: Spatio-Temporal Modeling and Prediction of Visual Attention in Graphical User Interfaces. CHI 2016: 3299-3310
- [c22] Marc Tonsen, Xucong Zhang, Yusuke Sugano, Andreas Bulling: Labelled pupils in the wild: a dataset for studying pupil detection in unconstrained environments. ETRA 2016: 139-142
- [c21] Mohsen Mansouryar, Julian Steil, Yusuke Sugano, Andreas Bulling: 3D gaze estimation from 2D pupil positions on monocular head-mounted eye trackers. ETRA 2016: 197-200
- [c20] Yusuke Sugano, Xucong Zhang, Andreas Bulling: AggreGaze: Collective Estimation of Audience Attention on Public Displays. UIST 2016: 821-831
- [p1] Yoichi Sato, Yusuke Sugano, Akihiro Sugimoto, Yoshinori Kuno, Hideki Koike: Sensing and Controlling Human Gaze in Daily Living Space for Human-Harmonized Information Environments. Human-Harmonized Information Technology (1) 2016: 199-237
- [i6] Mohsen Mansouryar, Julian Steil, Yusuke Sugano, Andreas Bulling: 3D Gaze Estimation from 2D Pupil Positions on Monocular Head-Mounted Eye Trackers. CoRR abs/1601.02644 (2016)
- [i5] Yusuke Sugano, Andreas Bulling: Seeing with Humans: Gaze-Assisted Neural Image Captioning. CoRR abs/1608.05203 (2016)
- [i4] Xucong Zhang, Yusuke Sugano, Mario Fritz, Andreas Bulling: It's Written All Over Your Face: Full-Face Appearance-Based Gaze Estimation. CoRR abs/1611.08860 (2016)
- 2015
- [j7] Yusuke Sugano, Yasuyuki Matsushita, Yoichi Sato, Hideki Koike: Appearance-Based Gaze Estimation With Online Calibration From Mouse Operations. IEEE Trans. Hum. Mach. Syst. 45(6): 750-760 (2015)
- [j6] Feng Lu, Yusuke Sugano, Takahiro Okabe, Yoichi Sato: Gaze Estimation From Eye Appearance: A Head Pose-Free Method via Eye Image Synthesis. IEEE Trans. Image Process. 24(11): 3680-3693 (2015)
- [c19] Xucong Zhang, Yusuke Sugano, Mario Fritz, Andreas Bulling: Appearance-based gaze estimation in the wild. CVPR 2015: 4511-4520
- [c18] Erroll Wood, Tadas Baltrusaitis, Xucong Zhang, Yusuke Sugano, Peter Robinson, Andreas Bulling: Rendering of Eyes for Eye-Shape Registration and Gaze Estimation. ICCV 2015: 3756-3764
- [c17] Yusuke Sugano, Andreas Bulling: Self-Calibrating Head-Mounted Eye Trackers Using Egocentric Visual Saliency. UIST 2015: 363-372
- [i3] Xucong Zhang, Yusuke Sugano, Mario Fritz, Andreas Bulling: Appearance-Based Gaze Estimation in the Wild. CoRR abs/1504.02863 (2015)
- [i2] Erroll Wood, Tadas Baltrusaitis, Xucong Zhang, Yusuke Sugano, Peter Robinson, Andreas Bulling: Rendering of Eyes for Eye-Shape Registration and Gaze Estimation. CoRR abs/1505.05916 (2015)
- [i1] Marc Tonsen, Xucong Zhang, Yusuke Sugano, Andreas Bulling: Labeled pupils in the wild: A dataset for studying pupil detection in unconstrained environments. CoRR abs/1511.05768 (2015)
- 2014
- [j5] Feng Lu, Takahiro Okabe, Yusuke Sugano, Yoichi Sato: Learning gaze biases with head motion for head pose-free gaze estimation. Image Vis. Comput. 32(3): 169-179 (2014)
- [j4] Feng Lu, Yusuke Sugano, Takahiro Okabe, Yoichi Sato: Adaptive Linear Regression for Appearance-Based Gaze Estimation. IEEE Trans. Pattern Anal. Mach. Intell. 36(10): 2033-2046 (2014)
- [c16] Yusuke Sugano, Yasuyuki Matsushita, Yoichi Sato: Learning-by-Synthesis for Appearance-Based 3D Gaze Estimation. CVPR 2014: 1821-1828
- [c15] Binbin Ye, Yusuke Sugano, Yoichi Sato: Influence of stimulus and viewing task types on a learning-based visual saliency model. ETRA 2014: 271-274
- [c14] Thies Pfeiffer, Sophie Stellmach, Yusuke Sugano: 4th international workshop on pervasive eye tracking and mobile eye-based interaction. UbiComp Adjunct 2014: 1085-1091
- 2013
- [j3] Isarun Chamveha, Yusuke Sugano, Daisuke Sugimura, Teera Siriteerakul, Takahiro Okabe, Yoichi Sato, Akihiro Sugimoto: Head direction estimation from low resolution images with scene adaptation. Comput. Vis. Image Underst. 117(10): 1502-1511 (2013)
- [j2] Yusuke Sugano, Yasuyuki Matsushita, Yoichi Sato: Appearance-Based Gaze Estimation Using Visual Saliency. IEEE Trans. Pattern Anal. Mach. Intell. 35(2): 329-341 (2013)
- [j1] Yusuke Sugano, Yasuyuki Matsushita, Yoichi Sato: Graph-based joint clustering of fixations and visual entities. ACM Trans. Appl. Percept. 10(2): 10:1-10:16 (2013)
- [c13] Isarun Chamveha, Yusuke Sugano, Yoichi Sato, Akihiro Sugimoto: Social Group Discovery from Surveillance Videos: A Data-Driven Approach with Attention-Based Cues. BMVC 2013
- 2012
- [c12] Keisuke Ogaki, Kris Makoto Kitani, Yusuke Sugano, Yoichi Sato: Coupling eye-motion and ego-motion features for first-person activity recognition. CVPR Workshops 2012: 1-7
- [c11] Hideyuki Kubota, Yusuke Sugano, Takahiro Okabe, Yoichi Sato, Akihiro Sugimoto, Kazuo Hiraki: Incorporating visual field characteristics into a saliency map. ETRA 2012: 333-336
- [c10] Feng Lu, Yusuke Sugano, Takahiro Okabe, Yoichi Sato: Head pose-free appearance-based gaze sensing via eye image synthesis. ICPR 2012: 1008-1011
- [c9] Yusuke Sugano, Kazuma Harada, Yoichi Sato: Touch-consistent perspective for direct interaction under motion parallax. ITS 2012: 339-342
- 2011
- [c8] Feng Lu, Takahiro Okabe, Yusuke Sugano, Yoichi Sato: A Head Pose-free Approach for Appearance-based Gaze Estimation. BMVC 2011: 1-11
- [c7] Feng Lu, Yusuke Sugano, Takahiro Okabe, Yoichi Sato: Inferring human gaze from appearance via adaptive linear regression. ICCV 2011: 153-160
- [c6] Isarun Chamveha, Yusuke Sugano, Daisuke Sugimura, Teera Siriteerakul, Takahiro Okabe, Yoichi Sato, Akihiro Sugimoto: Appearance-based head pose estimation with scene-specific adaptation. ICCV Workshops 2011: 1713-1720
- [c5] Kentaro Yamada, Yusuke Sugano, Takahiro Okabe, Yoichi Sato, Akihiro Sugimoto, Kazuo Hiraki: Attention Prediction in Egocentric Video Using Motion and Visual Saliency. PSIVT (1) 2011: 277-288
- 2010
- [c4] Kentaro Yamada, Yusuke Sugano, Takahiro Okabe, Yoichi Sato, Akihiro Sugimoto, Kazuo Hiraki: Can Saliency Map Models Predict Human Egocentric Visual Attention? ACCV Workshops (1) 2010: 420-429
- [c3] Yusuke Sugano, Yasuyuki Matsushita, Yoichi Sato: Calibration-free gaze sensing using saliency maps. CVPR 2010: 2667-2674
2000 – 2009
- 2008
- [c2] Yusuke Sugano, Yasuyuki Matsushita, Yoichi Sato, Hideki Koike: An Incremental Learning Method for Unconstrained Gaze Estimation. ECCV (3) 2008: 656-667
- 2007
- [c1] Yusuke Sugano, Yoichi Sato: Person-Independent Monocular Tracking of Face and Facial Actions with Multilinear Models. AMFG 2007: 58-70
last updated on 2024-12-05 21:38 CET by the dblp team
all metadata released as open data under CC0 1.0 license