


Masashi Sugiyama
2023
- [j170] Zhenguo Wu, Jiaqi Lv, Masashi Sugiyama: Learning With Proper Partial Labels. Neural Comput. 35(1): 58-81 (2023)

2022
- [j169] Akira Tanimoto, So Yamada, Takashi Takenouchi, Masashi Sugiyama, Hisashi Kashima: Improving imbalanced classification using near-miss instances. Expert Syst. Appl. 201: 117130 (2022)
- [j168] Hiroki Ishiguro, Takashi Ishida, Masashi Sugiyama: Learning from Noisy Complementary Labels with Robust Loss Functions. IEICE Trans. Inf. Syst. 105-D(2): 364-376 (2022)
- [j167] Yuangang Pan, Ivor W. Tsang, Weijie Chen, Gang Niu, Masashi Sugiyama: Fast and Robust Rank Aggregation against Model Misspecification. J. Mach. Learn. Res. 23: 23:1-23:35 (2022)
- [j166] Takayuki Osa, Voot Tangkaratt, Masashi Sugiyama: Discovering diverse solutions in deep reinforcement learning by maximizing state-action-based mutual information. Neural Networks 152: 90-104 (2022)
- [j165] Yutaka Matsuo, Yann LeCun, Maneesh Sahani, Doina Precup, David Silver, Masashi Sugiyama, Eiji Uchibe, Jun Morimoto: Deep learning, reinforcement learning, and world models. Neural Networks 152: 267-275 (2022)
- [j164] Kenji Doya, Karl J. Friston, Masashi Sugiyama, Joshua B. Tenenbaum: Neural Networks special issue on Artificial Intelligence and Brain Science. Neural Networks 155: 328-329 (2022)
- [j163] Chen Gong, Jian Yang, Jane You, Masashi Sugiyama: Centroid Estimation With Guaranteed Efficiency: A General Framework for Weakly Supervised Learning. IEEE Trans. Pattern Anal. Mach. Intell. 44(6): 2841-2855 (2022)
- [j162] Ziqing Lu, Chang Xu, Bo Du, Takashi Ishida, Lefei Zhang, Masashi Sugiyama: LocalDrop: A Hybrid Regularization for Deep Neural Networks. IEEE Trans. Pattern Anal. Mach. Intell. 44(7): 3590-3601 (2022)
- [c248] Han Bao, Takuya Shimada, Liyuan Xu, Issei Sato, Masashi Sugiyama: Pairwise Supervision Can Provably Elicit a Decision Boundary. AISTATS 2022: 2618-2640
- [c247] Futoshi Futami, Tomoharu Iwata, Naonori Ueda, Issei Sato, Masashi Sugiyama: Predictive variational Bayesian inference as risk-seeking optimization. AISTATS 2022: 5051-5083
- [c246] Masashi Sugiyama, Tongliang Liu, Bo Han, Yang Liu, Gang Niu: Learning and Mining with Noisy Labels. CIKM 2022: 5152-5155
- [c245] De Cheng, Tongliang Liu, Yixiong Ning, Nannan Wang, Bo Han, Gang Niu, Xinbo Gao, Masashi Sugiyama: Instance-Dependent Label-Noise Learning with Manifold-Regularized Transition Matrix Estimation. CVPR 2022: 16609-16618
- [c244] Haoang Chi, Feng Liu, Wenjing Yang, Long Lan, Tongliang Liu, Bo Han, Gang Niu, Mingyuan Zhou, Masashi Sugiyama: Meta Discovery: Learning to Discover Novel Classes given Very Limited Data. ICLR 2022
- [c243] Nan Lu, Zhao Wang, Xiaoxiao Li, Gang Niu, Qi Dou, Masashi Sugiyama: Federated Learning from Only Unlabeled Data with Class-conditional-sharing Clients. ICLR 2022
- [c242] Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Jun Yu, Gang Niu, Masashi Sugiyama: Sample Selection with Uncertainty of Losses for Learning with Noisy Labels. ICLR 2022
- [c241] Yu Yao, Tongliang Liu, Bo Han, Mingming Gong, Gang Niu, Masashi Sugiyama, Dacheng Tao: Rethinking Class-Prior Estimation for Positive-Unlabeled Learning. ICLR 2022
- [c240] Fei Zhang, Lei Feng, Bo Han, Tongliang Liu, Gang Niu, Tao Qin, Masashi Sugiyama: Exploiting Class Activation Value for Partial-Label Learning. ICLR 2022
- [c239] Jiaheng Wei, Hangyu Liu, Tongliang Liu, Gang Niu, Masashi Sugiyama, Yang Liu: To Smooth or Not? When Label Smoothing Meets Noisy Labels. ICML 2022: 23589-23614
- [c238] Zeke Xie, Xinrui Wang, Huishuai Zhang, Issei Sato, Masashi Sugiyama: Adaptive Inertia: Disentangling the Effects of Adaptive Learning Rate and Momentum. ICML 2022: 24430-24459
- [c237] Xilie Xu, Jingfeng Zhang, Feng Liu, Masashi Sugiyama, Mohan S. Kankanhalli: Adversarial Attack and Defense for Non-Parametric Two-Sample Tests. ICML 2022: 24743-24769
- [c236] Hanshu Yan, Jingfeng Zhang, Jiashi Feng, Masashi Sugiyama, Vincent Y. F. Tan: Towards Adversarially Robust Deep Image Denoising. IJCAI 2022: 1516-1522
- [i186] Hanshu Yan, Jingfeng Zhang, Jiashi Feng, Masashi Sugiyama, Vincent Y. F. Tan: Towards Adversarially Robust Deep Image Denoising. CoRR abs/2201.04397 (2022)
- [i185] Takashi Ishida, Ikko Yamane, Nontawat Charoenphakdee, Gang Niu, Masashi Sugiyama: Is the Performance of My Deep Network Too Good to Be True? A Direct Approach to Estimating the Bayes Error in Binary Classification. CoRR abs/2202.00395 (2022)
- [i184] Xilie Xu, Jingfeng Zhang, Feng Liu, Masashi Sugiyama, Mohan S. Kankanhalli: Adversarial Attacks and Defense for Non-Parametric Two-Sample Tests. CoRR abs/2202.03077 (2022)
- [i183] Yinghua Gao, Dongxian Wu, Jingfeng Zhang, Guanhao Gan, Shu-Tao Xia, Gang Niu, Masashi Sugiyama: On the Effectiveness of Adversarial Training against Backdoor Attacks. CoRR abs/2202.10627 (2022)
- [i182] Nan Lu, Zhao Wang, Xiaoxiao Li, Gang Niu, Qi Dou, Masashi Sugiyama: Federated Learning from Only Unlabeled Data with Class-Conditional-Sharing Clients. CoRR abs/2204.03304 (2022)
- [i181] Isao Ishikawa, Takeshi Teshima, Koichi Tojo, Kenta Oono, Masahiro Ikeda, Masashi Sugiyama: Universal approximation property of invertible neural networks. CoRR abs/2204.07415 (2022)
- [i180] Futoshi Futami, Tomoharu Iwata, Naonori Ueda, Issei Sato, Masashi Sugiyama: Excess risk analysis for epistemic uncertainty with application to variational inference. CoRR abs/2206.01606 (2022)
- [i179] De Cheng, Tongliang Liu, Yixiong Ning, Nannan Wang, Bo Han, Gang Niu, Xinbo Gao, Masashi Sugiyama: Instance-Dependent Label-Noise Learning with Manifold-Regularized Transition Matrix Estimation. CoRR abs/2206.02791 (2022)
- [i178] Charles Riou, Junya Honda, Masashi Sugiyama: The Survival Bandit Problem. CoRR abs/2206.03019 (2022)
- [i177] Yuting Tang, Nan Lu, Tianyi Zhang, Masashi Sugiyama: Learning from Multiple Unlabeled Datasets with Partial Risk Regularization. CoRR abs/2207.01555 (2022)
- [i176] Yong Bai, Yu-Jie Zhang, Peng Zhao, Masashi Sugiyama, Zhi-Hua Zhou: Adapting to Online Label Shift with Provable Guarantees. CoRR abs/2207.02121 (2022)
- [i175] Yivan Zhang, Jindong Wang, Xing Xie, Masashi Sugiyama: Equivariant Disentangled Transformation for Domain Generalization under Combination Shift. CoRR abs/2208.02011 (2022)
- [i174] Nobutaka Ito, Masashi Sugiyama: Audio Signal Enhancement with Learning from Positive and Unlabelled Data. CoRR abs/2210.15143 (2022)
- [i173] Jianan Zhou, Jianing Zhu, Jingfeng Zhang, Tongliang Liu, Gang Niu, Bo Han, Masashi Sugiyama: Adversarial Training with Complementary Labels: On the Benefit of Gradually Informative Attacks. CoRR abs/2211.00269 (2022)
- [i172] Tingting Zhao, Ying Wang, Wei Sun, Yarui Chen, Gang Niu, Masashi Sugiyama: Representation Learning for Continuous Action Spaces is Beneficial for Efficient Policy Learning. CoRR abs/2211.13257 (2022)
- [i171] Shintaro Nakamura, Han Bao, Masashi Sugiyama: Robust computation of optimal transport by β-potential regularization. CoRR abs/2212.13251 (2022)

2021
- [j161] Motoya Ohnishi, Gennaro Notomista, Masashi Sugiyama, Magnus Egerstedt: Constraint learning for control tasks with limited duration barrier functions. Autom. 127: 109504 (2021)
- [j160] Tomoya Sakai, Gang Niu, Masashi Sugiyama: Information-Theoretic Representation Learning for Positive-Unlabeled Classification. Neural Comput. 33(1): 244-268 (2021)
- [j159] Takuya Shimada, Han Bao, Issei Sato, Masashi Sugiyama: Classification From Pairwise Similarities/Dissimilarities and Unlabeled Data via Empirical Risk Minimization. Neural Comput. 33(5): 1234-1268 (2021)
- [j158] Wenkai Xu, Gang Niu, Aapo Hyvärinen, Masashi Sugiyama: Direction Matters: On Influence-Preserving Graph Summarization and Max-Cut Principle for Directed Graphs. Neural Comput. 33(8): 2128-2162 (2021)
- [j157] Zeke Xie, Fengxiang He, Shaopeng Fu, Issei Sato, Dacheng Tao, Masashi Sugiyama: Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting. Neural Comput. 33(8): 2163-2192 (2021)
- [j156] Taira Tsuchiya, Nontawat Charoenphakdee, Issei Sato, Masashi Sugiyama: Semisupervised Ordinal Regression Based on Empirical Risk Minimization. Neural Comput. 33(12): 3361-3412 (2021)
- [j155] Tianyi Zhang, Ikko Yamane, Nan Lu, Masashi Sugiyama: A One-Step Approach to Covariate Shift Adaptation. SN Comput. Sci. 2(4): 319 (2021)
- [c235] Voot Tangkaratt, Nontawat Charoenphakdee, Masashi Sugiyama: Robust Imitation Learning from Noisy Demonstrations. AISTATS 2021: 298-306
- [c234] Han Bao, Masashi Sugiyama: Fenchel-Young Losses with Skewed Entropies for Class-posterior Probability Estimation. AISTATS 2021: 1648-1656
- [c233] Masahiro Fujisawa, Takeshi Teshima, Issei Sato, Masashi Sugiyama: γ-ABC: Outlier-Robust Approximate Bayesian Computation Based on a Robust Divergence Estimator. AISTATS 2021: 1783-1791
- [c232] Paavo Parmas, Masashi Sugiyama: A unified view of likelihood ratio and reparameterization gradients. AISTATS 2021: 4078-4086
- [c231] Masashi Sugiyama: Mixture Proportion Estimation in Weakly Supervised Learning. CIKM Workshops 2021
- [c230] Nontawat Charoenphakdee, Jayakorn Vongkulbhisal, Nuttapong Chairatanakul, Masashi Sugiyama: On Focal Loss for Class-Posterior Probability Estimation: A Theoretical Perspective. CVPR 2021: 5202-5211
- [c229] Alon Jacovi, Gang Niu, Yoav Goldberg, Masashi Sugiyama: Scalable Evaluation and Improvement of Document Set Expansion via Neural Positive-Unlabeled Learning. EACL 2021: 581-592
- [c228] Zeke Xie, Issei Sato, Masashi Sugiyama: A Diffusion Theory For Deep Learning Dynamics: Stochastic Gradient Descent Exponentially Favors Flat Minima. ICLR 2021
- [c227] Jingfeng Zhang, Jianing Zhu, Gang Niu, Bo Han, Masashi Sugiyama, Mohan S. Kankanhalli: Geometry-aware Instance-reweighted Adversarial Training. ICLR 2021
- [c226] Antonin Berthon, Bo Han, Gang Niu, Tongliang Liu, Masashi Sugiyama: Confidence Scores Make Instance-dependent Label-noise Learning Possible. ICML 2021: 825-836
- [c225] Yuzhou Cao, Lei Feng, Yitian Xu, Bo An, Gang Niu, Masashi Sugiyama: Learning from Similarity-Confidence Data. ICML 2021: 1272-1282
- [c224] Nontawat Charoenphakdee, Zhenghang Cui, Yivan Zhang, Masashi Sugiyama: Classification with Rejection Based on Cost-sensitive Classification. ICML 2021: 1507-1517
- [c223] Shuo Chen, Gang Niu, Chen Gong, Jun Li, Jian Yang, Masashi Sugiyama: Large-Margin Contrastive Learning with Distance Polarization Regularizer. ICML 2021: 1673-1683
- [c222] Xuefeng Du, Jingfeng Zhang, Bo Han, Tongliang Liu, Yu Rong, Gang Niu, Junzhou Huang, Masashi Sugiyama: Learning Diverse-Structured Networks for Adversarial Robustness. ICML 2021: 2880-2891
- [c221] Lei Feng, Senlin Shu, Nan Lu, Bo Han, Miao Xu, Gang Niu, Bo An, Masashi Sugiyama: Pointwise Binary Classification with Pairwise Confidence Comparisons. ICML 2021: 3252-3262
- [c220] Ruize Gao, Feng Liu, Jingfeng Zhang, Bo Han, Tongliang Liu, Gang Niu, Masashi Sugiyama: Maximum Mean Discrepancy Test is Aware of Adversarial Attacks. ICML 2021: 3564-3575
- [c219] Xuefeng Li, Tongliang Liu, Bo Han, Gang Niu, Masashi Sugiyama: Provably End-to-end Label-noise Learning without Anchor Points. ICML 2021: 6403-6413
- [c218] Nan Lu, Shida Lei, Gang Niu, Issei Sato, Masashi Sugiyama: Binary Classification from Multiple Unlabeled Datasets via Surrogate Set Classification. ICML 2021: 7134-7144
- [c217] Zeke Xie, Li Yuan, Zhanxing Zhu, Masashi Sugiyama: Positive-Negative Momentum: Manipulating Stochastic Gradient Noise to Improve Generalization. ICML 2021: 11448-11458
- [c216] Ikko Yamane, Junya Honda, Florian Yger, Masashi Sugiyama: Mediated Uncoupled Learning: Learning Functions without Direct Input-output Correspondences. ICML 2021: 11637-11647
- [c215] Hanshu Yan, Jingfeng Zhang, Gang Niu, Jiashi Feng, Vincent Y. F. Tan, Masashi Sugiyama: CIFS: Improving Adversarial Robustness of CNNs via Channel-wise Importance-based Feature Selection. ICML 2021: 11693-11703
- [c214] Shuhei M. Yoshida, Takashi Takenouchi, Masashi Sugiyama: Lower-Bounded Proper Losses for Weakly Supervised Classification. ICML 2021: 12110-12120
- [c213] Yivan Zhang, Gang Niu, Masashi Sugiyama: Learning Noise Transition Matrix from Only Noisy Labels via Total Variation Regularization. ICML 2021: 12501-12512
- [c212] Futoshi Futami, Tomoharu Iwata, Naonori Ueda, Issei Sato, Masashi Sugiyama: Loss function based second-order Jensen inequality and its application to particle variational inference. NeurIPS 2021: 6803-6815
- [c211] Qizhou Wang, Feng Liu, Bo Han, Tongliang Liu, Chen Gong, Gang Niu, Mingyuan Zhou, Masashi Sugiyama: Probabilistic Margins for Instance Reweighting in Adversarial Training. NeurIPS 2021: 23258-23269
- [c210] Soham Dan, Han Bao, Masashi Sugiyama: Learning from Noisy Similar and Dissimilar Data. ECML/PKDD (2) 2021: 233-249
- [c209] Takeshi Teshima, Masashi Sugiyama: Incorporating causal graphical prior knowledge into predictive modeling via simple data augmentation. UAI 2021: 86-96
- [i170] Nontawat Charoenphakdee, Jongyeong Lee, Masashi Sugiyama: A Symmetric Loss Perspective of Reliable Machine Learning. CoRR abs/2101.01366 (2021)
- [i169] Masato Ishii, Masashi Sugiyama: Source-free Domain Adaptation via Distributional Alignment by Matching Batch Normalization Statistics. CoRR abs/2101.10842 (2021)
- [i168] Shida Lei, Nan Lu, Gang Niu, Issei Sato, Masashi Sugiyama: Binary Classification from Multiple Unlabeled Datasets via Surrogate Set Classification. CoRR abs/2102.00678 (2021)
- [i167] Xuefeng Du, Jingfeng Zhang, Bo Han, Tongliang Liu, Yu Rong, Gang Niu, Junzhou Huang, Masashi Sugiyama: Learning Diverse-Structured Networks for Adversarial Robustness. CoRR abs/2102.01886 (2021)
- [i166] Xuefeng Li, Tongliang Liu, Bo Han, Gang Niu, Masashi Sugiyama: Provably End-to-end Label-Noise Learning without Anchor Points. CoRR abs/2102.02400 (2021)
- [i165] Yivan Zhang, Gang Niu, Masashi Sugiyama: Learning Noise Transition Matrix from Only Noisy Labels via Total Variation Regularization. CoRR abs/2102.02414 (2021)
- [i164] Jianing Zhu, Jingfeng Zhang, Bo Han, Tongliang Liu, Gang Niu, Hongxia Yang, Mohan S. Kankanhalli, Masashi Sugiyama: Understanding the Interaction of Adversarial Training with Noisy Labels. CoRR abs/2102.03482 (2021)
- [i163] Hanshu Yan, Jingfeng Zhang, Gang Niu, Jiashi Feng, Vincent Y. F. Tan, Masashi Sugiyama: CIFS: Improving Adversarial Robustness of CNNs via Channel-wise Importance-based Feature Selection. CoRR abs/2102.05311 (2021)
- [i162] Yuzhou Cao, Lei Feng, Yitian Xu, Bo An, Gang Niu, Masashi Sugiyama: Learning from Similarity-Confidence Data. CoRR abs/2102.06879 (2021)
- [i161] Chen Chen, Jingfeng Zhang, Xilie Xu, Tianlei Hu, Gang Niu, Gang Chen, Masashi Sugiyama: Guided Interpolation for Adversarial Training. CoRR abs/2102.07327 (2021)
- [i160] Takeshi Teshima, Masashi Sugiyama: Incorporating Causal Graphical Prior Knowledge into Predictive Modeling via Simple Data Augmentation. CoRR abs/2103.00136 (2021)
- [i159] Ziqing Lu, Chang Xu, Bo Du, Takashi Ishida, Lefei Zhang, Masashi Sugiyama: LocalDrop: A Hybrid Regularization for Deep Neural Networks. CoRR abs/2103.00719 (2021)
- [i158] Shuhei M. Yoshida, Takashi Takenouchi, Masashi Sugiyama: Lower-bounded proper losses for weakly supervised classification. CoRR abs/2103.02893 (2021)
- [i157] Takayuki Osa, Voot Tangkaratt, Masashi Sugiyama: Discovering Diverse Solutions in Deep Reinforcement Learning. CoRR abs/2103.07084 (2021)
- [i156] Yivan Zhang, Masashi Sugiyama: Approximating Instance-Dependent Noise via Instance-Confidence Embedding. CoRR abs/2103.13569 (2021)
- [i155] Zeke Xie, Li Yuan, Zhanxing Zhu, Masashi Sugiyama: Positive-Negative Momentum: Manipulating Stochastic Gradient Noise to Improve Generalization. CoRR abs/2103.17182 (2021)
- [i154] Jingfeng Zhang, Xilie Xu, Bo Han, Tongliang Liu, Gang Niu, Lizhen Cui, Masashi Sugiyama: NoiLIn: Do Noisy Labels Always Hurt Adversarial Training? CoRR abs/2105.14676 (2021)
- [i153] Paavo Parmas, Masashi Sugiyama: A unified view of likelihood ratio and reparameterization gradients. CoRR abs/2105.14900 (2021)
- [i152] Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Jun Yu, Gang Niu, Masashi Sugiyama: Sample Selection with Uncertainty of Losses for Learning with Noisy Labels. CoRR abs/2106.00445 (2021)
- [i151] Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Jun Yu, Gang Niu, Masashi Sugiyama: Instance Correction for Learning with Open-set Noisy Labels. CoRR abs/2106.00455 (2021)
- [i150] Futoshi Futami, Tomoharu Iwata, Naonori Ueda, Issei Sato, Masashi Sugiyama: Loss function based second-order Jensen inequality and its application to particle variational inference. CoRR abs/2106.05010 (2021)
- [i149] Jiaqi Lv, Lei Feng, Miao Xu, Bo An, Gang Niu, Xin Geng, Masashi Sugiyama: On the Robustness of Average Losses for Partial-Label Learning. CoRR abs/2106.06152 (2021)
- [i148] Qizhou Wang, Feng Liu, Bo Han, Tongliang Liu, Chen Gong, Gang Niu, Mingyuan Zhou, Masashi Sugiyama: Probabilistic Margins for Instance Reweighting in Adversarial Training. CoRR abs/2106.07904 (2021)
- [i147] Yuzhou Cao, Lei Feng, Senlin Shu, Yitian Xu, Bo An, Gang Niu, Masashi Sugiyama: Multi-Class Classification from Single-Class Data with Confidences. CoRR abs/2106.08864 (2021)
- [i146] Xin-Qiang Cai, Yao-Xiang Ding, Zi-Xuan Chen, Yuan Jiang, Masashi Sugiyama, Zhi-Hua Zhou: Seeing Differently, Acting Similarly: Imitation Learning with Heterogeneous Observations. CoRR abs/2106.09256 (2021)
- [i145] Shota Nakajima, Masashi Sugiyama: Positive-Unlabeled Classification under Class-Prior Shift: A Prior-invariant Approach Based on Density Ratio Estimation. CoRR abs/2107.05045 (2021)
- [i144] Ikko Yamane, Junya Honda, Florian Yger, Masashi Sugiyama: Mediated Uncoupled Learning: Learning Functions without Direct Input-output Correspondences. CoRR abs/2107.08135 (2021)
- [i143] Cheng-Yu Hsieh, Wei-I Lin, Miao Xu, Gang Niu, Hsuan-Tien Lin, Masashi Sugiyama: Active Refinement for Multi-Label Learning: A Pseudo-Label Approach. CoRR abs/2109.14676 (2021)
- [i142] Nan Lu, Tianyi Zhang, Tongtong Fang, Takeshi Teshima, Masashi Sugiyama: Rethinking Importance Weighting for Transfer Learning. CoRR abs/2112.10157 (2021)
- [i141] Zhenguo Wu, Masashi Sugiyama: Learning with Proper Partial Labels. CoRR abs/2112.12303 (2021)

2020
- [j154]Janya Sainui, Masashi Sugiyama
:
Unsupervised key frame selection using information theory and colour histogram difference. Int. J. Bus. Intell. Data Min. 16(3): 324-344 (2020) - [j153]Yongchan Kwon, Wonyoung Kim, Masashi Sugiyama
, Myunghee Cho Paik:
Principled analytic classifier for positive-unlabeled learning via weighted integral probability metric. Mach. Learn. 109(3): 513-532 (2020) - [j152]Si-An Chen
, Voot Tangkaratt, Hsuan-Tien Lin, Masashi Sugiyama
:
Active deep Q-learning with demonstration. Mach. Learn. 109(9-10): 1699-1725 (2020) - [j151]Naoya Otani, Yosuke Otsubo, Tetsuya Koike, Masashi Sugiyama:
Binary classification with ambiguous training data. Mach. Learn. 109(12): 2369-2388 (2020) - [j150]Zhenghang Cui, Nontawat Charoenphakdee, Issei Sato, Masashi Sugiyama
:
Classification from Triplet Comparison Data. Neural Comput. 32(3): 659-681 (2020) - [j149]Yuangang Pan
, Ivor W. Tsang, Avinash Kumar Singh
, Chin-Teng Lin
, Masashi Sugiyama:
Stochastic Multichannel Ranking with Brain Dynamics Preferences. Neural Comput. 32(8): 1499-1530 (2020) - [j148]Yuko Kuroki, Liyuan Xu, Atsushi Miyauchi, Junya Honda, Masashi Sugiyama:
Polynomial-Time Algorithms for Multiple-Arm Identification with Full-Bandit Feedback. Neural Comput. 32(9): 1733-1773 (2020) - [c208]Tianyi Zhang, Ikko Yamane, Nan Lu, Masashi Sugiyama:
A One-step Approach to Covariate Shift Adaptation. ACML 2020: 65-80 - [c207]Nan Lu, Tianyi Zhang, Gang Niu, Masashi Sugiyama:
Mitigating Overfitting in Supervised Classification from Two Unlabeled Datasets: A Consistent Risk Correction Approach. AISTATS 2020: 1115-1125 - [c206]Han Bao, Masashi Sugiyama:
Calibrated Surrogate Maximization of Linear-fractional Utility in Binary Classification. AISTATS 2020: 2337-2347 - [c205]Han Bao, Clayton Scott, Masashi Sugiyama:
Calibrated Surrogate Losses for Adversarially Robust Classification. COLT 2020: 408-451 - [c204]Yu-Ting Chou, Gang Niu, Hsuan-Tien Lin, Masashi Sugiyama:
Unbiased Risk Estimators Can Mislead: A Case Study of Learning with Complementary Labels. ICML 2020: 1929-1938 - [c203]Lei Feng, Takuo Kaneko, Bo Han, Gang Niu, Bo An, Masashi Sugiyama:
Learning with Multiple Complementary Labels. ICML 2020: 3072-3081 - [c202]Futoshi Futami, Issei Sato, Masashi Sugiyama:
Accelerating the diffusion-based ensemble sampling by non-reversible dynamics. ICML 2020: 3337-3347 - [c201]Bo Han, Gang Niu, Xingrui Yu, Quanming Yao, Miao Xu, Ivor W. Tsang, Masashi Sugiyama:
SIGUA: Forgetting May Make Learning with Noisy Labels More Robust. ICML 2020: 4006-4016 - [c200]Takashi Ishida, Ikko Yamane, Tomoya Sakai, Gang Niu, Masashi Sugiyama:
Do We Need Zero Training Loss After Achieving Zero Training Error? ICML 2020: 4604-4614 - [c199]