7. ACML 2015: Hong Kong
- Proceedings of The 7th Asian Conference on Machine Learning, ACML 2015, Hong Kong, November 20-22, 2015. JMLR Workshop and Conference Proceedings 45, JMLR.org 2015
Preface
- Geoffrey Holmes, Tie-Yan Liu: Preface. 7
Accepted Papers
- Inbal Horev, Florian Yger, Masashi Sugiyama: Geometry-Aware Principal Component Analysis for Symmetric Positive Definite Matrices. 1-16
- Ata Kabán: Non-asymptotic Analysis of Compressive Fisher Discriminants in terms of the Effective Dimension. 17-32
- Hiroaki Sasaki, Voot Tangkaratt, Masashi Sugiyama: Sufficient Dimension Reduction via Direct Estimation of the Gradients of Logarithmic Conditional Densities. 33-48
- Yohei Kondo, Shin-ichi Maeda, Kohei Hayashi: Bayesian Masking: Sparse Bayesian Estimation with Weaker Shrinkage Bias. 49-64
- Ata Kabán: A New Look at Nearest Neighbours: Identifying Benign Input Geometries via Random Projections. 65-80
- Kostiantyn Antoniuk, Vojtech Franc, Václav Hlavác: Consistency of structured output learning with missing labels. 81-95
- Fei Yu, Min-Ling Zhang: Maximum Margin Partial Label Learning. 96-111
- Xiaowei Zhang, Li Cheng, Tingshao Zhu: Robust Multivariate Regression with Grossly Corrupted Observations and Its Application to Personality Prediction. 112-126
- Yanan Bao, Xin Liu, Amit Pande: Data-Guided Approach for Learning and Improving User Experience in Computer Networks. 127-142
- Liqiang Niu, Xin-Yu Dai, Shujian Huang, Jiajun Chen: A Unified Framework for Jointly Learning Distributed Representations of Word and Attributes. 143-156
- Shaowu Liu, Gang Li, Truyen Tran, Yuan Jiang: Preference Relation-based Markov Random Fields for Recommender Systems. 157-172
- Bin Li, Julia Yu, Jie Zhang, Bin Ke: Detecting Accounting Frauds in Publicly Traded U.S. Firms: A Machine Learning Approach. 173-188
- Huanhuan Zhang, Jie Zhang, Carol J. Fung, Chang Xu: Improving Sybil Detection via Graph Pruning and Regularization Techniques. 189-204
- Yiu-ming Cheung, Jian Lou: Proximal Average Approximated Incremental Gradient Method for Composite Penalty Regularized Empirical Risk Minimization. 205-220
- Marthinus Christoffel du Plessis, Gang Niu, Masashi Sugiyama: Class-prior Estimation for Learning from Positive and Unlabeled Data. 221-236
- Viet Huynh, Dinh Q. Phung, Svetha Venkatesh: Streaming Variational Inference for Dirichlet Process Mixtures. 237-252
- Young-Jun Ko, Matthias W. Seeger: Expectation Propagation for Rectified Linear Poisson Regression. 253-268
- Yanpeng Zhao, Yetian Chen, Kewei Tu, Jin Tian: Curriculum Learning of Bayesian Network Structures. 269-284
- Tuan Duong Nguyen, Marthinus Christoffel du Plessis, Masashi Sugiyama: Continuous Target Shift Adaptation in Supervised Learning. 285-300
- Wojciech Kotlowski, Krzysztof Dembczynski: Surrogate regret bounds for generalized classification performance metrics. 301-316
- Yingce Xia, Wenkui Ding, Xu-Dong Zhang, Nenghai Yu, Tao Qin: Budgeted Bandit Problems with Continuous Random Costs. 317-332
- Tingting Zhao, Gang Niu, Ning Xie, Jucheng Yang, Masashi Sugiyama: Regularized Policy Gradients: Direct Variance Reduction in Policy Gradient Estimation. 333-348
- Wang-Zhou Dai, Zhi-Hua Zhou: Statistical Unfolded Logic Learning. 349-361
- Le Shu, Longin Jan Latecki: Integration of Single-view Graphs with Diffusion of Tensor Product Graphs for Multi-view Spectral Clustering. 362-377
- Ozan Irsoy, Ethem Alpaydin: Autoencoder Trees. 378-390
- Adepu Ravi Sankar, Vineeth N. Balasubramanian: Similarity-based Contrastive Divergence Methods for Energy-based Deep Learning Models. 391-406
- Yue Zhu, Wei Gao, Zhi-Hua Zhou: One-Pass Multi-View Learning. 407-422
- Shuang Zhou, Gijs Schoenmakers, Evgueni N. Smirnov, Ralf Peeters, Kurt Driessens, Siqi Chen: Largest Source Subset Selection for Instance Transfer. 423-438