Journal of Machine Learning Research, Volume 23, 2022
- Subhabrata Majumdar, George Michailidis: Joint Estimation and Inference for Data Integration Problems based on Multiple Multi-layered Gaussian Graphical Models. 1:1-1:53
- Shaogao Lv, Heng Lian: Debiased Distributed Learning for Sparse Partial Linear Models in High Dimensions. 2:1-2:32
- Keith D. Levin, Asad Lodhia, Elizaveta Levina: Recovering shared structure from multiple networks with unknown edge distributions. 3:1-3:48
- Lorenzo Rimella, Nick Whiteley: Exploiting locality in high-dimensional Factorial hidden Markov models. 4:1-4:34
- Guillaume Ausset, Stéphan Clémençon, François Portier: Empirical Risk Minimization under Random Censorship. 5:1-5:59
- Xi Peng, Yunfan Li, Ivor W. Tsang, Hongyuan Zhu, Jiancheng Lv, Joey Tianyi Zhou: XAI Beyond Classification: Interpretable Neural Clustering. 6:1-6:28
- Justin D. Silverman, Kimberly Roche, Zachary C. Holmes, Lawrence A. David, Sayan Mukherjee: Bayesian Multinomial Logistic Normal Models through Marginally Latent Matrix-T Processes. 7:1-7:42
- Michael Fairbank, Spyridon Samothrakis, Luca Citi: Deep Learning in Target Space. 8:1-8:46
- Utkarsh Sharma, Jared Kaplan: Scaling Laws from the Data Manifold Dimension. 9:1-9:34
- Florentina Bunea, Seth Strimas-Mackey, Marten H. Wegkamp: Interpolating Predictors in High-Dimensional Factor Regression. 10:1-10:60
- Ali Devran Kara, Serdar Yüksel: Near Optimality of Finite Memory Feedback Policies in Partially Observed Markov Decision Processes. 11:1-11:46
- Jayakumar Subramanian, Amit Sinha, Raihan Seraj, Aditya Mahajan: Approximate Information State for Approximate Planning and Reinforcement Learning in Partially Observed Systems. 12:1-12:83
- Dimitris Bertsimas, Ryan Cory-Wright, Jean Pauphilet: Solving Large-Scale Sparse PCA to Certifiable (Near) Optimality. 13:1-13:35
- Sarbojit Roy, Soham Sarkar, Subhajit Dutta, Anil Kumar Ghosh: On Generalizations of Some Distance Based Classifiers for HDLSS Data. 14:1-14:41
- Alasdair Paren, Leonard Berrada, Rudra P. K. Poudel, M. Pawan Kumar: A Stochastic Bundle Method for Interpolation. 15:1-15:57
- Kaixuan Wei, Angelica I. Avilés-Rivero, Jingwei Liang, Ying Fu, Hua Huang, Carola-Bibiane Schönlieb: TFPnP: Tuning-free Plug-and-Play Proximal Algorithms with Applications to Inverse Imaging Problems. 16:1-16:48
- Michele Peruzzi, David B. Dunson: Spatial Multivariate Trees for Big Data Bayesian Regression. 17:1-17:40
- Xuebin Zheng, Bingxin Zhou, Yu Guang Wang, Xiaosheng Zhuang: Decimated Framelet System on Graphs and Fast G-Framelet Transforms. 18:1-18:68
- Oxana A. Manita, Mark A. Peletier, Jacobus W. Portegies, Jaron Sanders, Albert Senen-Cerda: Universal Approximation in Dropout Neural Networks. 19:1-19:46
- Tomojit Ghosh, Michael Kirby: Supervised Dimensionality Reduction and Visualization using Centroid-Encoder. 20:1-20:34
- Jakob Drefs, Enrico Guiraud, Jörg Lücke: Evolutionary Variational Optimization of Generative Models. 21:1-21:51
- Ali Eshragh, Fred Roosta, Asef Nazari, Michael W. Mahoney: LSAR: Efficient Leverage Score Sampling Algorithm for the Analysis of Big Time Series Data. 22:1-22:36
- Yuangang Pan, Ivor W. Tsang, Weijie Chen, Gang Niu, Masashi Sugiyama: Fast and Robust Rank Aggregation against Model Misspecification. 23:1-23:35
- Derek Driggs, Jingwei Liang, Carola-Bibiane Schönlieb: On Biased Stochastic Gradient Estimation. 24:1-24:43
- Maxime Vono, Daniel Paulin, Arnaud Doucet: Efficient MCMC Sampling with Dimension-Free Convergence Rate using ADMM-type Splitting. 25:1-25:69
- Emir Demirovic, Anna Lukina, Emmanuel Hebrard, Jeffrey Chan, James Bailey, Christopher Leckie, Kotagiri Ramamohanarao, Peter J. Stuckey: MurTree: Optimal Decision Trees via Dynamic Programming and Search. 26:1-26:47
- Narayana Santhanam, Venkatachalam Anantharam, Wojciech Szpankowski: Data-Derived Weak Universal Consistency. 27:1-27:55
- Mohammed Rayyan Sheriff, Debasish Chatterjee: Novel Min-Max Reformulations of Linear Inverse Problems. 28:1-28:46
- Kaiyi Ji, Junjie Yang, Yingbin Liang: Theoretical Convergence of Multi-Step Model-Agnostic Meta-Learning. 29:1-29:41
- Augusto Fasano, Daniele Durante: A Class of Conjugate Priors for Multinomial Probit Models which Includes the Multivariate Normal One. 30:1-30:26
- Jaouad Mourtada, Stéphane Gaïffas: An improper estimator with optimal excess risk in misspecified density estimation and logistic regression. 31:1-31:49
- Horia Mania, Michael I. Jordan, Benjamin Recht: Active Learning for Nonlinear System Identification with Guarantees. 32:1-32:30
- Tri M. Le, Bertrand S. Clarke: Model Averaging Is Asymptotically Better Than Model Selection For Prediction. 33:1-33:53
- Weijing Tang, Jiaqi Ma, Qiaozhu Mei, Ji Zhu: SODEN: A Scalable Continuous-Time Survival Model through Ordinary Differential Equation Networks. 34:1-34:29
- Guojun Zhang, Pascal Poupart, Yaoliang Yu: Optimality and Stability in Non-Convex Smooth Games. 35:1-35:71
- Feihu Huang, Shangqian Gao, Jian Pei, Heng Huang: Accelerated Zeroth-Order and First-Order Momentum Methods from Mini to Minimax Optimization. 36:1-36:70
- Matteo Pegoraro, Mario Beraha: Projected Statistical Methods for Distributional Data on the Real Line with the Wasserstein Metric. 37:1-37:59
- Lorenzo Pacchiardi, Ritabrata Dutta: Score Matched Neural Exponential Families for Likelihood-Free Inference. 38:1-38:71
- Jeremiah Birrell, Paul Dupuis, Markos A. Katsoulakis, Yannis Pantazis, Luc Rey-Bellet: (f, Gamma)-Divergences: Interpolating between f-Divergences and Integral Probability Metrics. 39:1-39:70
- Nikita Puchkin, Vladimir G. Spokoiny: Structure-adaptive Manifold Estimation. 40:1-40:62
- Timothy I. Cannings, Yingying Fan: The correlation-assisted missing data estimator. 41:1-41:49
- Zhong Li, Jiequn Han, Weinan E, Qianxiao Li: Approximation and Optimization Theory for Linear Continuous-Time Recurrent Neural Networks. 42:1-42:85
- Rory Mitchell, Joshua Cooper, Eibe Frank, Geoffrey Holmes: Sampling Permutations for Shapley Value Estimation. 43:1-43:46
- Si Liu, Risheek Garrepalli, Dan Hendrycks, Alan Fern, Debashis Mondal, Thomas G. Dietterich: PAC Guarantees and Effective Algorithms for Detecting Novel Categories. 44:1-44:47
- Kevin O'Connor, Kevin McGoff, Andrew B. Nobel: Optimal Transport for Stationary Markov Chains via Policy Iteration. 45:1-45:52
- Wanrong Zhu, Zhipeng Lou, Wei Biao Wu: Beyond Sub-Gaussian Noises: Sharp Concentration Analysis for Stochastic Gradient Descent. 46:1-46:22
- Jonathan Ho, Chitwan Saharia, William Chan, David J. Fleet, Mohammad Norouzi, Tim Salimans: Cascaded Diffusion Models for High Fidelity Image Generation. 47:1-47:33
- Zhiyan Ding, Shi Chen, Qin Li, Stephen J. Wright: Overparameterization of Deep ResNet: Zero Loss and Mean-field Analysis. 48:1-48:65
- Xinyi Wang, Lang Tong: Innovations Autoencoder and its Application in One-class Anomalous Sequence Detection. 49:1-49:27
- Luong Ha Nguyen, James-A. Goulet: Analytically Tractable Hidden-States Inference in Bayesian Neural Networks. 50:1-50:33
- Dominique Benielli, Baptiste Bauvin, Sokol Koço, Riikka Huusari, Cécile Capponi, Hachem Kadri, François Laviolette: Toolbox for Multimodal Learn (scikit-multimodallearn). 51:1-51:7
- Zijun Gao, Trevor Hastie: LinCDE: Conditional Density Estimation via Lindsey's Method. 52:1-52:55
- Philipp Bach, Victor Chernozhukov, Malte S. Kurz, Martin Spindler: DoubleML - An Object-Oriented Implementation of Double Machine Learning in Python. 53:1-53:6
- Marius Lindauer, Katharina Eggensperger, Matthias Feurer, André Biedenkapp, Difan Deng, Carolin Benjamins, Tim Ruhkopf, René Sass, Frank Hutter: SMAC3: A Versatile Bayesian Optimization Package for Hyperparameter Optimization. 54:1-54:9
- Terrance D. Savitsky, Matthew R. Williams, Jingchen Hu: Bayesian Pseudo Posterior Mechanism under Asymptotic Differential Privacy. 55:1-55:37
- Victor Guilherme Turrisi da Costa, Enrico Fini, Moin Nabi, Nicu Sebe, Elisa Ricci: solo-learn: A Library of Self-supervised Methods for Visual Representation Learning. 56:1-56:6
- Han Zhao, Geoffrey J. Gordon: Inherent Tradeoffs in Learning Fair Representations. 57:1-57:26
- Craig M. Lewis, Francesco Grossetti: A Statistical Approach for Optimal Topic Model Identification. 58:1-58:20
- Carlos Fernández-Loría, Foster J. Provost: Causal Classification: Treatment Effect Estimation vs. Outcome Prediction. 59:1-59:35
- Xun Zhang, William B. Haskell, Zhisheng Ye: A Unifying Framework for Variance-Reduced Algorithms for Finding Zeroes of Monotone Operators. 60:1-60:44
- Hengrui Luo, Giovanni Nattino, Matthew T. Pratola: Sparse Additive Gaussian Process Regression. 61:1-61:34
- Manfred Jaeger: The AIM and EM Algorithms for Learning from Coarse Data. 62:1-62:55
- Ben Sherwood, Adam Maidman: Additive nonlinear quantile regression in ultra-high dimension. 63:1-63:47
- Abhishek Roy, Krishnakumar Balasubramanian, Saeed Ghadimi, Prasant Mohapatra: Stochastic Zeroth-Order Optimization under Nonstationarity and Nonconvexity. 64:1-64:47
- Tianyi Lin, Nhat Ho, Marco Cuturi, Michael I. Jordan: On the Complexity of Approximating Multimarginal Optimal Transport. 65:1-65:43
- Aaron J. Molstad: New Insights for the Multivariate Square-Root Lasso. 66:1-66:52
- Chiyuan Zhang, Samy Bengio, Yoram Singer: Are All Layers Created Equal? 67:1-67:28
- Wei Zhu, Qiang Qiu, A. Robert Calderbank, Guillermo Sapiro, Xiuyuan Cheng: Scaling-Translation-Equivariant Networks with Decomposed Convolutional Filters. 68:1-68:45
- Alex Olshevsky: Asymptotic Network Independence and Step-Size for a Distributed Subgradient Method. 69:1-69:32
- Asad Haris, Noah Simon, Ali Shojaie: Generalized Sparse Additive Models. 70:1-70:56
- Wanjun Liu, Xiufan Yu, Runze Li: Multiple-Splitting Projection Test for High-Dimensional Mean Vectors. 71:1-71:27
- Susanna Lange, Kyle Helfrich, Qiang Ye: Batch Normalization Preconditioning for Neural Network Training. 72:1-72:41
- George Wynne, Andrew B. Duncan: A Kernel Two-Sample Test for Functional Data. 73:1-73:51
- Ba-Hien Tran, Simone Rossi, Dimitrios Milios, Maurizio Filippone: All You Need is a Good Functional Prior for Bayesian Deep Learning. 74:1-74:56
- Gábor Melis, András György, Phil Blunsom: Mutual Information Constraints for Monte-Carlo Objectives to Prevent Posterior Collapse Especially in Language Modelling. 75:1-75:36
- Lilian Besson, Emilie Kaufmann, Odalric-Ambrym Maillard, Julien Seznec: Efficient Change-Point Detection for Tackling Piecewise-Stationary Bandits. 77:1-77:40
- Yu-Guan Hsieh, Franck Iutzeler, Jérôme Malick, Panayotis Mertikopoulos: Multi-Agent Online Optimization with Delays: Asynchronicity, Adaptivity, and Optimism. 78:1-78:49
- Yuling Yao, Aki Vehtari, Andrew Gelman: Stacking for Non-mixing Bayesian Computations: The Curse and Blessing of Multimodal Posteriors. 79:1-79:45
- Marta Catalano, Pierpaolo De Blasi, Antonio Lijoi, Igor Prünster: Posterior Asymptotics for Boosted Hierarchical Dirichlet Process Mixtures. 80:1-80:23
- David G. Harris, Thomas W. Pensyl, Aravind Srinivasan, Khoa Trinh: Dependent randomized rounding for clustering and partition systems with knapsack constraints. 81:1-81:41
- Boxin Zhao, Y. Samuel Wang, Mladen Kolar: FuDGE: A Method to Estimate a Functional Differential Graph in a High-Dimensional Setting. 82:1-82:82
- Yichi Zhang, Molei Liu, Matey Neykov, Tianxi Cai: Prior Adaptive Semi-supervised Learning with Application to EHR Phenotyping. 83:1-83:25
- Rajarshi Guhaniyogi, Cheng Li, Terrance D. Savitsky, Sanvesh Srivastava: Distributed Bayesian Varying Coefficient Modeling Using a Gaussian Process Prior. 84:1-84:59
- Zhanrui Cai, Runze Li, Yaowu Zhang: A Distribution Free Conditional Independence Test with Applications to Causal Discovery. 85:1-85:41
- Chao Shen, Yu-Ting Lin, Hau-Tieng Wu: Robust and scalable manifold learning via landmark diffusion for long-term medical signal processing. 86:1-86:30
- Rafael Izbicki, Gilson Y. Shimizu, Rafael Bassi Stern: CD-split and HPD-split: Efficient Conformal Regions in High Dimensions. 87:1-87:32
- Hongzhi Liu, Yingpeng Du, Zhonghai Wu: Generalized Ambiguity Decomposition for Ranking Ensemble Learning. 88:1-88:36
- Ines Chami, Sami Abu-El-Haija, Bryan Perozzi, Christopher Ré, Kevin Murphy: Machine Learning on Graphs: A Model and Comprehensive Taxonomy. 89:1-89:64
- Xi Chen, Bo Jiang, Tianyi Lin, Shuzhong Zhang: Accelerating Adaptive Cubic Regularization of Newton's Method via Random Sampling. 90:1-90:38
- Eran Malach, Shai Shalev-Shwartz: When Hardness of Approximation Meets Hardness of Learning. 91:1-91:24
- Paz Fink Shustin, Haim Avron: Gauss-Legendre Features for Gaussian Process Regression. 92:1-92:47
- Jakob Raymaekers, Ruben H. Zamar: Regularized K-means Through Hard-Thresholding. 93:1-93:48
- Kweku Abraham, Ismael Castillo, Elisabeth Gassiat: Multiple Testing in Nonparametric Hidden Markov Models: An Empirical Bayes Approach. 94:1-94:57
- Jan Niklas Böhm, Philipp Berens, Dmitry Kobak: Attraction-Repulsion Spectrum in Neighbor Embeddings. 95:1-95:32
- Chunxiao Li, Cynthia Rudin, Tyler H. McCormick: Rethinking Nonlinear Instrumental Variable Models through Prediction Validity. 96:1-96:55
- Daniel Sanz-Alonso, Ruiyi Yang: Unlabeled Data Help in Graph-Based Semi-Supervised Learning: A Bayesian Nonparametrics Perspective. 97:1-97:28
- Hsiang-Fu Yu, Kai Zhong, Jiong Zhang, Wei-Cheng Chang, Inderjit S. Dhillon: PECOS: Prediction for Enormous and Correlated Output Spaces. 98:1-98:32
- Qiong Zhang, Jiahua Chen: Distributed Learning of Finite Gaussian Mixtures. 99:1-99:40
- Hannes Köhler, Andreas Christmann: Total Stability of SVMs and Localized SVMs. 100:1-100:41
- Xiangyu Yang, Jiashan Wang, Hao Wang: Towards An Efficient Approach for the Nonconvex lp Ball Projection: Algorithm and Analysis. 101:1-101:31
- Efstathia Bura, Liliana Forzani, Rodrigo García Arancibia, Pamela Llop, Diego Tomassi: Sufficient reductions in regression with mixed predictors. 102:1-102:47
- Nir Weinberger, Guy Bresler: The EM Algorithm is Adaptively-Optimal for Unbalanced Symmetric Gaussian Mixtures. 103:1-103:79
- F. Richard Guo, Emilija Perkovic: Efficient Least Squares for Estimating Total Effects under Linearity and Causal Sufficiency. 104:1-104:41
- Michael Puthawala, Konik Kothari, Matti Lassas, Ivan Dokmanic, Maarten V. de Hoop: Globally Injective ReLU Networks. 105:1-105:55
- Bokun Wang, Shiqian Ma, Lingzhou Xue: Riemannian Stochastic Proximal Gradient Methods for Nonsmooth Optimization over the Stiefel Manifold. 106:1-106:33
- Christoffer Löffler, Christopher Mutschler: IALE: Imitating Active Learner Ensembles. 107:1-107:29
- Daniel R. Kowal: Bayesian subset selection and variable importance for interpretable prediction and classification. 108:1-108:38
- Kayvan Sadeghi, Terry Soo: Conditions and Assumptions for Constraint-based Causal Structure Learning. 109:1-109:34
- Jun Ho Yoon, Seyoung Kim: EiGLasso for Scalable Sparse Kronecker-Sum Inverse Covariance Estimation. 110:1-110:39
- Masaaki Imaizumi, Kenji Fukumizu: Advantage of Deep Neural Networks for Estimating Functions with Singularity on Hypersurfaces. 111:1-111:54
- Shu Hu, Yiming Ying, Xin Wang, Siwei Lyu: Sum of Ranked Range Loss for Supervised Learning. 112:1-112:44
- José Correa, Andrés Cristi, Boris Epstein, José A. Soto: The Two-Sided Game of Googol. 113:1-113:37
- Kwan Ho Ryan Chan, Yaodong Yu, Chong You, Haozhi Qi, John Wright, Yi Ma: ReduNet: A White-box Deep Network from the Principle of Maximizing Rate Reduction. 114:1-114:103
- Linh Tran, Maja Pantic, Marc Peter Deisenroth: Cauchy-Schwarz Regularized Autoencoder. 115:1-115:37
- Jian Huang, Yuling Jiao, Zhen Li, Shiao Liu, Yang Wang, Yunfei Yang: An Error Analysis of Generative Adversarial Networks for Learning Distributions. 116:1-116:43
- Chelsea Sidrane, Amir Maleki, Ahmed Irfan, Mykel J. Kochenderfer: OVERT: An Algorithm for Safety Verification of Neural Network Control Policies for Nonlinear Systems. 117:1-117:45
- Hanyuan Hang, Yuchao Cai, Hanfang Yang, Zhouchen Lin: Under-bagging Nearest Neighbors for Imbalanced Classification. 118:1-118:63
- Lei Wu, Jihao Long: A spectral-based analysis of the separation between two-layer neural networks and linear methods. 119:1-119:34
- William Fedus, Barret Zoph, Noam Shazeer: Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity. 120:1-120:39
- Huang Fang, Nicholas J. A. Harvey, Victor S. Portella, Michael P. Friedlander: Online Mirror Descent and Dual Averaging: Keeping Pace in the Dynamic Case. 121:1-121:38
- Luca Venturi, Samy Jelassi, Tristan Ozuch, Joan Bruna: Depth separation beyond radial functions. 122:1-122:56
- Jian-Feng Cai, Jingyang Li, Dong Xia: Provable Tensor-Train Format Tensor Completion by Riemannian Optimization. 123:1-123:77
- Julien Herzen, Francesco Lässig, Samuele Giuliano Piazzetta, Thomas Neuer, Léo Tafti, Guillaume Raille, Tomas Van Pottelbergh, Marek Pasieka, Andrzej Skrodzki, Nicolas Huguenin, Maxime Dumonal, Jan Koscisz, Dennis Bader, Frédérick Gusset, Mounir Benheddi, Camila Williamson, Michal Kosinski, Matej Petrik, Gaël Grosch: Darts: User-Friendly Modern Machine Learning for Time Series. 124:1-124:6
- Niladri S. Chatterji, Philip M. Long: Foolish Crowds Support Benign Overfitting. 125:1-125:12
- Sreejith Sreekumar, Ziv Goldfeld: Neural Estimation of Statistical Divergences. 126:1-126:75
- Haoyuan Chen, Liang Ding, Rui Tuo: Kernel Packet: An Exact and Scalable Algorithm for Gaussian Process Regression with Matérn Correlations. 127:1-127:32
- Jiaoyang Huang, Daniel Zhengyu Huang, Qing Yang, Guang Cheng: Power Iteration for Tensor PCA. 128:1-128:47
- Washim Uddin Mondal, Mridul Agarwal, Vaneet Aggarwal, Satish V. Ukkusuri: On the Approximation of Cooperative Heterogeneous Multi-Agent Reinforcement Learning (MARL) using Mean Field Control (MFC). 129:1-129:46