Aldo Pacchiano
2020 – today
- 2024
- [j2] Jonathan Lee, Weihao Kong, Aldo Pacchiano, Vidya Muthukumar, Emma Brunskill: Estimating Optimal Policy Value in Linear Contextual Bandits Beyond Gaussianity. Trans. Mach. Learn. Res. 2024 (2024)
- [c50] Christoph Dann, Claudio Gentile, Aldo Pacchiano: Data-Driven Online Model Selection With Regret Guarantees. AISTATS 2024: 1531-1539
- [c49] Sinong Geng, Aldo Pacchiano, Andrey Kolobov, Ching-An Cheng: Improving Offline RL by Blending Heuristics. ICLR 2024
- [c48] Dipendra Misra, Aldo Pacchiano, Robert E. Schapire: Provable Interactive Learning with Hindsight Instruction Feedback. ICML 2024
- [e1] Reneta P. Barneva, Valentin E. Brimkov, Claudio Gentile, Aldo Pacchiano: Artificial Intelligence and Image Analysis - 18th International Symposium on Artificial Intelligence and Mathematics, ISAIM 2024, and 22nd International Workshop on Combinatorial Image Analysis, IWCIA 2024, Fort Lauderdale, FL, USA, January 8-10, 2024, Revised Selected Papers. Lecture Notes in Computer Science 14494, Springer 2024, ISBN 978-3-031-63734-6
- [i65] Aldo Pacchiano, Jonathan N. Lee, Emma Brunskill: Experiment Planning with Function Approximation. CoRR abs/2401.05193 (2024)
- [i64] Aldo Pacchiano, Mohammad Ghavamzadeh, Peter L. Bartlett: Contextual Bandits with Stage-wise Constraints. CoRR abs/2401.08016 (2024)
- [i63] Chinmaya Kausik, Mirco Mutti, Aldo Pacchiano, Ambuj Tewari: A Framework for Partially Observed Reward-States in RLHF. CoRR abs/2402.03282 (2024)
- [i62] Nirjhar Das, Souradip Chakraborty, Aldo Pacchiano, Sayak Ray Chowdhury: Provably Sample Efficient RLHF via Active Preference Optimization. CoRR abs/2402.10500 (2024)
- [i61] Yilei Chen, Aldo Pacchiano, Ioannis Ch. Paschalidis: Multiple-policy Evaluation via Density Estimation. CoRR abs/2404.00195 (2024)
- [i60] Dipendra Misra, Aldo Pacchiano, Robert E. Schapire: Provable Interactive Learning with Hindsight Instruction Feedback. CoRR abs/2404.09123 (2024)
- [i59] Aida Afshar, Aldo Pacchiano: Learning Rate-Free Reinforcement Learning: A Case for Model Selection with Non-Stationary Objectives. CoRR abs/2408.04046 (2024)
- [i58] Aldo Pacchiano: Second Order Bounds for Contextual Bandits with Function Approximation. CoRR abs/2409.16197 (2024)
- [i57] Mingyu Chen, Aldo Pacchiano, Xuezhou Zhang: State-free Reinforcement Learning. CoRR abs/2409.18439 (2024)
- [i56] Chen Bo Calvin Zhang, Zhang-Wei Hong, Aldo Pacchiano, Pulkit Agrawal: ORSO: Accelerating Reward Design via Online Reward Selection and Policy Optimization. CoRR abs/2410.13837 (2024)
- 2023
- [c47] Aadirupa Saha, Aldo Pacchiano, Jonathan Lee: Dueling RL: Reinforcement Learning with Trajectory Preferences. AISTATS 2023: 6263-6289
- [c46] Aldo Pacchiano, Peter L. Bartlett, Michael I. Jordan: An Instance-Dependent Analysis for the Cooperative Multi-Player Multi-Armed Bandit. ALT 2023: 1166-1215
- [c45] Aldo Pacchiano, Drausin Wulsin, Robert A. Barton, Luis F. Voloch: Neural Design for Genetic Perturbation Experiments. ICLR 2023
- [c44] Andrew Wagenmaker, Aldo Pacchiano: Leveraging Offline Data in Online Reinforcement Learning. ICML 2023: 35300-35338
- [c43] Jonathan Lee, Annie Xie, Aldo Pacchiano, Yash Chandak, Chelsea Finn, Ofir Nachum, Emma Brunskill: Supervised Pretraining Can Learn In-Context Reinforcement Learning. NeurIPS 2023
- [c42] Nataly Brukhim, Miro Dudík, Aldo Pacchiano, Robert E. Schapire: A Unified Model and Dimension for Interactive Estimation. NeurIPS 2023
- [c41] Parnian Kassraie, Nicolas Emmenegger, Andreas Krause, Aldo Pacchiano: Anytime Model Selection in Linear Bandits. NeurIPS 2023
- [c40] Aldo Pacchiano, Jonathan Lee, Emma Brunskill: Experiment Planning with Function Approximation. NeurIPS 2023
- [i55] Jonathan N. Lee, Weihao Kong, Aldo Pacchiano, Vidya Muthukumar, Emma Brunskill: Estimating Optimal Policy Value in General Linear Contextual Bandits. CoRR abs/2302.09451 (2023)
- [i54] Sinong Geng, Aldo Pacchiano, Andrey Kolobov, Ching-An Cheng: Improving Offline RL by Blending Heuristics. CoRR abs/2306.00321 (2023)
- [i53] Aldo Pacchiano, Christoph Dann, Claudio Gentile: Data-Driven Regret Balancing for Online Model Selection in Bandits. CoRR abs/2306.02869 (2023)
- [i52] Nataly Brukhim, Miroslav Dudík, Aldo Pacchiano, Robert E. Schapire: A Unified Model and Dimension for Interactive Estimation. CoRR abs/2306.06184 (2023)
- [i51] Jonathan N. Lee, Annie Xie, Aldo Pacchiano, Yash Chandak, Chelsea Finn, Ofir Nachum, Emma Brunskill: Supervised Pretraining Can Learn In-Context Reinforcement Learning. CoRR abs/2306.14892 (2023)
- [i50] Parnian Kassraie, Aldo Pacchiano, Nicolas Emmenegger, Andreas Krause: Anytime Model Selection in Linear Bandits. CoRR abs/2307.12897 (2023)
- [i49] Elena Gal, Shaun Singh, Aldo Pacchiano, Ben Walker, Terry J. Lyons, Jakob N. Foerster: Unbiased Decisions Reduce Regret: Adversarial Domain Adaptation for the Bank Loan Problem. CoRR abs/2308.08051 (2023)
- 2022
- [c39] Robert Müller, Aldo Pacchiano: Meta Learning MDPs with linear transition models. AISTATS 2022: 5928-5948
- [c38] Ted Moskovitz, Michael Arbel, Jack Parker-Holder, Aldo Pacchiano: Towards an Understanding of Default Policies in Multitask Policy Optimization. AISTATS 2022: 10661-10686
- [c37] Tianyi Lin, Aldo Pacchiano, Yaodong Yu, Michael I. Jordan: Online Nonsubmodular Minimization with Delayed Costs: From Full Information to Bandit Feedback. ICML 2022: 13441-13467
- [c36] Abhishek Gupta, Aldo Pacchiano, Yuexiang Zhai, Sham M. Kakade, Sergey Levine: Unpacking Reward Shaping: Understanding the Benefits of Reward Engineering on Sample Complexity. NeurIPS 2022
- [c35] Aldo Pacchiano, Christoph Dann, Claudio Gentile: Best of Both Worlds Model Selection. NeurIPS 2022
- [c34] Yingchen Xu, Jack Parker-Holder, Aldo Pacchiano, Philip J. Ball, Oleh Rybkin, Stephen Roberts, Tim Rocktäschel, Edward Grefenstette: Learning General World Models in a Handful of Reward-Free Deployments. NeurIPS 2022
- [i48] Robert Müller, Aldo Pacchiano: Meta Learning MDPs with Linear Transition Models. CoRR abs/2201.08732 (2022)
- [i47] Tianyi Lin, Aldo Pacchiano, Yaodong Yu, Michael I. Jordan: Online Nonsubmodular Minimization with Delayed Costs: From Full Information to Bandit Feedback. CoRR abs/2205.07217 (2022)
- [i46] Aldo Pacchiano, Ofir Nachum, Nilesh Tripuraneni, Peter L. Bartlett: Joint Representation Training in Sequential Tasks with Shared Structure. CoRR abs/2206.12441 (2022)
- [i45] Aldo Pacchiano, Christoph Dann, Claudio Gentile: Best of Both Worlds Model Selection. CoRR abs/2206.14912 (2022)
- [i44] Aldo Pacchiano, Drausin Wulsin, Robert A. Barton, Luis F. Voloch: Neural Design for Genetic Perturbation Experiments. CoRR abs/2207.12805 (2022)
- [i43] Abhishek Gupta, Aldo Pacchiano, Yuexiang Zhai, Sham M. Kakade, Sergey Levine: Unpacking Reward Shaping: Understanding the Benefits of Reward Engineering on Sample Complexity. CoRR abs/2210.09579 (2022)
- [i42] Yingchen Xu, Jack Parker-Holder, Aldo Pacchiano, Philip J. Ball, Oleh Rybkin, Stephen J. Roberts, Tim Rocktäschel, Edward Grefenstette: Learning General World Models in a Handful of Reward-Free Deployments. CoRR abs/2210.12719 (2022)
- [i41] Andrew Wagenmaker, Aldo Pacchiano: Leveraging Offline Data in Online Reinforcement Learning. CoRR abs/2211.04974 (2022)
- [i40] Abhi Gupta, Ted Moskovitz, David Alvarez-Melis, Aldo Pacchiano: Transfer RL via the Undo Maps Formalism. CoRR abs/2211.14469 (2022)
- 2021
- [b1] Aldo Pacchiano: Model Selection for Contextual Bandits and Reinforcement Learning. University of California, Berkeley, USA, 2021
- [c33] Aldo Pacchiano, Heinrich Jiang, Michael I. Jordan: Robustness Guarantees for Mode Estimation with an Application to Bandits. AAAI 2021: 9277-9284
- [c32] Heinrich Jiang, Qijia Jiang, Aldo Pacchiano: Learning the Truth From Only One Side of the Story. AISTATS 2021: 2413-2421
- [c31] Aldo Pacchiano, Mohammad Ghavamzadeh, Peter L. Bartlett, Heinrich Jiang: Stochastic Bandits with Linear Constraints. AISTATS 2021: 2827-2835
- [c30] Jonathan N. Lee, Aldo Pacchiano, Vidya Muthukumar, Weihao Kong, Emma Brunskill: Online Model Selection for Reinforcement Learning with Function Approximation. AISTATS 2021: 3340-3348
- [c29] Ashok Cutkosky, Christoph Dann, Abhimanyu Das, Claudio Gentile, Aldo Pacchiano, Manish Purohit: Dynamic Balancing for Model Selection in Bandits and RL. ICML 2021: 2276-2285
- [c28] Dhruv Malik, Aldo Pacchiano, Vishwak Srinivasan, Yuanzhi Li: Sample Efficient Reinforcement Learning In Continuous State Spaces: A Perspective Beyond Linearity. ICML 2021: 7412-7422
- [c27] Aldo Pacchiano, Jonathan N. Lee, Peter L. Bartlett, Ofir Nachum: Near Optimal Policy Optimization via REPS. NeurIPS 2021: 1100-1110
- [c26] Niladri S. Chatterji, Aldo Pacchiano, Peter L. Bartlett, Michael I. Jordan: On the Theory of Reinforcement Learning with Once-per-Episode Feedback. NeurIPS 2021: 3401-3412
- [c25] Aldo Pacchiano, Shaun Singh, Edward Chou, Alexander C. Berg, Jakob N. Foerster: Neural Pseudo-Label Optimism for the Bank Loan Problem. NeurIPS 2021: 6580-6593
- [c24] Ted Moskovitz, Jack Parker-Holder, Aldo Pacchiano, Michael Arbel, Michael I. Jordan: Tactical Optimism and Pessimism for Deep Reinforcement Learning. NeurIPS 2021: 12849-12863
- [c23] Matteo Papini, Andrea Tirinzoni, Aldo Pacchiano, Marcello Restelli, Alessandro Lazaric, Matteo Pirotta: Reinforcement Learning in Linear MDPs: Constant Regret and Representation Selection. NeurIPS 2021: 16371-16383
- [c22] Aldo Pacchiano, Philip J. Ball, Jack Parker-Holder, Krzysztof Choromanski, Stephen Roberts: Towards tractable optimism in model-based reinforcement learning. UAI 2021: 1413-1423
- [i39] Silvia Chiappa, Aldo Pacchiano: Fairness with Continuous Optimal Transport. CoRR abs/2101.02084 (2021)
- [i38] Xingyou Song, Krzysztof Choromanski, Jack Parker-Holder, Yunhao Tang, Daiyi Peng, Deepali Jain, Wenbo Gao, Aldo Pacchiano, Tamás Sarlós, Yuxiang Yang: ES-ENAS: Combining Evolution Strategies with Neural Architecture Search at No Extra Cost for Reinforcement Learning. CoRR abs/2101.07415 (2021)
- [i37] Ted Moskovitz, Jack Parker-Holder, Aldo Pacchiano, Michael Arbel: Deep Reinforcement Learning with Dynamic Optimism. CoRR abs/2102.03765 (2021)
- [i36] Krzysztof Choromanski, Deepali Jain, Jack Parker-Holder, Xingyou Song, Valerii Likhosherstov, Anirban Santara, Aldo Pacchiano, Yunhao Tang, Adrian Weller: Unlocking Pixels for Reinforcement Learning via Implicit Attention. CoRR abs/2102.04353 (2021)
- [i35] Aldo Pacchiano, Jonathan N. Lee, Peter L. Bartlett, Ofir Nachum: Near Optimal Policy Optimization via REPS. CoRR abs/2103.09756 (2021)
- [i34] Jeffrey Chan, Aldo Pacchiano, Nilesh Tripuraneni, Yun S. Song, Peter L. Bartlett, Michael I. Jordan: Parallelizing Contextual Linear Bandits. CoRR abs/2105.10590 (2021)
- [i33] Niladri S. Chatterji, Aldo Pacchiano, Peter L. Bartlett, Michael I. Jordan: On the Theory of Reinforcement Learning with Once-per-Episode Feedback. CoRR abs/2105.14363 (2021)
- [i32] Dhruv Malik, Aldo Pacchiano, Vishwak Srinivasan, Yuanzhi Li: Sample Efficient Reinforcement Learning In Continuous State Spaces: A Perspective Beyond Linearity. CoRR abs/2106.07814 (2021)
- [i31] Matteo Papini, Andrea Tirinzoni, Aldo Pacchiano, Marcello Restelli, Alessandro Lazaric, Matteo Pirotta: Reinforcement Learning in Linear MDPs: Constant Regret and Representation Selection. CoRR abs/2110.14798 (2021)
- [i30] Ted Moskovitz, Michael Arbel, Jack Parker-Holder, Aldo Pacchiano: Towards an Understanding of Default Policies in Multitask Policy Optimization. CoRR abs/2111.02994 (2021)
- [i29] Aldo Pacchiano, Aadirupa Saha, Jonathan Lee: Dueling RL: Reinforcement Learning with Trajectory Preferences. CoRR abs/2111.04850 (2021)
- [i28] Aldo Pacchiano, Peter L. Bartlett, Michael I. Jordan: An Instance-Dependent Analysis for the Cooperative Multi-Player Multi-Armed Bandit. CoRR abs/2111.04873 (2021)
- [i27] Aldo Pacchiano, Shaun Singh, Edward Chou, Alexander C. Berg, Jakob N. Foerster: Neural Pseudo-Label Optimism for the Bank Loan Problem. CoRR abs/2112.02185 (2021)
- 2020
- [c21] Silvia Chiappa, Ray Jiang, Tom Stepleton, Aldo Pacchiano, Heinrich Jiang, John Aslanides: A General Approach to Fairness with Optimal Transport. AAAI 2020: 3633-3640
- [c20] Krzysztof Choromanski, Aldo Pacchiano, Jack Parker-Holder, Yunhao Tang: Practical Nonisotropic Monte Carlo Sampling in High Dimensions via Determinantal Point Processes. AISTATS 2020: 1363-1374
- [c19] Jonathan N. Lee, Aldo Pacchiano, Michael I. Jordan: Convergence Rates of Smooth Message Passing with Rounding in Entropy-Regularized MAP Inference. AISTATS 2020: 3003-3014
- [c18] Xingyou Song, Wenbo Gao, Yuxiang Yang, Krzysztof Choromanski, Aldo Pacchiano, Yunhao Tang: ES-MAML: Simple Hessian-Free Meta Learning. ICLR 2020
- [c17] Philip J. Ball, Jack Parker-Holder, Aldo Pacchiano, Krzysztof Choromanski, Stephen J. Roberts: Ready Policy One: World Building Through Active Learning. ICML 2020: 591-601
- [c16] Krzysztof Choromanski, David Cheikhi, Jared Davis, Valerii Likhosherstov, Achille Nazaret, Achraf Bahamou, Xingyou Song, Mrugank Akarte, Jack Parker-Holder, Jacob Bergquist, Yuan Gao, Aldo Pacchiano, Tamás Sarlós, Adrian Weller, Vikas Sindhwani: Stochastic Flows and Geometric Optimization on the Orthogonal Group. ICML 2020: 1918-1928
- [c15] Jonathan N. Lee, Aldo Pacchiano, Peter L. Bartlett, Michael I. Jordan: Accelerated Message Passing for Entropy-Regularized MAP Inference. ICML 2020: 5736-5746
- [c14] Eric Mazumdar, Aldo Pacchiano, Yi-An Ma, Michael I. Jordan, Peter L. Bartlett: On Approximate Thompson Sampling with Langevin Algorithms. ICML 2020: 6797-6807
- [c13] Aldo Pacchiano, Jack Parker-Holder, Yunhao Tang, Krzysztof Choromanski, Anna Choromanska, Michael I. Jordan: Learning to Score Behaviors for Guided Policy Optimization. ICML 2020: 7445-7454
- [c12] Aldo Pacchiano, My Phan, Yasin Abbasi-Yadkori, Anup Rao, Julian Zimmert, Tor Lattimore, Csaba Szepesvári: Model Selection in Contextual Stochastic Bandit Problems. NeurIPS 2020
- [c11] Jack Parker-Holder, Luke Metz, Cinjon Resnick, Hengyuan Hu, Adam Lerer, Alistair Letcher, Alexander Peysakhovich, Aldo Pacchiano, Jakob N. Foerster: Ridge Rider: Finding Diverse Solutions by Following Eigenvectors of the Hessian. NeurIPS 2020
- [c10] Jack Parker-Holder, Aldo Pacchiano, Krzysztof Marcin Choromanski, Stephen J. Roberts: Effective Diversity in Population Based Reinforcement Learning. NeurIPS 2020
- [i26] Jack Parker-Holder, Aldo Pacchiano, Krzysztof Choromanski, Stephen Roberts: Effective Diversity in Population-Based Reinforcement Learning. CoRR abs/2002.00632 (2020)
- [i25] Philip J. Ball, Jack Parker-Holder, Aldo Pacchiano, Krzysztof Choromanski, Stephen Roberts: Ready Policy One: World Building Through Active Learning. CoRR abs/2002.02693 (2020)
- [i24] Eric Mazumdar, Aldo Pacchiano, Yi-An Ma, Peter L. Bartlett, Michael I. Jordan: On Thompson Sampling with Langevin Algorithms. CoRR abs/2002.10002 (2020)
- [i23] Aldo Pacchiano, My Phan, Yasin Abbasi-Yadkori, Anup Rao, Julian Zimmert, Tor Lattimore, Csaba Szepesvári: Model Selection in Contextual Stochastic Bandit Problems. CoRR abs/2003.01704 (2020)
- [i22] Aldo Pacchiano, Heinrich Jiang, Michael I. Jordan: Robustness Guarantees for Mode Estimation with an Application to Bandits. CoRR abs/2003.02932 (2020)
- [i21] Krzysztof Choromanski, David Cheikhi, Jared Davis, Valerii Likhosherstov, Achille Nazaret, Achraf Bahamou, Xingyou Song, Mrugank Akarte, Jack Parker-Holder, Jacob Bergquist, Yuan Gao, Aldo Pacchiano, Tamás Sarlós, Adrian Weller, Vikas Sindhwani: Stochastic Flows and Geometric Optimization on the Orthogonal Group. CoRR abs/2003.13563 (2020)
- [i20] Heinrich Jiang, Qijia Jiang, Aldo Pacchiano: Learning the Truth From Only One Side of the Story. CoRR abs/2006.04858 (2020)
- [i19] Yasin Abbasi-Yadkori, Aldo Pacchiano, My Phan: Regret Balancing for Bandit and RL Model Selection. CoRR abs/2006.05491 (2020)
- [i18] Aldo Pacchiano, Mohammad Ghavamzadeh, Peter L. Bartlett, Heinrich Jiang: Stochastic Bandits with Linear Constraints. CoRR abs/2006.10185 (2020)
- [i17] Aldo Pacchiano, Philip J. Ball, Jack Parker-Holder, Krzysztof Choromanski, Stephen Roberts: On Optimism in Model-Based Reinforcement Learning. CoRR abs/2006.11911 (2020)
- [i16] Jonathan N. Lee, Aldo Pacchiano, Peter L. Bartlett, Michael I. Jordan: Accelerated Message Passing for Entropy-Regularized MAP Inference. CoRR abs/2007.00699 (2020)
- [i15] Jack Parker-Holder, Luke Metz, Cinjon Resnick, Hengyuan Hu, Adam Lerer, Alistair Letcher, Alex Peysakhovich, Aldo Pacchiano, Jakob N. Foerster: Ridge Rider: Finding Diverse Solutions by Following Eigenvectors of the Hessian. CoRR abs/2011.06505 (2020)
- [i14] Jonathan N. Lee, Aldo Pacchiano, Vidya Muthukumar, Weihao Kong, Emma Brunskill: Online Model Selection for Reinforcement Learning with Function Approximation. CoRR abs/2011.09750 (2020)
- [i13] Aldo Pacchiano, Christoph Dann, Claudio Gentile, Peter L. Bartlett: Regret Bound Balancing and Elimination for Model Selection in Bandits and RL. CoRR abs/2012.13045 (2020)
2010 – 2019
- 2019
- [c9] Krzysztof Choromanski, Aldo Pacchiano, Jeffrey Pennington, Yunhao Tang: KAMA-NNs: Low-dimensional Rotation Based Neural Networks. AISTATS 2019: 236-245
- [c8] Aldo Pacchiano, Yoram Bachrach: Computing Stable Solutions in Threshold Network Flow Games With Bounded Treewidth. AAMAS 2019: 2153-2155
- [c7] Krzysztof Choromanski, Aldo Pacchiano, Jack Parker-Holder, Yunhao Tang, Deepali Jain, Yuxiang Yang, Atil Iscen, Jasmine Hsu, Vikas Sindhwani: Provably Robust Blackbox Optimization for Reinforcement Learning. CoRL 2019: 683-696
- [c6] Niladri S. Chatterji, Aldo Pacchiano, Peter L. Bartlett: Online learning with kernel losses. ICML 2019: 971-980
- [c5] Krzysztof Choromanski, Aldo Pacchiano, Jack Parker-Holder, Yunhao Tang, Vikas Sindhwani: From Complexity to Simplicity: Adaptive ES-Active Subspaces for Blackbox Optimization. NeurIPS 2019: 10299-10309
- [c4] Ray Jiang, Aldo Pacchiano, Tom Stepleton, Heinrich Jiang, Silvia Chiappa: Wasserstein Fair Classification. UAI 2019: 862-872
- [i12] Krzysztof Choromanski, Aldo Pacchiano, Jack Parker-Holder, Jasmine Hsu, Atil Iscen, Deepali Jain, Vikas Sindhwani: When random search is not enough: Sample-Efficient and Noise-Robust Blackbox Optimization of RL Policies. CoRR abs/1903.02993 (2019)
- [i11] Krzysztof Choromanski, Aldo Pacchiano, Jack Parker-Holder, Yunhao Tang: Adaptive Sample-Efficient Blackbox Optimization via ES-active Subspaces. CoRR abs/1903.04268 (2019)
- [i10] Krzysztof Choromanski, Aldo Pacchiano, Jack Parker-Holder, Yunhao Tang: Structured Monte Carlo Sampling for Nonisotropic Distributions via Determinantal Point Processes. CoRR abs/1905.12667 (2019)
- [i9] Aldo Pacchiano, Jack Parker-Holder, Yunhao Tang, Anna Choromanska, Krzysztof Choromanski, Michael I. Jordan: Wasserstein Reinforcement Learning. CoRR abs/1906.04349 (2019)
- [i8] Jonathan N. Lee, Aldo Pacchiano, Michael I. Jordan: Approximate Sherali-Adams Relaxations for MAP Inference via Entropy Regularization. CoRR abs/1907.01127 (2019)
- [i7] Xingyou Song, Krzysztof Choromanski, Jack Parker-Holder, Yunhao Tang, Wenbo Gao, Aldo Pacchiano, Tamás Sarlós, Deepali Jain, Yuxiang Yang: Reinforcement Learning with Chromatic Networks. CoRR abs/1907.06511 (2019)
- [i6] Ray Jiang, Aldo Pacchiano, Tom Stepleton, Heinrich Jiang, Silvia Chiappa: Wasserstein Fair Classification. CoRR abs/1907.12059 (2019)
- [i5] Xingyou Song, Wenbo Gao, Yuxiang Yang, Krzysztof Choromanski, Aldo Pacchiano, Yunhao Tang: ES-MAML: Simple Hessian-Free Meta Learning. CoRR abs/1910.01215 (2019)
- 2018
- [c3] Mark Rowland, Krzysztof Choromanski, François Chalus, Aldo Pacchiano, Tamás Sarlós, Richard E. Turner, Adrian Weller: Geometrically Coupled Monte Carlo Sampling. NeurIPS 2018: 195-205
- [c2] Kush Bhatia, Aldo Pacchiano, Nicolas Flammarion, Peter L. Bartlett, Michael I. Jordan: Gen-Oja: Simple & Efficient Algorithm for Streaming Generalized Eigenvector Computation. NeurIPS 2018: 7016-7025
- [i4] Mohammed Amin Abdullah, Aldo Pacchiano, Moez Draief: A note on reinforcement learning with Wasserstein distance regularisation, with applications to multipolicy learning. CoRR abs/1802.03976 (2018)
- [i3] Aldo Pacchiano, Niladri S. Chatterji, Peter L. Bartlett: Online learning with kernel losses. CoRR abs/1802.09732 (2018)
- [i2] Kush Bhatia, Aldo Pacchiano, Nicolas Flammarion, Peter L. Bartlett, Michael I. Jordan: Gen-Oja: A Simple and Efficient Algorithm for Streaming Generalized Eigenvector Computation. CoRR abs/1811.08393 (2018)
- 2017
- [c1] Mark Rowland, Aldo Pacchiano, Adrian Weller: Conditions beyond treewidth for tightness of higher-order LP relaxations. AISTATS 2017: 10-18
- 2015
- [i1] Aldo Pacchiano, Oliver Williams: Real time clustering of time series using triangular potentials. CoRR abs/1502.05090 (2015)
- 2012
- [j1] Pavel Etingof, Sherry Gong, Aldo Pacchiano, Qingchun Ren, Travis Schedler: Computational Approaches to Poisson Traces Associated to Finite Subgroups of SP2n(ℂ). Exp. Math. 21(2): 141-170 (2012)