


Nathan Kallus
2020 – today
- 2023
- [j15] Nathan Kallus, Xiaojie Mao: Stochastic Optimization Forests. Manag. Sci. 69(4): 1975-1994 (2023)
- [c52] Nathan Kallus, Miruna Oprescu: Robust and Agnostic Learning of Conditional Distributional Treatment Effects. AISTATS 2023: 6037-6060
- [c51] Andrew Bennett, Dipendra Misra, Nathan Kallus: Provable Safe Reinforcement Learning with Binary Feedback. AISTATS 2023: 10871-10900
- [c50] Andrew Bennett, Nathan Kallus, Xiaojie Mao, Whitney Newey, Vasilis Syrgkanis, Masatoshi Uehara: Inference on Strongly Identified Functionals of Weakly Identified Functions. COLT 2023: 2265
- [c49] Andrew Bennett, Nathan Kallus, Xiaojie Mao, Whitney Newey, Vasilis Syrgkanis, Masatoshi Uehara: Minimax Instrumental Variable Regression and L2 Convergence Guarantees without Identification or Closedness. COLT 2023: 2291-2318
- [c48] Su Jia, Qian Xie, Nathan Kallus, Peter I. Frazier: Smooth Non-stationary Bandits. ICML 2023: 14930-14944
- [c47] Miruna Oprescu, Jacob Dorn, Marah Ghoummaid, Andrew Jesson, Nathan Kallus, Uri Shalit: B-Learner: Quasi-Oracle Bounds on Heterogeneous Causal Effects Under Hidden Confounding. ICML 2023: 26599-26618
- [c46] Masatoshi Uehara, Ayush Sekhari, Jason D. Lee, Nathan Kallus, Wen Sun: Computationally Efficient PAC RL in POMDPs with Latent Determinism and Conditional Embeddings. ICML 2023: 34615-34641
- [c45] Kaiwen Wang, Nathan Kallus, Wen Sun: Near-Minimax-Optimal Risk-Sensitive Reinforcement Learning with CVaR. ICML 2023: 35864-35907
- [i72] Su Jia, Qian Xie, Nathan Kallus, Peter I. Frazier: Smooth Non-Stationary Bandits. CoRR abs/2301.12366 (2023)
- [i71] Masatoshi Uehara, Nathan Kallus, Jason D. Lee, Wen Sun: Refined Value-Based Offline RL under Realizability and Partial Coverage. CoRR abs/2302.02392 (2023)
- [i70] Kaiwen Wang, Nathan Kallus, Wen Sun: Near-Minimax-Optimal Risk-Sensitive Reinforcement Learning with CVaR. CoRR abs/2302.03201 (2023)
- [i69] Andrew Bennett, Nathan Kallus, Xiaojie Mao, Whitney Newey, Vasilis Syrgkanis, Masatoshi Uehara: Minimax Instrumental Variable Regression and $L_2$ Convergence Guarantees without Identification or Closedness. CoRR abs/2302.05404 (2023)
- [i68] Miruna Oprescu, Jacob Dorn, Marah Ghoummaid, Andrew Jesson, Nathan Kallus, Uri Shalit: B-Learner: Quasi-Oracle Bounds on Heterogeneous Causal Effects Under Hidden Confounding. CoRR abs/2304.10577 (2023)
- [i67] Wenhao Zhan, Masatoshi Uehara, Nathan Kallus, Jason D. Lee, Wen Sun: Provable Offline Reinforcement Learning with Human Feedback. CoRR abs/2305.14816 (2023)
- [i66] Kaiwen Wang, Kevin Zhou, Runzhe Wu, Nathan Kallus, Wen Sun: The Benefits of Being Distributional: Small-Loss Bounds for Reinforcement Learning. CoRR abs/2305.15703 (2023)
- [i65] Kaiwen Wang, Junxiong Wang, Yueying Li, Nathan Kallus, Immanuel Trummer, Wen Sun: JoinGym: An Efficient Query Optimization Environment for Reinforcement Learning. CoRR abs/2307.11704 (2023)
- [i64] Andrew Bennett, Nathan Kallus, Xiaojie Mao, Whitney Newey, Vasilis Syrgkanis, Masatoshi Uehara: Source Condition Double Robust Inference on Functionals of Inverse Problems. CoRR abs/2307.13793 (2023)
- [i63] Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian J. McAuley: Large Language Models as Zero-Shot Conversational Recommenders. CoRR abs/2308.10053 (2023)
- 2022
- [j14] Yichun Hu, Nathan Kallus, Xiaojie Mao: Smooth Contextual Bandits: Bridging the Parametric and Nondifferentiable Regret Regimes. Oper. Res. 70(6): 3261-3281 (2022)
- [j13] Nathan Kallus, Masatoshi Uehara: Efficiently Breaking the Curse of Horizon in Off-Policy Evaluation with Double Reinforcement Learning. Oper. Res. 70(6): 3282-3302 (2022)
- [j12] Fredrik D. Johansson, Uri Shalit, Nathan Kallus, David A. Sontag: Generalization Bounds and Representation Learning for Estimation of Potential Outcomes and Causal Effects. J. Mach. Learn. Res. 23: 166:1-166:50 (2022)
- [j11] Vishal Gupta, Nathan Kallus: Data Pooling in Stochastic Optimization. Manag. Sci. 68(3): 1595-1615 (2022)
- [j10] Nathan Kallus, Xiaojie Mao, Angela Zhou: Assessing Algorithmic Fairness with Unobserved Protected Class Using Data Combination. Manag. Sci. 68(3): 1959-1981 (2022)
- [j9] Yichun Hu, Nathan Kallus, Xiaojie Mao: Fast Rates for Contextual Linear Optimization. Manag. Sci. 68(6): 4236-4245 (2022)
- [c44] Nathan Kallus, Angela Zhou: Stateful Offline Contextual Policy Evaluation and Learning. AISTATS 2022: 11169-11194
- [c43] Shervin Ardeshir, Cristina Segalin, Nathan Kallus: Estimating Structural Disparities for Face Models. CVPR 2022: 10348-10357
- [c42] Nathan Kallus: Treatment Effect Risk: Bounds and Inference. FAccT 2022: 213
- [c41] Jonathan D. Chang, Kaiwen Wang, Nathan Kallus, Wen Sun: Learning Bellman Complete Representations for Offline Policy Evaluation. ICML 2022: 2938-2971
- [c40] Nathan Kallus, Xiaojie Mao, Kaiwen Wang, Zhengyuan Zhou: Doubly Robust Distributionally Robust Off-Policy Evaluation and Learning. ICML 2022: 10598-10632
- [c39] Nathan Kallus: What's the Harm? Sharp Bounds on the Fraction Negatively Affected by Treatment. NeurIPS 2022
- [c38] Nathan Kallus, James McInerney: The Implicit Delta Method. NeurIPS 2022
- [c37] Masatoshi Uehara, Ayush Sekhari, Jason D. Lee, Nathan Kallus, Wen Sun: Provably Efficient Reinforcement Learning in Partially Observable Dynamical Systems. NeurIPS 2022
- [i62] Nathan Kallus, Xiaojie Mao, Kaiwen Wang, Zhengyuan Zhou: Doubly Robust Distributionally Robust Off-Policy Evaluation and Learning. CoRR abs/2202.09667 (2022)
- [i61] Shervin Ardeshir, Cristina Segalin, Nathan Kallus: Estimating Structural Disparities for Face Models. CoRR abs/2204.06562 (2022)
- [i60] Nathan Kallus: What's the Harm? Sharp Bounds on the Fraction Negatively Affected by Treatment. CoRR abs/2205.10327 (2022)
- [i59] Nathan Kallus, Miruna Oprescu: Robust and Agnostic Learning of Conditional Distributional Treatment Effects. CoRR abs/2205.11486 (2022)
- [i58] Masatoshi Uehara, Ayush Sekhari, Jason D. Lee, Nathan Kallus, Wen Sun: Provably Efficient Reinforcement Learning in Partially Observable Dynamical Systems. CoRR abs/2206.12020 (2022)
- [i57] Masatoshi Uehara, Ayush Sekhari, Jason D. Lee, Nathan Kallus, Wen Sun: Computationally Efficient PAC RL in POMDPs with Latent Determinism and Conditional Embeddings. CoRR abs/2206.12081 (2022)
- [i56] Jonathan D. Chang, Kaiwen Wang, Nathan Kallus, Wen Sun: Learning Bellman Complete Representations for Offline Policy Evaluation. CoRR abs/2207.05837 (2022)
- [i55] Masatoshi Uehara, Haruka Kiyohara, Andrew Bennett, Victor Chernozhukov, Nan Jiang, Nathan Kallus, Chengchun Shi, Wen Sun: Future-Dependent Value-Based Off-Policy Evaluation in POMDPs. CoRR abs/2207.13081 (2022)
- [i54] Andrew Bennett, Dipendra Misra, Nathan Kallus: Provable Safe Reinforcement Learning with Binary Feedback. CoRR abs/2210.14492 (2022)
- [i53] Nathan Kallus, James McInerney: The Implicit Delta Method. CoRR abs/2211.06457 (2022)
- [i52] Masatoshi Uehara, Chengchun Shi, Nathan Kallus: A Review of Off-Policy Evaluation in Reinforcement Learning. CoRR abs/2212.06355 (2022)
- 2021
- [j8] Nathan Kallus, Angela Zhou: Minimax-Optimal Policy Learning Under Unobserved Confounding. Manag. Sci. 67(5): 2870-2890 (2021)
- [c36] Andrew Bennett, Nathan Kallus, Lihong Li, Ali Mousavi: Off-policy Evaluation in Infinite-Horizon Reinforcement Learning with Latent Confounders. AISTATS 2021: 1999-2007
- [c35] Yichun Hu, Nathan Kallus, Masatoshi Uehara: Fast Rates for the Regret of Offline Reinforcement Learning. COLT 2021: 2462
- [c34] Nathan Kallus, Angela Zhou: Fairness, Welfare, and Equity in Personalized Pricing. FAccT 2021: 296-314
- [c33] Nathan Kallus, Yuta Saito, Masatoshi Uehara: Optimal Off-Policy Evaluation from Multiple Logging Policies. ICML 2021: 5247-5256
- [c32] Nikos Vlassis, Ashok Chandrashekar, Fernando Amat Gil, Nathan Kallus: Control Variates for Slate Off-Policy Evaluation. NeurIPS 2021: 3667-3679
- [c31] Aurélien Bibaut, Nathan Kallus, Maria Dimakopoulou, Antoine Chambaz, Mark J. van der Laan: Risk Minimization from Adaptively Collected Data: Guarantees for Supervised and Policy Learning. NeurIPS 2021: 19261-19273
- [c30] Aurélien Bibaut, Maria Dimakopoulou, Nathan Kallus, Antoine Chambaz, Mark J. van der Laan: Post-Contextual-Bandit Inference. NeurIPS 2021: 28548-28559
- [i51] Yichun Hu, Nathan Kallus, Masatoshi Uehara: Fast Rates for the Regret of Offline Reinforcement Learning. CoRR abs/2102.00479 (2021)
- [i50] Masatoshi Uehara, Masaaki Imaizumi, Nan Jiang, Nathan Kallus, Wen Sun, Tengyang Xie: Finite Sample Analysis of Minimax Offline Reinforcement Learning: Completeness, Fast Rates and First-Order Efficiency. CoRR abs/2102.02981 (2021)
- [i49] Nathan Kallus, Xiaojie Mao, Masatoshi Uehara: Causal Inference Under Unmeasured Confounding With Negative Controls: A Minimax Learning Approach. CoRR abs/2103.14029 (2021)
- [i48] Aurélien Bibaut, Antoine Chambaz, Maria Dimakopoulou, Nathan Kallus, Mark J. van der Laan: Post-Contextual-Bandit Inference. CoRR abs/2106.00418 (2021)
- [i47] Aurélien Bibaut, Antoine Chambaz, Maria Dimakopoulou, Nathan Kallus, Mark J. van der Laan: Risk Minimization from Adaptively Collected Data: Guarantees for Supervised and Policy Learning. CoRR abs/2106.01723 (2021)
- [i46] Nikos Vlassis, Ashok Chandrashekar, Fernando Amat Gil, Nathan Kallus: Control Variates for Slate Off-Policy Evaluation. CoRR abs/2106.07914 (2021)
- [i45] James McInerney, Nathan Kallus: Residual Overfit Method of Exploration. CoRR abs/2110.02919 (2021)
- [i44] Nathan Kallus, Angela Zhou: Stateful Offline Contextual Policy Evaluation and Learning. CoRR abs/2110.10081 (2021)
- [i43] Andrew Bennett, Nathan Kallus: Proximal Reinforcement Learning: Efficient Off-Policy Evaluation in Partially Observed Markov Decision Processes. CoRR abs/2110.15332 (2021)
- [i42] Angela Zhou, Andrew Koo, Nathan Kallus, Rene Ropac, Richard Peterson, Stephen Koppel, Tiffany Bergin: An Empirical Evaluation of the Impact of New York's Bail Reform on Crime Using Synthetic Controls. CoRR abs/2111.08664 (2021)
- [i41] Jacob Dorn, Kevin Guo, Nathan Kallus: Doubly-Valid/Doubly-Sharp Sensitivity Analysis for Causal Inference with Unmeasured Confounding. CoRR abs/2112.11449 (2021)
- 2020
- [j7] Nathan Kallus, Madeleine Udell: Dynamic Assortment Personalization in High Dimensions. Oper. Res. 68(4): 1020-1037 (2020)
- [j6] Nathan Kallus: Generalized Optimal Matching Methods for Causal Inference. J. Mach. Learn. Res. 21: 62:1-62:54 (2020)
- [j5] Nathan Kallus, Masatoshi Uehara: Double Reinforcement Learning for Efficient Off-Policy Evaluation in Markov Decision Processes. J. Mach. Learn. Res. 21: 167:1-167:63 (2020)
- [j4] Dimitris Bertsimas, Nathan Kallus: From Predictive to Prescriptive Analytics. Manag. Sci. 66(3): 1025-1044 (2020)
- [c29] Yichun Hu, Nathan Kallus, Xiaojie Mao: Smooth Contextual Bandits: Bridging the Parametric and Non-differentiable Regret Regimes. COLT 2020: 2007-2010
- [c28] Nathan Kallus, Xiaojie Mao, Angela Zhou: Assessing algorithmic fairness with unobserved protected class using data combination. FAT* 2020: 110
- [c27] Andrew Bennett, Nathan Kallus: Efficient Policy Learning from Surrogate-Loss Classification Reductions. ICML 2020: 788-798
- [c26] Nathan Kallus: DeepMatch: Balancing Deep Covariate Representations for Causal Inference Using Adversarial Training. ICML 2020: 5067-5077
- [c25] Nathan Kallus, Masatoshi Uehara: Double Reinforcement Learning for Efficient and Robust Off-Policy Evaluation. ICML 2020: 5078-5088
- [c24] Nathan Kallus, Masatoshi Uehara: Statistically Efficient Off-Policy Policy Gradients. ICML 2020: 5089-5100
- [c23] Nathan Kallus, Masatoshi Uehara: Doubly Robust Off-Policy Value and Gradient Estimation for Deterministic Policies. NeurIPS 2020
- [c22] Nathan Kallus, Angela Zhou: Confounding-Robust Policy Evaluation in Infinite-Horizon Reinforcement Learning. NeurIPS 2020
- [i40] Fredrik D. Johansson, Uri Shalit, Nathan Kallus, David A. Sontag: Generalization Bounds and Representation Learning for Estimation of Potential Outcomes and Causal Effects. CoRR abs/2001.07426 (2020)
- [i39] Nathan Kallus, Masatoshi Uehara: Statistically Efficient Off-Policy Policy Gradients. CoRR abs/2002.04014 (2020)
- [i38] Nathan Kallus, Angela Zhou: Confounding-Robust Policy Evaluation in Infinite-Horizon Reinforcement Learning. CoRR abs/2002.04518 (2020)
- [i37] Andrew Bennett, Nathan Kallus: Efficient Policy Learning from Surrogate-Loss Classification Reductions. CoRR abs/2002.05153 (2020)
- [i36] Nathan Kallus, Xiaojie Mao: On the role of surrogates in the efficient estimation of treatment effects with limited outcome data. CoRR abs/2003.12408 (2020)
- [i35] Nathan Kallus: Comment: Entropy Learning for Dynamic Treatment Regimes. CoRR abs/2004.02778 (2020)
- [i34] Yichun Hu, Nathan Kallus: DTR Bandit: Learning to Make Response-Adaptive Decisions With Low Regret. CoRR abs/2005.02791 (2020)
- [i33] Nathan Kallus, Masatoshi Uehara: Efficient Evaluation of Natural Stochastic Policies in Offline Reinforcement Learning. CoRR abs/2006.03886 (2020)
- [i32] Nathan Kallus, Masatoshi Uehara: Doubly Robust Off-Policy Value and Gradient Estimation for Deterministic Policies. CoRR abs/2006.03900 (2020)
- [i31] Andrew Bennett, Nathan Kallus, Lihong Li, Ali Mousavi: Off-policy Evaluation in Infinite-Horizon Reinforcement Learning with Latent Confounders. CoRR abs/2007.13893 (2020)
- [i30] Nathan Kallus, Xiaojie Mao: Stochastic Optimization Forests. CoRR abs/2008.07473 (2020)
- [i29] Nathan Kallus, Yuta Saito, Masatoshi Uehara: Optimal Off-Policy Evaluation from Multiple Logging Policies. CoRR abs/2010.11002 (2020)
- [i28] Yichun Hu, Nathan Kallus, Xiaojie Mao: Fast Rates for Contextual Linear Optimization. CoRR abs/2011.03030 (2020)
- [i27] Nathan Kallus: Rejoinder: New Objectives for Policy Learning. CoRR abs/2012.03130 (2020)
- [i26] Andrew Bennett, Nathan Kallus: The Variational Method of Moments. CoRR abs/2012.09422 (2020)
- [i25] Nathan Kallus, Angela Zhou: Fairness, Welfare, and Equity in Personalized Pricing. CoRR abs/2012.11066 (2020)
2010 – 2019
- 2019
- [c21] Nathan Kallus, Xiaojie Mao, Angela Zhou: Interval Estimation of Individual-Level Causal Effects Under Unobserved Confounding. AISTATS 2019: 2281-2290
- [c20] Jiahao Chen, Nathan Kallus, Xiaojie Mao, Geoffry Svacha, Madeleine Udell: Fairness Under Unawareness: Assessing Disparity When Protected Class Is Unobserved. FAT 2019: 339-348
- [c19] Nathan Kallus: Classifying Treatment Responders Under Causal Effect Monotonicity. ICML 2019: 3201-3210
- [c18] Nathan Kallus, Masatoshi Uehara: Intrinsically Efficient, Stable, and Bounded Off-Policy Evaluation for Reinforcement Learning. NeurIPS 2019: 3320-3329
- [c17] Nathan Kallus, Angela Zhou: Assessing Disparate Impact of Personalized Interventions: Identifiability and Bounds. NeurIPS 2019: 3421-3432
- [c16] Nathan Kallus, Angela Zhou: The Fairness of Risk Scores Beyond Classification: Bipartite Ranking and the XAUC Metric. NeurIPS 2019: 3433-3443
- [c15] Andrew Bennett, Nathan Kallus, Tobias Schnabel: Deep Generalized Method of Moments for Instrumental Variable Analysis. NeurIPS 2019: 3559-3569
- [c14] Andrew Bennett, Nathan Kallus: Policy Evaluation with Latent Confounders via Optimal Balance. NeurIPS 2019: 4827-4837
- [i24] Nathan Kallus: Classifying Treatment Responders Under Causal Effect Monotonicity. CoRR abs/1902.05482 (2019)
- [i23] Nathan Kallus, Angela Zhou: The Fairness of Risk Scores Beyond Classification: Bipartite Ranking and the xAUC Metric. CoRR abs/1902.05826 (2019)
- [i22] Andrew Bennett, Nathan Kallus, Tobias Schnabel: Deep Generalized Method of Moments for Instrumental Variable Analysis. CoRR abs/1905.12495 (2019)
- [i21] Vishal Gupta, Nathan Kallus: Data-Pooling in Stochastic Optimization. CoRR abs/1906.00255 (2019)
- [i20] Nathan Kallus, Xiaojie Mao, Angela Zhou: Assessing Algorithmic Fairness with Unobserved Protected Class Using Data Combination. CoRR abs/1906.00285 (2019)
- [i19] Nathan Kallus, Angela Zhou: Assessing Disparate Impacts of Personalized Interventions: Identifiability and Bounds. CoRR abs/1906.01552 (2019)
- [i18] Nathan Kallus, Masatoshi Uehara: Intrinsically Efficient, Stable, and Bounded Off-Policy Evaluation for Reinforcement Learning. CoRR abs/1906.03735 (2019)
- [i17] Nathan Kallus: More Efficient Policy Learning via Optimal Retargeting. CoRR abs/1906.08611 (2019)
- [i16] Andrew Bennett, Nathan Kallus: Policy Evaluation with Latent Confounders via Optimal Balance. CoRR abs/1908.01920 (2019)
- [i15] Nathan Kallus, Masatoshi Uehara: Double Reinforcement Learning for Efficient Off-Policy Evaluation in Markov Decision Processes. CoRR abs/1908.08526 (2019)
- [i14] Yichun Hu, Nathan Kallus, Xiaojie Mao: Smooth Contextual Bandits: Bridging the Parametric and Non-differentiable Regret Regimes. CoRR abs/1909.02553 (2019)
- [i13] Nathan Kallus, Masatoshi Uehara: Efficiently Breaking the Curse of Horizon: Double Reinforcement Learning in Infinite-Horizon Processes. CoRR abs/1909.05850 (2019)
- [i12] Nathan Kallus, Xiaojie Mao, Masatoshi Uehara: Localized Debiased Machine Learning: Efficient Estimation of Quantile Treatment Effects, Conditional Value at Risk, and Beyond. CoRR abs/1912.12945 (2019)
- 2018
- [j3] Dimitris Bertsimas, Vishal Gupta, Nathan Kallus: Data-driven robust optimization. Math. Program. 167(2): 235-292 (2018)
- [j2] Dimitris Bertsimas, Vishal Gupta, Nathan Kallus: Robust sample average approximation. Math. Program. 171(1-2): 217-282 (2018)
- [c13] Nathan Kallus, Angela Zhou: Policy Evaluation and Optimization with Continuous Treatments. AISTATS 2018: 1243-1251
- [c12] Nathan Kallus: Instrument-Armed Bandits. ALT 2018: 529-546
- [c11] Nathan Kallus, Angela Zhou: Residual Unfairness in Fair Machine Learning from Prejudiced Data. ICML 2018: 2444-2453
- [c10] Nathan Kallus, Xiaojie Mao, Madeleine Udell: Causal Inference with Noisy and Missing Covariates via Matrix Factorization. NeurIPS 2018: 6921-6932
- [c9] Nathan Kallus: Balanced Policy Evaluation and Learning. NeurIPS 2018: 8909-8920
- [c8] Nathan Kallus, Angela Zhou: Confounding-Robust Policy Improvement. NeurIPS 2018: 9289-9299
- [c7] Nathan Kallus, Aahlad Manas Puli, Uri Shalit: Removing Hidden Confounding by Experimental Grounding. NeurIPS 2018: 10911-10920
- [i11] Nathan Kallus, Angela Zhou: Policy Evaluation and Optimization with Continuous Treatments. CoRR abs/1802.06037 (2018)
- [i10] Nathan Kallus, Angela Zhou: Confounding-Robust Policy Improvement. CoRR abs/1805.08593 (2018)
- [i9] Nathan Kallus, Xiaojie Mao, Madeleine Udell: Causal Inference with Noisy and Missing Covariates via Matrix Factorization. CoRR abs/1806.00811 (2018)
- [i8] Nathan Kallus, Angela Zhou: Residual Unfairness in Fair Machine Learning from Prejudiced Data. CoRR abs/1806.02887 (2018)
- [i7]