


Quanquan Gu
2020 – today
- 2022
- [c159] Jinghui Chen, Yu Cheng, Zhe Gan, Quanquan Gu, Jingjing Liu: Efficient Robust Training via Backward Smoothing. AAAI 2022: 6222-6230
- [c158] Yue Wu, Dongruo Zhou, Quanquan Gu: Nearly Minimax Optimal Regret for Learning Infinite-horizon Average-reward MDPs with Linear Function Approximation. AISTATS 2022: 3883-3913
- [c157] Jiafan He, Dongruo Zhou, Quanquan Gu: Near-optimal Policy Optimization Algorithms for Learning Adversarial Linear Mixture MDPs. AISTATS 2022: 4259-4280
- [c156] Spencer Frei, Difan Zou, Zixiang Chen, Quanquan Gu: Self-training Converts Weak Learners to Strong Learners in Mixture Models. AISTATS 2022: 8003-8021
- [c155] Yue Wu, Tao Jin, Hao Lou, Pan Xu, Farzad Farnoud, Quanquan Gu: Adaptive Sampling for Heterogeneous Rank Aggregation from Noisy Pairwise Comparisons. AISTATS 2022: 11014-11036
- [c154] Zixiang Chen, Dongruo Zhou, Quanquan Gu: Faster Perturbed Stochastic Gradient Methods for Finding Local Minima. ALT 2022: 176-204
- [c153] Zixiang Chen, Dongruo Zhou, Quanquan Gu: Almost Optimal Algorithms for Two-player Zero-Sum Linear Mixture Markov Games. ALT 2022: 227-261
- [c152] Zhe Wu, Aisha Alnajdi, Quanquan Gu, Panagiotis D. Christofides: Machine-Learning-based Predictive Control of Nonlinear Processes with Uncertainty. ACC 2022: 2810-2816
- [c151] Pan Xu, Zheng Wen, Handong Zhao, Quanquan Gu: Neural Contextual Bandits with Deep Representation and Shallow Exploration. ICLR 2022
- [c150] Yiling Jia, Weitong Zhang, Dongruo Zhou, Quanquan Gu, Hongning Wang: Learning Neural Contextual Bandits through Perturbed Rewards. ICLR 2022
- [c149] Yihan Wang, Zhouxing Shi, Quanquan Gu, Cho-Jui Hsieh: On the Convergence of Certified Robust Training with Interval Bound Propagation. ICLR 2022
- [c148] Yuanzhou Chen, Jiafan He, Quanquan Gu: On the Sample Complexity of Learning Infinite-horizon Discounted Linear Kernel MDPs. ICML 2022: 3149-3183
- [c147] Yifei Min, Jiafan He, Tianhao Wang, Quanquan Gu: Learning Stochastic Shortest Path with Linear Function Approximation. ICML 2022: 15584-15629
- [c146] Jingfeng Wu, Difan Zou, Vladimir Braverman, Quanquan Gu, Sham M. Kakade: Last Iterate Risk Bounds of SGD with Decaying Stepsize for Overparameterized Linear Regression. ICML 2022: 24280-24314
- [c145] Dongruo Zhou, Quanquan Gu: Dimension-free Complexity Bounds for High-order Nonconvex Finite-sum Optimization. ICML 2022: 27143-27158
- [i102] Yiling Jia, Weitong Zhang, Dongruo Zhou, Quanquan Gu, Hongning Wang: Learning Contextual Bandits Through Perturbed Rewards. CoRR abs/2201.09910 (2022)
- [i101] Yuan Cao, Zixiang Chen, Mikhail Belkin, Quanquan Gu: Benign Overfitting in Two-layer Convolutional Neural Networks. CoRR abs/2202.06526 (2022)
- [i100] Heyang Zhao, Dongruo Zhou, Jiafan He, Quanquan Gu: Bandit Learning with General Function Classes: Heteroscedastic Noise and Variance-dependent Regret Bounds. CoRR abs/2202.13603 (2022)
- [i99] Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Sham M. Kakade: Risk Bounds of Multi-Pass SGD for Least Squares in the Interpolation Regime. CoRR abs/2203.03159 (2022)
- [i98] Yihan Wang, Zhouxing Shi, Quanquan Gu, Cho-Jui Hsieh: On the Convergence of Certified Robust Training with Interval Bound Propagation. CoRR abs/2203.08961 (2022)
- [i97] Jiafan He, Dongruo Zhou, Tong Zhang, Quanquan Gu: Nearly Optimal Algorithms for Linear Contextual Bandits with Adversarial Corruptions. CoRR abs/2205.06811 (2022)
- [i96] Dongruo Zhou, Quanquan Gu: Computationally Efficient Horizon-Free Reinforcement Learning for Linear Mixture MDPs. CoRR abs/2205.11507 (2022)
- [i95] Jiafan He, Tianhao Wang, Yifei Min, Quanquan Gu: A Simple and Provably Efficient Algorithm for Asynchronous Federated Contextual Linear Bandits. CoRR abs/2207.03106 (2022)
- [i94] Jingfeng Wu, Difan Zou, Vladimir Braverman, Quanquan Gu, Sham M. Kakade: The Power and Limitation of Pretraining-Finetuning for Linear Regression under Covariate Shift. CoRR abs/2208.01857 (2022)
- [i93] Zixiang Chen, Yihe Deng, Yue Wu, Quanquan Gu, Yuanzhi Li: Towards Understanding Mixture of Experts in Deep Learning. CoRR abs/2208.02813 (2022)
- [i92] Chris Junchi Li, Dongruo Zhou, Quanquan Gu, Michael I. Jordan: Learning Two-Player Mixture Markov Games: Kernel Function Approximation and Correlated Equilibrium. CoRR abs/2208.05363 (2022)
- [i91] Zixiang Chen, Chris Junchi Li, Angela Yuan, Quanquan Gu, Michael I. Jordan: A General Framework for Sample-Efficient Function Approximation in Reinforcement Learning. CoRR abs/2209.15634 (2022)
- [i90] Chenlu Ye, Wei Xiong, Quanquan Gu, Tong Zhang: Corruption-Robust Algorithms with Uncertainty Weighting for Nonlinear Contextual Bandits and Markov Decision Processes. CoRR abs/2212.05949 (2022)
- [i89] Jiafan He, Heyang Zhao, Dongruo Zhou, Quanquan Gu: Nearly Minimax Optimal Reinforcement Learning for Linear Markov Decision Processes. CoRR abs/2212.06132 (2022)
- 2021
- [j9] Bargav Jayaraman, Lingxiao Wang, Katherine Knipmeyer, Quanquan Gu, David Evans: Revisiting Membership Inference Under Realistic Assumptions. Proc. Priv. Enhancing Technol. 2021(2): 348-368 (2021)
- [j8] Bao Wang, Difan Zou, Quanquan Gu, Stanley J. Osher: Laplacian Smoothing Stochastic Gradient Markov Chain Monte Carlo. SIAM J. Sci. Comput. 43(1): A26-A53 (2021)
- [c144] Tianyuan Jin, Pan Xu, Xiaokui Xiao, Quanquan Gu: Double Explore-then-Commit: Asymptotic Optimality and Beyond. COLT 2021: 2584-2633
- [c143] Dongruo Zhou, Quanquan Gu, Csaba Szepesvári: Nearly Minimax Optimal Reinforcement Learning for Linear Mixture Markov Decision Processes. COLT 2021: 4532-4576
- [c142] Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Sham M. Kakade: Benign Overfitting of Constant-Stepsize SGD for Linear Regression. COLT 2021: 4633-4635
- [c141] Zixiang Chen, Yuan Cao, Difan Zou, Quanquan Gu: How Much Over-parameterization Is Sufficient to Learn Deep ReLU Networks? ICLR 2021
- [c140] Jingfeng Wu, Difan Zou, Vladimir Braverman, Quanquan Gu: Direction Matters: On the Implicit Bias of Stochastic Gradient Descent with Moderate Learning Rate. ICLR 2021
- [c139] Weitong Zhang, Dongruo Zhou, Lihong Li, Quanquan Gu: Neural Thompson Sampling. ICLR 2021
- [c138] Spencer Frei, Yuan Cao, Quanquan Gu: Agnostic Learning of Halfspaces with Gradient Descent via Soft Margins. ICML 2021: 3417-3426
- [c137] Spencer Frei, Yuan Cao, Quanquan Gu: Provable Generalization of SGD-trained Neural Networks of Any Width in the Presence of Adversarial Label Noise. ICML 2021: 3427-3438
- [c136] Jiafan He, Dongruo Zhou, Quanquan Gu: Logarithmic Regret for Reinforcement Learning with Linear Function Approximation. ICML 2021: 4171-4180
- [c135] Tianyuan Jin, Jing Tang, Pan Xu, Keke Huang, Xiaokui Xiao, Quanquan Gu: Almost Optimal Anytime Algorithm for Batched Multi-Armed Bandits. ICML 2021: 5065-5073
- [c134] Tianyuan Jin, Pan Xu, Jieming Shi, Xiaokui Xiao, Quanquan Gu: MOTS: Minimax Optimal Thompson Sampling. ICML 2021: 5074-5083
- [c133] Dongruo Zhou, Jiafan He, Quanquan Gu: Provably Efficient Reinforcement Learning for Discounted MDPs with Feature Mapping. ICML 2021: 12793-12802
- [c132] Difan Zou, Spencer Frei, Quanquan Gu: Provable Robustness of Adversarial Training for Learning Halfspaces with Noise. ICML 2021: 13002-13011
- [c131] Difan Zou, Quanquan Gu: On the Convergence of Hamiltonian Monte Carlo with Stochastic Gradients. ICML 2021: 13012-13022
- [c130] Yuan Cao, Zhiying Fang, Yue Wu, Ding-Xuan Zhou, Quanquan Gu: Towards Understanding the Spectral Bias of Deep Learning. IJCAI 2021: 2205-2211
- [c129] Lingxiao Wang, Kevin Huang, Tengyu Ma, Quanquan Gu, Jing Huang: Variance-reduced First-order Meta-learning for Natural Language Processing Tasks. NAACL-HLT 2021: 2609-2615
- [c128] Weitong Zhang, Dongruo Zhou, Quanquan Gu: Reward-Free Model-Based Reinforcement Learning with Linear Function Approximation. NeurIPS 2021: 1582-1593
- [c127] Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Dean P. Foster, Sham M. Kakade: The Benefits of Implicit Regularization from SGD in Least Squares Problems. NeurIPS 2021: 5456-5468
- [c126] Hanxun Huang, Yisen Wang, Sarah M. Erfani, Quanquan Gu, James Bailey, Xingjun Ma: Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks. NeurIPS 2021: 5545-5559
- [c125] Boxi Wu, Jinghui Chen, Deng Cai, Xiaofei He, Quanquan Gu: Do Wider Neural Networks Really Help Adversarial Robustness? NeurIPS 2021: 7054-7067
- [c124] Yifei Min, Tianhao Wang, Dongruo Zhou, Quanquan Gu: Variance-Aware Off-Policy Evaluation with Linear Function Approximation. NeurIPS 2021: 7598-7610
- [c123] Spencer Frei, Quanquan Gu: Proxy Convexity: A Unified Framework for the Analysis of Neural Networks Trained by Gradient Descent. NeurIPS 2021: 7937-7949
- [c122] Yuan Cao, Quanquan Gu, Mikhail Belkin: Risk Bounds for Over-parameterized Maximum Margin Classification on Sub-Gaussian Mixtures. NeurIPS 2021: 8407-8418
- [c121] Yinglun Zhu, Dongruo Zhou, Ruoxi Jiang, Quanquan Gu, Rebecca Willett, Robert Nowak: Pure Exploration in Kernel and Neural Bandits. NeurIPS 2021: 11618-11630
- [c120] Tianhao Wang, Dongruo Zhou, Quanquan Gu: Provably Efficient Reinforcement Learning with Linear Function Approximation under Adaptivity Constraints. NeurIPS 2021: 13524-13536
- [c119] Jiafan He, Dongruo Zhou, Quanquan Gu: Uniform-PAC Bounds for Reinforcement Learning with Linear Function Approximation. NeurIPS 2021: 14188-14199
- [c118] Jiafan He, Dongruo Zhou, Quanquan Gu: Nearly Minimax Optimal Reinforcement Learning for Discounted MDPs. NeurIPS 2021: 22288-22300
- [c117] Luyao Yuan, Dongruo Zhou, Junhong Shen, Jingdong Gao, Jeffrey L. Chen, Quanquan Gu, Ying Nian Wu, Song-Chun Zhu: Iterative Teacher-Aware Learning. NeurIPS 2021: 29231-29245
- [c116] Difan Zou, Pan Xu, Quanquan Gu: Faster Convergence of Stochastic Gradient Langevin Dynamics for Non-Log-Concave Sampling. UAI 2021: 1152-1162
- [i88] Spencer Frei, Yuan Cao, Quanquan Gu: Provable Generalization of SGD-trained Neural Networks of Any Width in the Presence of Adversarial Label Noise. CoRR abs/2101.01152 (2021)
- [i87] Tianhao Wang, Dongruo Zhou, Quanquan Gu: Provably Efficient Reinforcement Learning with Linear Function Approximation Under Adaptivity Constraints. CoRR abs/2101.02195 (2021)
- [i86] Yue Wu, Dongruo Zhou, Quanquan Gu: Nearly Minimax Optimal Regret for Learning Infinite-horizon Average-reward MDPs with Linear Function Approximation. CoRR abs/2102.07301 (2021)
- [i85] Zixiang Chen, Dongruo Zhou, Quanquan Gu: Almost Optimal Algorithms for Two-player Markov Games with Linear Function Approximation. CoRR abs/2102.07404 (2021)
- [i84] Jiafan He, Dongruo Zhou, Quanquan Gu: Nearly Optimal Regret for Learning Adversarial MDPs with Linear Function Approximation. CoRR abs/2102.08940 (2021)
- [i83] Quanquan Gu, Amin Karbasi, Khashayar Khosravi, Vahab S. Mirrokni, Dongruo Zhou: Batched Neural Bandits. CoRR abs/2102.13028 (2021)
- [i82] Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Sham M. Kakade: Benign Overfitting of Constant-Stepsize SGD for Linear Regression. CoRR abs/2103.12692 (2021)
- [i81] Difan Zou, Spencer Frei, Quanquan Gu: Provable Robustness of Adversarial Training for Learning Halfspaces with Noise. CoRR abs/2104.09437 (2021)
- [i80] Yuan Cao, Quanquan Gu, Mikhail Belkin: Risk Bounds for Over-parameterized Maximum Margin Classification on Sub-Gaussian Mixtures. CoRR abs/2104.13628 (2021)
- [i79] Jiafan He, Dongruo Zhou, Quanquan Gu: Uniform-PAC Bounds for Reinforcement Learning with Linear Function Approximation. CoRR abs/2106.11612 (2021)
- [i78] Weitong Zhang, Jiafan He, Dongruo Zhou, Amy Zhang, Quanquan Gu: Provably Efficient Representation Learning in Low-rank Markov Decision Processes. CoRR abs/2106.11935 (2021)
- [i77] Yifei Min, Tianhao Wang, Dongruo Zhou, Quanquan Gu: Variance-Aware Off-Policy Evaluation with Linear Function Approximation. CoRR abs/2106.11960 (2021)
- [i76] Yinglun Zhu, Dongruo Zhou, Ruoxi Jiang, Quanquan Gu, Rebecca Willett, Robert D. Nowak: Pure Exploration in Kernel and Neural Bandits. CoRR abs/2106.12034 (2021)
- [i75] Spencer Frei, Quanquan Gu: Proxy Convexity: A Unified Framework for the Analysis of Neural Networks Trained by Gradient Descent. CoRR abs/2106.13792 (2021)
- [i74] Spencer Frei, Difan Zou, Zixiang Chen, Quanquan Gu: Self-training Converts Weak Learners to Strong Learners in Mixture Models. CoRR abs/2106.13805 (2021)
- [i73] Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Dean P. Foster, Sham M. Kakade: The Benefits of Implicit Regularization from SGD in Least Squares Problems. CoRR abs/2108.04552 (2021)
- [i72] Difan Zou, Yuan Cao, Yuanzhi Li, Quanquan Gu: Understanding the Generalization of Adam in Learning Neural Networks with Proper Regularization. CoRR abs/2108.11371 (2021)
- [i71] Luyao Yuan, Dongruo Zhou, Junhong Shen, Jingdong Gao, Jeffrey L. Chen, Quanquan Gu, Ying Nian Wu, Song-Chun Zhu: Iterative Teacher-Aware Learning. CoRR abs/2110.00137 (2021)
- [i70] Hanxun Huang, Yisen Wang, Sarah Monazam Erfani, Quanquan Gu, James Bailey, Xingjun Ma: Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks. CoRR abs/2110.03825 (2021)
- [i69] Yue Wu, Tao Jin, Hao Lou, Pan Xu, Farzad Farnoud, Quanquan Gu: Adaptive Sampling for Heterogeneous Rank Aggregation from Noisy Pairwise Comparisons. CoRR abs/2110.04136 (2021)
- [i68] Jingfeng Wu, Difan Zou, Vladimir Braverman, Quanquan Gu, Sham M. Kakade: Last Iterate Risk Bounds of SGD with Decaying Stepsize for Overparameterized Linear Regression. CoRR abs/2110.06198 (2021)
- [i67] Weitong Zhang, Dongruo Zhou, Quanquan Gu: Reward-Free Model-Based Reinforcement Learning with Linear Function Approximation. CoRR abs/2110.06394 (2021)
- [i66] Xiaoxia Wu, Lingxiao Wang, Irina Cristali, Quanquan Gu, Rebecca Willett: Adaptive Differentially Private Empirical Risk Minimization. CoRR abs/2110.07435 (2021)
- [i65] Chonghua Liao, Jiafan He, Quanquan Gu: Locally Differentially Private Reinforcement Learning for Linear Mixture Markov Decision Processes. CoRR abs/2110.10133 (2021)
- [i64] Heyang Zhao, Dongruo Zhou, Quanquan Gu: Linear Contextual Bandits with Adversarial Corruptions. CoRR abs/2110.12615 (2021)
- [i63] Yifei Min, Jiafan He, Tianhao Wang, Quanquan Gu: Learning Stochastic Shortest Path with Linear Function Approximation. CoRR abs/2110.12727 (2021)
- [i62] Zixiang Chen, Dongruo Zhou, Quanquan Gu: Faster Perturbed Stochastic Gradient Methods for Finding Local Minima. CoRR abs/2110.13144 (2021)
- [i61] Yisen Wang, Xingjun Ma, James Bailey, Jinfeng Yi, Bowen Zhou, Quanquan Gu: On the Convergence and Robustness of Adversarial Training. CoRR abs/2112.08304 (2021)
- [i60] Jinghui Chen, Yuan Cao, Quanquan Gu: Benign Overfitting in Adversarially Robust Linear Classification. CoRR abs/2112.15250 (2021)
- 2020
- [j7] Dongruo Zhou, Pan Xu, Quanquan Gu: Stochastic Nested Variance Reduction for Nonconvex Optimization. J. Mach. Learn. Res. 21: 103:1-103:63 (2020)
- [j6] Difan Zou, Yuan Cao, Dongruo Zhou, Quanquan Gu: Gradient descent optimizes over-parameterized deep ReLU networks. Mach. Learn. 109(3): 467-492 (2020)
- [c115] Yuan Cao, Quanquan Gu: Generalization Error Bounds of Gradient Descent for Learning Over-Parameterized Deep ReLU Networks. AAAI 2020: 3349-3356
- [c114] Jinghui Chen, Dongruo Zhou, Jinfeng Yi, Quanquan Gu: A Frank-Wolfe Framework for Efficient and Effective Adversarial Attacks. AAAI 2020: 3486-3494
- [c113] Tao Jin, Pan Xu, Quanquan Gu, Farzad Farnoud: Rank Aggregation via Heterogeneous Thurstone Preference Models. AAAI 2020: 4353-4360
- [c112] Lingxiao Wang, Quanquan Gu: A Knowledge Transfer Framework for Differentially Private Sparse Learning. AAAI 2020: 6235-6242
- [c111] Xiao Zhang, Jinghui Chen, Quanquan Gu, David Evans: Understanding the Intrinsic Robustness of Image Distributions using Conditional Generative Models. AISTATS 2020: 3883-3893
- [c110] Dongruo Zhou, Quanquan Gu: Stochastic Recursive Variance-Reduced Cubic Regularization Methods. AISTATS 2020: 3980-3990
- [c109] Dongruo Zhou, Yuan Cao, Quanquan Gu: Accelerated Factored Gradient Descent for Low-Rank Matrix Factorization. AISTATS 2020: 4430-4440
- [c108] Pan Xu, Felicia Gao, Quanquan Gu: Sample Efficient Policy Gradient Methods with Recursive Variance Reduction. ICLR 2020
- [c107] Yisen Wang, Difan Zou, Jinfeng Yi, James Bailey, Xingjun Ma, Quanquan Gu: Improving Adversarial Robustness Requires Revisiting Misclassified Examples. ICLR 2020
- [c106] Lingxiao Wang, Jing Huang, Kevin Huang, Ziniu Hu, Guangtao Wang, Quanquan Gu: Improving Neural Language Generation with Spectrum Control. ICLR 2020
- [c105] Difan Zou, Philip M. Long, Quanquan Gu: On the Global Convergence of Training Deep Linear ResNets. ICLR 2020
- [c104] Yonatan Dukler, Quanquan Gu, Guido Montúfar: Optimization Theory for ReLU Neural Networks Trained with Normalization Layers. ICML 2020: 2751-2760
- [c103] Pan Xu, Quanquan Gu: A Finite-Time Analysis of Q-Learning with Neural Network Function Approximation. ICML 2020: 10555-10565
- [c102] Dongruo Zhou, Lihong Li, Quanquan Gu: Neural Contextual Bandits with UCB-based Exploration. ICML 2020: 11492-11502
- [c101] Jinghui Chen, Dongruo Zhou, Yiqi Tang, Ziyan Yang, Yuan Cao, Quanquan Gu: Closing the Generalization Gap of Adaptive Gradient Methods in Training Deep Neural Networks. IJCAI 2020: 3267-3275
- [c100] Jinghui Chen, Quanquan Gu: RayS: A Ray Searching Method for Hard-label Adversarial Attack. KDD 2020: 1739-1747
- [c99] Bao Wang, Quanquan Gu, March Boedihardjo, Lingxiao Wang, Farzin Barekat, Stanley J. Osher: DP-LSSGD: A Stochastic Optimization Method to Lift the Utility in Privacy-Preserving ERM. MSML 2020: 328-351
- [c98] Zixiang Chen, Yuan Cao, Quanquan Gu, Tong Zhang: A Generalized Neural Tangent Kernel Analysis for Two-layer Neural Networks. NeurIPS 2020
- [c97] Spencer Frei, Yuan Cao, Quanquan Gu: Agnostic Learning of a Single Neuron with Gradient Descent. NeurIPS 2020
- [c96] Yue Wu, Weitong Zhang, Pan Xu, Quanquan Gu: A Finite-Time Analysis of Two Time-Scale Actor-Critic Methods. NeurIPS 2020
- [c95] Fabrice Harel-Canada, Lingxiao Wang, Muhammad Ali Gulzar, Quanquan Gu, Miryung Kim: Is neuron coverage a meaningful measure for testing deep neural networks? ESEC/SIGSOFT FSE 2020: 851-862
- [i59] Zixiang Chen, Yuan Cao, Quanquan Gu, Tong Zhang: Mean-Field Analysis of Two-Layer Neural Networks: Non-Asymptotic Rates and Generalization Bounds. CoRR abs/2002.04026 (2020)
- [i58] Tianyuan Jin, Pan Xu, Xiaokui Xiao, Quanquan Gu: Double Explore-then-Commit: Asymptotic Optimality and Beyond. CoRR abs/2002.09174 (2020)
- [i57] Xiao Zhang, Jinghui Chen, Quanquan Gu, David Evans: Understanding the Intrinsic Robustness of Image Distributions using Conditional Generative Models. CoRR abs/2003.00378 (2020)
- [i56] Difan Zou, Philip M. Long, Quanquan Gu: On the Global Convergence of Training Deep Linear ResNets. CoRR abs/2003.01094 (2020)
- [i55] Tianyuan Jin, Pan Xu, Jieming Shi, Xiaokui Xiao, Quanquan Gu: MOTS: Minimax Optimal Thompson Sampling. CoRR abs/2003.01803 (2020)
- [i54] Zhicong Liang, Bao Wang, Quanquan Gu, Stanley J. Osher, Yuan Yao: Exploring Private Federated Learning with Laplacian Smoothing. CoRR abs/2005.00218 (2020)
- [i53] Yue Wu, Weitong Zhang, Pan Xu, Quanquan Gu: A Finite Time Analysis of Two Time-Scale Actor Critic Methods. CoRR abs/2005.01350 (2020)
- [i52] Bargav Jayaraman, Lingxiao Wang, David Evans, Quanquan Gu: Revisiting Membership Inference Under Realistic Assumptions. CoRR abs/2005.10881 (2020)
- [i51] Spencer Frei, Yuan Cao, Quanquan Gu: Agnostic Learning of a Single Neuron with Gradient Descent. CoRR abs/2005.14426 (2020)
- [i50] Yonatan Dukler, Quanquan Gu, Guido Montúfar: Optimization Theory for ReLU Neural Networks Trained with Normalization Layers. CoRR abs/2006.06878 (2020)
- [i49] Jinghui Chen, Quanquan Gu: RayS: A Ray Searching Method for Hard-label Adversarial Attack. CoRR abs/2006.12792 (2020)
- [i48]