


Haipeng Luo

2020 – today
- 2022
- [c57] Liyu Chen, Haipeng Luo, Aviv Rosenberg: Policy Optimization for Stochastic Shortest Path. COLT 2022: 982-1046
- [c56] Haipeng Luo, Mengxiao Zhang, Peng Zhao: Adaptive Bandit Convex Optimization with Heterogeneous Curvature. COLT 2022: 1576-1612
- [c55] Haipeng Luo, Mengxiao Zhang, Peng Zhao, Zhi-Hua Zhou: Corralling a Larger Band of Bandits: A Case Study on Switching Regret for Linear Bandits. COLT 2022: 3635-3684
- [c54] Liyu Chen, Rahul Jain, Haipeng Luo: Improved No-Regret Algorithms for Stochastic Shortest Path with Linear MDP. ICML 2022: 3204-3245
- [c53] Liyu Chen, Rahul Jain, Haipeng Luo: Learning Infinite-horizon Average-reward Markov Decision Process with Constraints. ICML 2022: 3246-3270
- [c52] Gabriele Farina, Chung-Wei Lee, Haipeng Luo, Christian Kroer: Kernelized Multiplicative Weights for 0/1-Polyhedral Games: Bridging the Gap Between Learning in Extensive-Form and Normal-Form Games. ICML 2022: 6337-6357
- [c51] Mengxiao Zhang, Peng Zhao, Haipeng Luo, Zhi-Hua Zhou: No-Regret Learning in Time-Varying Zero-Sum Games. ICML 2022: 26772-26808
- [i60] Mengxiao Zhang, Peng Zhao, Haipeng Luo, Zhi-Hua Zhou: No-Regret Learning in Time-Varying Zero-Sum Games. CoRR abs/2201.12736 (2022)
- [i59] Tiancheng Jin, Tal Lancewicki, Haipeng Luo, Yishay Mansour, Aviv Rosenberg: Near-Optimal Regret for Adversarial MDP with Delayed Bandit Feedback. CoRR abs/2201.13172 (2022)
- [i58] Liyu Chen, Rahul Jain, Haipeng Luo: Learning Infinite-Horizon Average-Reward Markov Decision Processes with Constraints. CoRR abs/2202.00150 (2022)
- [i57] Gabriele Farina, Chung-Wei Lee, Haipeng Luo, Christian Kroer: Kernelized Multiplicative Weights for 0/1-Polyhedral Games: Bridging the Gap Between Learning in Extensive-Form and Normal-Form Games. CoRR abs/2202.00237 (2022)
- [i56] Liyu Chen, Haipeng Luo, Aviv Rosenberg: Policy Optimization for Stochastic Shortest Path. CoRR abs/2202.03334 (2022)
- [i55] Haipeng Luo, Mengxiao Zhang, Peng Zhao: Adaptive Bandit Convex Optimization with Heterogeneous Curvature. CoRR abs/2202.06150 (2022)
- [i54] Haipeng Luo, Mengxiao Zhang, Peng Zhao, Zhi-Hua Zhou: Corralling a Larger Band of Bandits: A Case Study on Switching Regret for Linear Bandits. CoRR abs/2202.06151 (2022)
- [i53] Ioannis Anagnostides, Gabriele Farina, Christian Kroer, Chung-Wei Lee, Haipeng Luo, Tuomas Sandholm: Uncoupled Learning Dynamics with O(log T) Swap Regret in Multiplayer Games. CoRR abs/2204.11417 (2022)
- [i52] Liyu Chen, Haipeng Luo: Near-Optimal Goal-Oriented Reinforcement Learning in Non-Stationary Environments. CoRR abs/2205.13044 (2022)
- [i51] Yan Dai, Haipeng Luo, Liyu Chen: Follow-the-Perturbed-Leader for Adversarial Markov Decision Processes with Bandit Feedback. CoRR abs/2205.13451 (2022)
- [i50] Gabriele Farina, Ioannis Anagnostides, Haipeng Luo, Chung-Wei Lee, Christian Kroer, Tuomas Sandholm: Near-Optimal No-Regret Learning for General Convex Games. CoRR abs/2206.08742 (2022)
- 2021
- [c50] Yining Chen, Haipeng Luo, Tengyu Ma, Chicheng Zhang: Active Online Learning with Hidden Shifting Domains. AISTATS 2021: 2053-2061
- [c49] Chen-Yu Wei, Mehdi Jafarnia-Jahromi, Haipeng Luo, Rahul Jain: Learning Infinite-horizon Average-reward MDPs with Linear Function Approximation. AISTATS 2021: 3007-3015
- [c48] Ehsan Emamjomeh-Zadeh, Chen-Yu Wei, Haipeng Luo, David Kempe: Adversarial Online Learning with Changing Action Sets: Efficient Algorithms with Approximate Regret Bounds. ALT 2021: 599-618
- [c47] Liyu Chen, Haipeng Luo, Chen-Yu Wei: Minimax Regret for Stochastic Shortest Path with Adversarial Costs and Known Transition. COLT 2021: 1180-1215
- [c46] Liyu Chen, Haipeng Luo, Chen-Yu Wei: Impossible Tuning Made Possible: A New Expert Algorithm and Its Applications. COLT 2021: 1216-1259
- [c45] Chen-Yu Wei, Chung-Wei Lee, Mengxiao Zhang, Haipeng Luo: Last-iterate Convergence of Decentralized Optimistic Gradient Descent/Ascent in Infinite-horizon Competitive Markov Games. COLT 2021: 4259-4299
- [c44] Chen-Yu Wei, Haipeng Luo: Non-stationary Reinforcement Learning without Prior Knowledge: an Optimal Black-box Approach. COLT 2021: 4300-4354
- [c43] Chen-Yu Wei, Chung-Wei Lee, Mengxiao Zhang, Haipeng Luo: Linear Last-iterate Convergence in Constrained Saddle-point Optimization. ICLR 2021
- [c42] Liyu Chen, Haipeng Luo: Finding the Stochastic Shortest Path with Low Regret: the Adversarial Cost and Unknown Transition Case. ICML 2021: 1651-1660
- [c41] Chung-Wei Lee, Haipeng Luo, Chen-Yu Wei, Mengxiao Zhang, Xiaojin Zhang: Achieving Near Instance-Optimality and Minimax-Optimality in Stochastic and Adversarial Linear Bandits Simultaneously. ICML 2021: 6142-6151
- [c40] Daniel Jiang, Haipeng Luo, Chu Wang, Yingfei Wang: Multi-Armed Bandits and Reinforcement Learning: Advancing Decision Making in E-Commerce and Beyond. KDD 2021: 4133-4134
- [c39] Liyu Chen, Mehdi Jafarnia-Jahromi, Rahul Jain, Haipeng Luo: Implicit Finite-Horizon Approximation and Efficient Optimal Algorithms for Stochastic Shortest Path. NeurIPS 2021: 10849-10861
- [c38] Chung-Wei Lee, Christian Kroer, Haipeng Luo: Last-iterate Convergence in Extensive-Form Games. NeurIPS 2021: 14293-14305
- [c37] Tiancheng Jin, Longbo Huang, Haipeng Luo: The best of both worlds: stochastic and adversarial episodic MDPs with unknown transition. NeurIPS 2021: 20491-20502
- [c36] Haipeng Luo, Chen-Yu Wei, Chung-Wei Lee: Policy Optimization in Adversarial MDPs: Improved Exploration via Dilated Bonuses. NeurIPS 2021: 22931-22942
- [i49] Liyu Chen, Haipeng Luo, Chen-Yu Wei: Impossible Tuning Made Possible: A New Expert Algorithm and Its Applications. CoRR abs/2102.01046 (2021)
- [i48] Chen-Yu Wei, Chung-Wei Lee, Mengxiao Zhang, Haipeng Luo: Last-iterate Convergence of Decentralized Optimistic Gradient Descent/Ascent in Infinite-horizon Competitive Markov Games. CoRR abs/2102.04540 (2021)
- [i47] Liyu Chen, Haipeng Luo: Finding the Stochastic Shortest Path with Low Regret: The Adversarial Cost and Unknown Transition Case. CoRR abs/2102.05284 (2021)
- [i46] Chen-Yu Wei, Haipeng Luo: Non-stationary Reinforcement Learning without Prior Knowledge: An Optimal Black-box Approach. CoRR abs/2102.05406 (2021)
- [i45] Chung-Wei Lee, Haipeng Luo, Chen-Yu Wei, Mengxiao Zhang, Xiaojin Zhang: Achieving Near Instance-Optimality and Minimax-Optimality in Stochastic and Adversarial Linear Bandits Simultaneously. CoRR abs/2102.05858 (2021)
- [i44] Tiancheng Jin, Longbo Huang, Haipeng Luo: The best of both worlds: stochastic and adversarial episodic MDPs with unknown transition. CoRR abs/2106.04117 (2021)
- [i43] Mehdi Jafarnia-Jahromi, Liyu Chen, Rahul Jain, Haipeng Luo: Online Learning for Stochastic Shortest Path Model via Posterior Sampling. CoRR abs/2106.05335 (2021)
- [i42] Liyu Chen, Mehdi Jafarnia-Jahromi, Rahul Jain, Haipeng Luo: Implicit Finite-Horizon Approximation and Efficient Optimal Algorithms for Stochastic Shortest Path. CoRR abs/2106.08377 (2021)
- [i41] Chung-Wei Lee, Christian Kroer, Haipeng Luo: Last-iterate Convergence in Extensive-Form Games. CoRR abs/2106.14326 (2021)
- [i40] Haipeng Luo, Chen-Yu Wei, Chung-Wei Lee: Policy Optimization in Adversarial MDPs: Improved Exploration via Dilated Bonuses. CoRR abs/2107.08346 (2021)
- [i39] Liyu Chen, Rahul Jain, Haipeng Luo: Improved No-Regret Algorithms for Stochastic Shortest Path with Linear MDP. CoRR abs/2112.09859 (2021)
- 2020
- [j13] Miroslav Dudík, Nika Haghtalab, Haipeng Luo, Robert E. Schapire, Vasilis Syrgkanis, Jennifer Wortman Vaughan: Oracle-efficient Online Learning and Auction Design. J. ACM 67(5): 26:1-26:57 (2020)
- [c35] Yifang Chen, Alex Cuellar, Haipeng Luo, Jignesh Modi, Heramb Nemlekar, Stefanos Nikolaidis: The Fair Contextual Multi-Armed Bandit. AAMAS 2020: 1810-1812
- [c34] Chung-Wei Lee, Haipeng Luo, Mengxiao Zhang: A Closer Look at Small-loss Bounds for Bandits with Graph Feedback. COLT 2020: 2516-2564
- [c33] Chen-Yu Wei, Haipeng Luo, Alekh Agarwal: Taking a hint: How to leverage loss predictors in contextual bandits? COLT 2020: 3583-3634
- [c32] Dylan J. Foster, Akshay Krishnamurthy, Haipeng Luo: Open Problem: Model Selection for Contextual Bandits. COLT 2020: 3842-3846
- [c31] Chi Jin, Tiancheng Jin, Haipeng Luo, Suvrit Sra, Tiancheng Yu: Learning Adversarial Markov Decision Processes with Bandit Feedback and Unknown Transition. ICML 2020: 4860-4869
- [c30] Chen-Yu Wei, Mehdi Jafarnia-Jahromi, Haipeng Luo, Hiteshi Sharma, Rahul Jain: Model-free Reinforcement Learning in Infinite-horizon Average-reward Markov Decision Processes. ICML 2020: 10170-10180
- [c29] Dirk van der Hoeven, Ashok Cutkosky, Haipeng Luo: Comparator-Adaptive Convex Bandits. NeurIPS 2020
- [c28] Tiancheng Jin, Haipeng Luo: Simultaneously Learning Stochastic and Adversarial Episodic MDPs with Known Transition. NeurIPS 2020
- [c27] Chung-Wei Lee, Haipeng Luo, Chen-Yu Wei, Mengxiao Zhang: Bias no more: high-probability data-dependent regret bounds for adversarial bandits and MDPs. NeurIPS 2020
- [c26] Yifang Chen, Alex Cuellar, Haipeng Luo, Jignesh Modi, Heramb Nemlekar, Stefanos Nikolaidis: Fair Contextual Multi-Armed Bandits: Theory and Experiments. UAI 2020: 181-190
- [i38] Chung-Wei Lee, Haipeng Luo, Mengxiao Zhang: A Closer Look at Small-loss Bounds for Bandits with Graph Feedback. CoRR abs/2002.00315 (2020)
- [i37] Chen-Yu Wei, Haipeng Luo, Alekh Agarwal: Taking a hint: How to leverage loss predictors in contextual bandits? CoRR abs/2003.01922 (2020)
- [i36] Ehsan Emamjomeh-Zadeh, Chen-Yu Wei, Haipeng Luo, David Kempe: Adversarial Online Learning with Changing Action Sets: Efficient Algorithms with Approximate Regret Bounds. CoRR abs/2003.03490 (2020)
- [i35] Mehdi Jafarnia-Jahromi, Chen-Yu Wei, Rahul Jain, Haipeng Luo: A Model-free Learning Algorithm for Infinite-horizon Average-reward MDPs with Near-optimal Regret. CoRR abs/2006.04354 (2020)
- [i34] Tiancheng Jin, Haipeng Luo: Simultaneously Learning Stochastic and Adversarial Episodic MDPs with Known Transition. CoRR abs/2006.05606 (2020)
- [i33] Chung-Wei Lee, Haipeng Luo, Chen-Yu Wei, Mengxiao Zhang: Bias no more: high-probability data-dependent regret bounds for adversarial bandits and MDPs. CoRR abs/2006.08040 (2020)
- [i32] Chung-Wei Lee, Haipeng Luo, Chen-Yu Wei, Mengxiao Zhang: Linear Last-iterate Convergence for Matrix Games and Stochastic Games. CoRR abs/2006.09517 (2020)
- [i31] Dylan J. Foster, Akshay Krishnamurthy, Haipeng Luo: Open Problem: Model Selection for Contextual Bandits. CoRR abs/2006.10940 (2020)
- [i30] Yining Chen, Haipeng Luo, Tengyu Ma, Chicheng Zhang: Active Online Domain Adaptation. CoRR abs/2006.14481 (2020)
- [i29] Dirk van der Hoeven, Ashok Cutkosky, Haipeng Luo: Comparator-adaptive Convex Bandits. CoRR abs/2007.08448 (2020)
- [i28] Chen-Yu Wei, Mehdi Jafarnia-Jahromi, Haipeng Luo, Rahul Jain: Learning Infinite-horizon Average-reward MDPs with Linear Function Approximation. CoRR abs/2007.11849 (2020)
- [i27] Liyu Chen, Haipeng Luo, Chen-Yu Wei: Minimax Regret for Stochastic Shortest Path with Adversarial Costs and Known Transition. CoRR abs/2012.04053 (2020)
2010 – 2019
- 2019
- [c25] Peter Auer, Yifang Chen, Pratik Gajane, Chung-Wei Lee, Haipeng Luo, Ronald Ortner, Chen-Yu Wei: Achieving Optimal Dynamic Regret for Non-stationary Bandits without Prior Information. COLT 2019: 159-163
- [c24] Sébastien Bubeck, Yuanzhi Li, Haipeng Luo, Chen-Yu Wei: Improved Path-length Regret Bounds for Bandits. COLT 2019: 508-528
- [c23] Yifang Chen, Chung-Wei Lee, Haipeng Luo, Chen-Yu Wei: A New Algorithm for Non-stationary Contextual Bandits: Efficient, Optimal and Parameter-free. COLT 2019: 696-726
- [c22] Julian Zimmert, Haipeng Luo, Chen-Yu Wei: Beating Stochastic and Adversarial Semi-bandits Optimally and Simultaneously. ICML 2019: 7683-7692
- [c21] Kai Zheng, Haipeng Luo, Ilias Diakonikolas, Liwei Wang: Equipping Experts/Bandits with Long-term Memory. NeurIPS 2019: 5927-5937
- [c20] Dylan J. Foster, Spencer Greenberg, Satyen Kale, Haipeng Luo, Mehryar Mohri, Karthik Sridharan: Hypothesis Set Stability and Generalization. NeurIPS 2019: 6726-6736
- [c19] Dylan J. Foster, Akshay Krishnamurthy, Haipeng Luo: Model Selection for Contextual Bandits. NeurIPS 2019: 14714-14725
- [i26] Julian Zimmert, Haipeng Luo, Chen-Yu Wei: Beating Stochastic and Adversarial Semi-bandits Optimally and Simultaneously. CoRR abs/1901.08779 (2019)
- [i25] Sébastien Bubeck, Yuanzhi Li, Haipeng Luo, Chen-Yu Wei: Improved Path-length Regret Bounds for Bandits. CoRR abs/1901.10604 (2019)
- [i24] Yifang Chen, Chung-Wei Lee, Haipeng Luo, Chen-Yu Wei: A New Algorithm for Non-stationary Contextual Bandits: Efficient, Optimal, and Parameter-free. CoRR abs/1902.00980 (2019)
- [i23] Dylan J. Foster, Spencer Greenberg, Satyen Kale, Haipeng Luo, Mehryar Mohri, Karthik Sridharan: Hypothesis Set Stability and Generalization. CoRR abs/1904.04755 (2019)
- [i22] Kai Zheng, Haipeng Luo, Ilias Diakonikolas, Liwei Wang: Equipping Experts/Bandits with Long-term Memory. CoRR abs/1905.12950 (2019)
- [i21] Dylan J. Foster, Akshay Krishnamurthy, Haipeng Luo: Model selection for contextual bandits. CoRR abs/1906.00531 (2019)
- [i20] Chen-Yu Wei, Mehdi Jafarnia-Jahromi, Haipeng Luo, Hiteshi Sharma, Rahul Jain: Model-free Reinforcement Learning in Infinite-horizon Average-reward Markov Decision Processes. CoRR abs/1910.07072 (2019)
- [i19] Tiancheng Jin, Haipeng Luo: Learning Adversarial MDPs with Bandit Feedback and Unknown Transition. CoRR abs/1912.01192 (2019)
- [i18] Yifang Chen, Alex Cuellar, Haipeng Luo, Jignesh Modi, Heramb Nemlekar, Stefanos Nikolaidis: Fair Contextual Multi-Armed Bandits: Theory and Experiments. CoRR abs/1912.08055 (2019)
- 2018
- [c18] Dylan J. Foster, Satyen Kale, Haipeng Luo, Mehryar Mohri, Karthik Sridharan: Logistic Regression: The Importance of Being Improper. COLT 2018: 167-208
- [c17] Chen-Yu Wei, Haipeng Luo: More Adaptive Algorithms for Adversarial Bandits. COLT 2018: 1263-1291
- [c16] Haipeng Luo, Chen-Yu Wei, Alekh Agarwal, John Langford: Efficient Contextual Bandits in Non-stationary Worlds. COLT 2018: 1739-1776
- [c15] Dylan J. Foster, Alekh Agarwal, Miroslav Dudík, Haipeng Luo, Robert E. Schapire: Practical Contextual Bandits with Regression Oracles. ICML 2018: 1534-1543
- [c14] Haipeng Luo, Chen-Yu Wei, Kai Zheng: Efficient Online Portfolio with Logarithmic Regret. NeurIPS 2018: 8245-8255
- [i17] Chen-Yu Wei, Haipeng Luo: More Adaptive Algorithms for Adversarial Bandits. CoRR abs/1801.03265 (2018)
- [i16] Dylan J. Foster, Alekh Agarwal, Miroslav Dudík, Haipeng Luo, Robert E. Schapire: Practical Contextual Bandits with Regression Oracles. CoRR abs/1803.01088 (2018)
- [i15] Dylan J. Foster, Satyen Kale, Haipeng Luo, Mehryar Mohri, Karthik Sridharan: Logistic Regression: The Importance of Being Improper. CoRR abs/1803.09349 (2018)
- [i14] Haipeng Luo, Chen-Yu Wei, Kai Zheng: Efficient Online Portfolio with Logarithmic Regret. CoRR abs/1805.07430 (2018)
- 2017
- [c13] Alekh Agarwal, Akshay Krishnamurthy, John Langford, Haipeng Luo, Robert E. Schapire: Open Problem: First-Order Regret Bounds for Contextual Bandits. COLT 2017: 4-7
- [c12] Alekh Agarwal, Haipeng Luo, Behnam Neyshabur, Robert E. Schapire: Corralling a Band of Bandit Algorithms. COLT 2017: 12-38
- [c11] Miroslav Dudík, Nika Haghtalab, Haipeng Luo, Robert E. Schapire, Vasilis Syrgkanis, Jennifer Wortman Vaughan: Oracle-Efficient Online Learning and Auction Design. FOCS 2017: 528-539
- [i13] Haipeng Luo, Alekh Agarwal, John Langford: Efficient Contextual Bandits in Non-stationary Worlds. CoRR abs/1708.01799 (2017)
- 2016
- [j12] Haipeng Luo, Ting Chen: Three-Dimensional Surface Displacement Field Associated with the 25 April 2015 Gorkha, Nepal, Earthquake: Solution from Integrated InSAR and GPS Measurements with an Extended SISTEM Approach. Remote. Sens. 8(7): 559 (2016)
- [c10] Elad Hazan, Haipeng Luo: Variance-Reduced and Projection-Free Stochastic Optimization. ICML 2016: 1263-1271
- [c9] Alina Beygelzimer, Satyen Kale, Haipeng Luo: Optimal and Adaptive Algorithms for Online Boosting. IJCAI 2016: 4120-4124
- [c8] Haipeng Luo, Alekh Agarwal, Nicolò Cesa-Bianchi, John Langford: Efficient Second Order Online Learning by Sketching. NIPS 2016: 902-910
- [c7] Vasilis Syrgkanis, Haipeng Luo, Akshay Krishnamurthy, Robert E. Schapire: Improved Regret Bounds for Oracle-Based Adversarial Contextual Bandits. NIPS 2016: 3135-3143
- [i12] Elad Hazan, Haipeng Luo: Variance-Reduced and Projection-Free Stochastic Optimization. CoRR abs/1602.02101 (2016)
- [i11] Haipeng Luo, Alekh Agarwal, Nicolò Cesa-Bianchi, John Langford: Efficient Second Order Online Learning via Sketching. CoRR abs/1602.02202 (2016)
- [i10] Vasilis Syrgkanis, Haipeng Luo, Akshay Krishnamurthy, Robert E. Schapire: Improved Regret Bounds for Oracle-Based Adversarial Contextual Bandits. CoRR abs/1606.00313 (2016)
- [i9] Miroslav Dudík, Nika Haghtalab, Haipeng Luo, Robert E. Schapire, Vasilis Syrgkanis, Jennifer Wortman Vaughan: Oracle-Efficient Learning and Auction Design. CoRR abs/1611.01688 (2016)
- [i8] Alekh Agarwal, Haipeng Luo, Behnam Neyshabur, Robert E. Schapire: Corralling a Band of Bandit Algorithms. CoRR abs/1612.06246 (2016)
- 2015
- [c6] Haipeng Luo, Robert E. Schapire: Achieving All with No Parameters: AdaNormalHedge. COLT 2015: 1286-1304
- [c5] Alina Beygelzimer, Satyen Kale, Haipeng Luo: Optimal and Adaptive Algorithms for Online Boosting. ICML 2015: 2323-2331
- [c4] Alina Beygelzimer, Elad Hazan, Satyen Kale, Haipeng Luo: Online Gradient Boosting. NIPS 2015: 2458-2466
- [c3] Vasilis Syrgkanis, Alekh Agarwal, Haipeng Luo, Robert E. Schapire: Fast Convergence of Regularized Learning in Games. NIPS 2015: 2989-2997
- [i7] Alina Beygelzimer, Satyen Kale, Haipeng Luo: Optimal and Adaptive Algorithms for Online Boosting. CoRR abs/1502.02651 (2015)
- [i6] Haipeng Luo, Robert E. Schapire: Achieving All with No Parameters: Adaptive NormalHedge. CoRR abs/1502.05934 (2015)
- [i5] Alina Beygelzimer, Elad Hazan, Satyen Kale, Haipeng Luo: Online Gradient Boosting. CoRR abs/1506.04820 (2015)
- [i4] Vasilis Syrgkanis, Alekh Agarwal, Haipeng Luo, Robert E. Schapire: Fast Convergence of Regularized Learning in Games. CoRR abs/1507.00407 (2015)
- 2014
- [j11] Zhen Xiao, Qi Chen, Haipeng Luo: Automatic Scaling of Internet Applications for Cloud Computing Services. IEEE Trans. Computers 63(5): 1111-1123 (2014)
- [j10] Weijia Song, Zhen Xiao, Qi Chen, Haipeng Luo: Adaptive Resource Provisioning for the Cloud Using Online Bin Packing. IEEE Trans. Computers 63(11): 2647-2660 (2014)
- [c2] Haipeng Luo, Robert E. Schapire: Towards Minimax Online Learning with Unknown Time Horizon. ICML 2014: 226-234
- [c1] Haipeng Luo, Robert E. Schapire: A Drifting-Games Analysis for Online Learning and Applications to Boosting. NIPS 2014: 1368-1376
- [i3] Haipeng Luo, Robert E. Schapire: A Drifting-Games Analysis for Online Learning and Applications to Boosting. CoRR abs/1406.1856 (2014)
- [i2] Haipeng Luo, Patrick Haffner, Jean-François Paiement: Accelerated Parallel Optimization Methods for Large Scale Machine Learning. CoRR abs/1411.6725 (2014)
- 2013
- [i1] Haipeng Luo, Robert E. Schapire: Online Learning with Unknown Time Horizon. CoRR abs/1307.8187 (2013)
- 2010
- [j9] Dianhua Wu, Minquan Cheng, Zhilin Chen, Haipeng Luo: The existence of balanced (υ, {3, 6}, 1) difference families. Sci. China Inf. Sci. 53(8): 1584-1590 (2010)
- [j8] Kang Wu, Wenlong Su, Haipeng Luo, Xiaodong Xu: A Generalization of Generalized Paley Graphs and New Lower Bounds for R(3,q). Electron. J. Comb. 17(1) (2010)
- [j7] Xiaodong Xu, Haipeng Luo, Zehui Shao: Upper and lower bounds for Fv(4,4;5). Electron. J. Comb. 17(1) (2010)
2000 – 2009
- 2009
- [j6] Kang Wu, Wenlong Su, Haipeng Luo, Xiaodong Xu: New lower bounds for seven classical Ramsey numbers R(3, q). Appl. Math. Lett. 22(3): 365-368 (2009)
- 2003
- [j5] Guiqing Li, Wenlong Su, Haipeng Luo: Edge colorings of the complete graph K149 and the lower bounds of three Ramsey numbers. Discret. Appl. Math. 126(2-3): 167-179 (2003)
- 2002
- [j4] Haipeng Luo, Wenlong Su, Zhenchong Li: The properties of self-complementary graphs and new lower bounds for diagonal Ramsey numbers. Australas. J Comb. 25: 103-116 (2002)
- [j3]