Jingfeng Wu
Journal Articles
- 2024
- [j3]Andrea Soltoggio, Eseoghene Ben-Iwhiwhu, Vladimir Braverman, Eric Eaton, Benjamin Epstein, Yunhao Ge, Lucy Halperin, Jonathan P. How, Laurent Itti, Michael A. Jacobs, Pavan Kantharaju, Long Le, Steven Lee, Xinran Liu, Sildomar T. Monteiro, David Musliner, Saptarshi Nath, Priyadarshini Panda, Christos Peridis, Hamed Pirsiavash, Vishwa S. Parekh, Kaushik Roy, Shahaf S. Shperberg, Hava T. Siegelmann, Peter Stone, Kyle Vedder, Jingfeng Wu, Lin Yang, Guangyao Zheng, Soheil Kolouri:
A collective AI via lifelong learning and sharing at the edge. Nat. Mac. Intell. 6(3): 251-264 (2024)
- 2023
- [j2]Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Sham M. Kakade:
Benign Overfitting of Constant-Stepsize SGD for Linear Regression. J. Mach. Learn. Res. 24: 326:1-326:58 (2023)
- 2017
- [j1]Jingfeng Wu, Weidong Jin, Peng Tang:
Survey on Monitoring Techniques for Data Abnormalities (数据异常的监测技术综述). Computer Science (计算机科学) 44(Z11): 24-28 (2017)
Conference and Workshop Papers
- 2024
- [c25]Jingfeng Wu, Peter L. Bartlett, Matus Telgarsky, Bin Yu:
Large Stepsize Gradient Descent for Logistic Loss: Non-Monotonicity of the Loss Improves Optimization Efficiency. COLT 2024: 5019-5073
- [c24]Xuheng Li, Yihe Deng, Jingfeng Wu, Dongruo Zhou, Quanquan Gu:
Risk Bounds of Accelerated SGD for Overparameterized Linear Regression. ICLR 2024
- [c23]Jingfeng Wu, Difan Zou, Zixiang Chen, Vladimir Braverman, Quanquan Gu, Peter L. Bartlett:
How Many Pretraining Tasks Are Needed for In-Context Learning of Linear Regression? ICLR 2024
- 2023
- [c22]Haoran Li, Jingfeng Wu, Vladimir Braverman:
Fixed Design Analysis of Regularization-Based Continual Learning. CoLLAs 2023: 513-533
- [c21]Jingfeng Wu, Difan Zou, Zixiang Chen, Vladimir Braverman, Quanquan Gu, Sham M. Kakade:
Finite-Sample Analysis of Learning High-Dimensional Single ReLU Neuron. ICML 2023: 37919-37951
- [c20]Jingfeng Wu, Vladimir Braverman, Jason D. Lee:
Implicit Bias of Gradient Descent for Logistic Regression at the Edge of Stability. NeurIPS 2023
- [c19]Jingfeng Wu, Wennan Zhu, Peter Kairouz, Vladimir Braverman:
Private Federated Frequency Estimation: Adapting to the Hardness of the Instance. NeurIPS 2023
- 2022
- [c18]Jingfeng Wu, Vladimir Braverman, Lin Yang:
Gap-Dependent Unsupervised Exploration for Reinforcement Learning. AISTATS 2022: 4109-4131
- [c17]Jingfeng Wu, Difan Zou, Vladimir Braverman, Quanquan Gu, Sham M. Kakade:
Last Iterate Risk Bounds of SGD with Decaying Stepsize for Overparameterized Linear Regression. ICML 2022: 24280-24314
- [c16]Jingfeng Wu, Difan Zou, Vladimir Braverman, Quanquan Gu, Sham M. Kakade:
The Power and Limitation of Pretraining-Finetuning for Linear Regression under Covariate Shift. NeurIPS 2022
- [c15]Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Sham M. Kakade:
Risk Bounds of Multi-Pass SGD for Least Squares in the Interpolation Regime. NeurIPS 2022
- 2021
- [c14]Haoran Li, Aditya Krishnan, Jingfeng Wu, Soheil Kolouri, Praveen K. Pilly, Vladimir Braverman:
Lifelong Learning with Sketched Structural Regularization. ACML 2021: 985-1000
- [c13]Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Sham M. Kakade:
Benign Overfitting of Constant-Stepsize SGD for Linear Regression. COLT 2021: 4633-4635
- [c12]Jingfeng Wu, Difan Zou, Vladimir Braverman, Quanquan Gu:
Direction Matters: On the Implicit Bias of Stochastic Gradient Descent with Moderate Learning Rate. ICLR 2021
- [c11]Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Dean P. Foster, Sham M. Kakade:
The Benefits of Implicit Regularization from SGD in Least Squares Problems. NeurIPS 2021: 5456-5468
- [c10]Jingfeng Wu, Vladimir Braverman, Lin Yang:
Accommodating Picky Customers: Regret Bound and Exploration Complexity for Multi-Objective Reinforcement Learning. NeurIPS 2021: 13112-13124
- [c9]Zhuolong Yu, Jingfeng Wu, Vladimir Braverman, Ion Stoica, Xin Jin:
Twenty Years After: Hierarchical Core-Stateless Fair Queueing. NSDI 2021: 29-45
- [c8]Jie You, Jingfeng Wu, Xin Jin, Mosharaf Chowdhury:
Ship Compute or Ship Data? Why Not Both? NSDI 2021: 633-651
- [c7]Zhuolong Yu, Chuheng Hu, Jingfeng Wu, Xiao Sun, Vladimir Braverman, Mosharaf Chowdhury, Zhenhua Liu, Xin Jin:
Programmable packet scheduling with a single queue. SIGCOMM 2021: 179-193
- 2020
- [c6]Jingfeng Wu, Vladimir Braverman, Lin Yang:
Obtaining Adjustable Regularization for Free via Iterate Averaging. ICML 2020: 10344-10354
- [c5]Jingfeng Wu, Wenqing Hu, Haoyi Xiong, Jun Huan, Vladimir Braverman, Zhanxing Zhu:
On the Noisy Gradient Descent that Generalizes as SGD. ICML 2020: 10367-10376
- 2019
- [c4]Bing Yu, Jingfeng Wu, Jinwen Ma, Zhanxing Zhu:
Tangent-Normal Adversarial Regularization for Semi-Supervised Learning. CVPR 2019: 10676-10684
- [c3]Jie An, Jingfeng Wu, Jinwen Ma:
Automatic Cloud Segmentation Based on Fused Fully Convolutional Networks. ICIC (1) 2019: 520-528
- [c2]Zhanxing Zhu, Jingfeng Wu, Bing Yu, Lei Wu, Jinwen Ma:
The Anisotropic Noise in Stochastic Gradient Descent: Its Behavior of Escaping from Sharp Minima and Regularization Effects. ICML 2019: 7654-7663
- 2016
- [c1]Bo Chen, Xiu-e Gao, Qingguo Zheng, Jingfeng Wu:
Research on human body composition prediction model based on Akaike Information Criterion and improved entropy method. CISP-BMEI 2016: 1882-1886
Informal and Other Publications
- 2024
- [i25]Ruiqi Zhang, Jingfeng Wu, Peter L. Bartlett:
In-Context Learning of a Linear Transformer Block: Benefits of the MLP Component and One-Step GD Initialization. CoRR abs/2402.14951 (2024)
- [i24]Jingfeng Wu, Peter L. Bartlett, Matus Telgarsky, Bin Yu:
Large Stepsize Gradient Descent for Logistic Loss: Non-Monotonicity of the Loss Improves Optimization Efficiency. CoRR abs/2402.15926 (2024)
- [i23]Licong Lin, Jingfeng Wu, Sham M. Kakade, Peter L. Bartlett, Jason D. Lee:
Scaling Laws in Linear Regression: Compute, Parameters, and Data. CoRR abs/2406.08466 (2024)
- [i22]Yuhang Cai, Jingfeng Wu, Song Mei, Michael Lindsey, Peter L. Bartlett:
Large Stepsize Gradient Descent for Non-Homogeneous Two-Layer Networks: Margin Improvement and Fast Optimization. CoRR abs/2406.08654 (2024)
- [i21]Jingfeng Wu, Minxian Xu, Yiyuan He, Kejiang Ye, Chengzhong Xu:
CloudNativeSim: a toolkit for modeling and simulation of cloud-native applications. CoRR abs/2409.05093 (2024)
- [i20]Yiyuan He, Minxian Xu, Jingfeng Wu, Wanyi Zheng, Kejiang Ye, Cheng-Zhong Xu:
UELLM: A Unified and Efficient Approach for LLM Inference Serving. CoRR abs/2409.14961 (2024)
- 2023
- [i19]Jingfeng Wu, Difan Zou, Zixiang Chen, Vladimir Braverman, Quanquan Gu, Sham M. Kakade:
Learning High-Dimensional Single-Neuron ReLU Networks with Finite Samples. CoRR abs/2303.02255 (2023)
- [i18]Haoran Li, Jingfeng Wu, Vladimir Braverman:
Fixed Design Analysis of Regularization-Based Continual Learning. CoRR abs/2303.10263 (2023)
- [i17]Jingfeng Wu, Vladimir Braverman, Jason D. Lee:
Implicit Bias of Gradient Descent for Logistic Regression at the Edge of Stability. CoRR abs/2305.11788 (2023)
- [i16]Jingfeng Wu, Wennan Zhu, Peter Kairouz, Vladimir Braverman:
Private Federated Frequency Estimation: Adapting to the Hardness of the Instance. CoRR abs/2306.09396 (2023)
- [i15]Jingfeng Wu, Difan Zou, Zixiang Chen, Vladimir Braverman, Quanquan Gu, Peter L. Bartlett:
How Many Pretraining Tasks Are Needed for In-Context Learning of Linear Regression? CoRR abs/2310.08391 (2023)
- [i14]Xuheng Li, Yihe Deng, Jingfeng Wu, Dongruo Zhou, Quanquan Gu:
Risk Bounds of Accelerated SGD for Overparameterized Linear Regression. CoRR abs/2311.14222 (2023)
- 2022
- [i13]Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Sham M. Kakade:
Risk Bounds of Multi-Pass SGD for Least Squares in the Interpolation Regime. CoRR abs/2203.03159 (2022)
- [i12]Jingfeng Wu, Difan Zou, Vladimir Braverman, Quanquan Gu, Sham M. Kakade:
The Power and Limitation of Pretraining-Finetuning for Linear Regression under Covariate Shift. CoRR abs/2208.01857 (2022)
- 2021
- [i11]Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Sham M. Kakade:
Benign Overfitting of Constant-Stepsize SGD for Linear Regression. CoRR abs/2103.12692 (2021)
- [i10]Haoran Li, Aditya Krishnan, Jingfeng Wu, Soheil Kolouri, Praveen K. Pilly, Vladimir Braverman:
Lifelong Learning with Sketched Structural Regularization. CoRR abs/2104.08604 (2021)
- [i9]Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Dean P. Foster, Sham M. Kakade:
The Benefits of Implicit Regularization from SGD in Least Squares Problems. CoRR abs/2108.04552 (2021)
- [i8]Jingfeng Wu, Vladimir Braverman, Lin F. Yang:
Gap-Dependent Unsupervised Exploration for Reinforcement Learning. CoRR abs/2108.05439 (2021)
- [i7]Jingfeng Wu, Difan Zou, Vladimir Braverman, Quanquan Gu, Sham M. Kakade:
Last Iterate Risk Bounds of SGD with Decaying Stepsize for Overparameterized Linear Regression. CoRR abs/2110.06198 (2021)
- 2020
- [i6]Jingfeng Wu, Vladimir Braverman, Lin F. Yang:
Obtaining Adjustable Regularization for Free via Iterate Averaging. CoRR abs/2008.06736 (2020)
- [i5]Jingfeng Wu, Difan Zou, Vladimir Braverman, Quanquan Gu:
Direction Matters: On the Implicit Regularization Effect of Stochastic Gradient Descent with Moderate Learning Rate. CoRR abs/2011.02538 (2020)
- [i4]Jingfeng Wu, Vladimir Braverman, Lin F. Yang:
Accommodating Picky Customers: Regret Bound and Exploration Complexity for Multi-Objective Reinforcement Learning. CoRR abs/2011.13034 (2020)
- 2019
- [i3]Jingfeng Wu, Wenqing Hu, Haoyi Xiong, Jun Huan, Zhanxing Zhu:
The Multiplicative Noise in Stochastic Gradient Descent: Data-Dependent Regularization, Continuous and Discrete Approximation. CoRR abs/1906.07405 (2019)
- 2018
- [i2]Zhanxing Zhu, Jingfeng Wu, Bing Yu, Lei Wu, Jinwen Ma:
The Regularization Effects of Anisotropic Noise in Stochastic Gradient Descent. CoRR abs/1803.00195 (2018)
- [i1]Bing Yu, Jingfeng Wu, Zhanxing Zhu:
Tangent-Normal Adversarial Regularization for Semi-supervised Learning. CoRR abs/1808.06088 (2018)
last updated on 2024-10-16 21:21 CEST by the dblp team
all metadata released as open data under CC0 1.0 license