Pengcheng He
2020 – today
- 2024
- [j6] Tianwei Zhou, Wenwen Zhang, Ben Niu, Pengcheng He, Guanghui Yue: Parameter Control Framework for Multiobjective Evolutionary Computation Based on Deep Reinforcement Learning. Int. J. Intell. Syst. 2024: 1-17 (2024)
- [c53] Xinbei Ma, Yeyun Gong, Pengcheng He, Hai Zhao, Nan Duan: PROM: A Phrase-level Copying Mechanism with Pre-training for Abstractive Summarization. LREC/COLING 2024: 13103-13119
- [c52] Wen Xiao, Yujia Xie, Giuseppe Carenini, Pengcheng He: Personalized Abstractive Summarization by Tri-agent Generation Pipeline. EACL (Findings) 2024: 570-581
- [c51] Ming Zhong, Chenxin An, Weizhu Chen, Jiawei Han, Pengcheng He: Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective. ICLR 2024
- [c50] Yung-Sung Chuang, Yujia Xie, Hongyin Luo, Yoon Kim, James R. Glass, Pengcheng He: DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models. ICLR 2024
- [c49] Yixiao Li, Yifan Yu, Chen Liang, Nikos Karampatziakis, Pengcheng He, Weizhu Chen, Tuo Zhao: LoftQ: LoRA-Fine-Tuning-aware Quantization for Large Language Models. ICLR 2024
- [c48] Huangjie Zheng, Zhendong Wang, Jianbo Yuan, Guanghan Ning, Pengcheng He, Quanzeng You, Hongxia Yang, Mingyuan Zhou: Learning Stackable and Skippable LEGO Bricks for Efficient, Reconfigurable, and Variable-Resolution Diffusion Modeling. ICLR 2024
- [c47] Shujian Zhang, Korawat Tanwisuth, Chengyue Gong, Pengcheng He, Mingyuan Zhou: Switchable Decision: Dynamic Neural Generation Networks. ICML 2024
- [i59] Shujian Zhang, Korawat Tanwisuth, Chengyue Gong, Pengcheng He, Mingyuan Zhou: Switchable Decision: Dynamic Neural Generation Networks. CoRR abs/2405.04513 (2024)
- 2023
- [j5] Tianwei Zhou, Pengcheng He, Ben Niu, Guanghui Yue, Hong Wang: A novel competitive constrained dual-archive dual-stage evolutionary algorithm for constrained multiobjective optimization. Swarm Evol. Comput. 83: 101417 (2023)
- [c46] Yu Li, Baolin Peng, Pengcheng He, Michel Galley, Zhou Yu, Jianfeng Gao: DIONYSUS: A Pre-trained Model for Low-Resource Dialogue Summarization. ACL (1) 2023: 1368-1386
- [c45] Pengcheng He, Baolin Peng, Song Wang, Yang Liu, Ruochen Xu, Hany Hassan, Yu Shi, Chenguang Zhu, Wayne Xiong, Michael Zeng, Jianfeng Gao, Xuedong Huang: Z-Code++: A Pre-trained Language Model Optimized for Abstractive Summarization. ACL (1) 2023: 5095-5112
- [c44] Xinbei Ma, Yeyun Gong, Pengcheng He, Hai Zhao, Nan Duan: Query Rewriting in Retrieval-Augmented Large Language Models. EMNLP 2023: 5303-5315
- [c43] Ruochen Xu, Song Wang, Yang Liu, Shuohang Wang, Yichong Xu, Dan Iter, Pengcheng He, Chenguang Zhu, Michael Zeng: LMGQS: A Large-scale Dataset for Query-focused Summarization. EMNLP (Findings) 2023: 14764-14776
- [c42] Pengcheng He, Jianfeng Gao, Weizhu Chen: DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing. ICLR 2023
- [c41] Zhendong Wang, Huangjie Zheng, Pengcheng He, Weizhu Chen, Mingyuan Zhou: Diffusion-GAN: Training GANs with Diffusion. ICLR 2023
- [c40] Qingru Zhang, Minshuo Chen, Alexander Bukharin, Pengcheng He, Yu Cheng, Weizhu Chen, Tuo Zhao: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning. ICLR 2023
- [c39] Huangjie Zheng, Pengcheng He, Weizhu Chen, Mingyuan Zhou: Truncated Diffusion Probabilistic Models and Diffusion-based Adversarial Auto-Encoders. ICLR 2023
- [c38] Yixiao Li, Yifan Yu, Qingru Zhang, Chen Liang, Pengcheng He, Weizhu Chen, Tuo Zhao: LoSparse: Structured Compression of Large Language Models based on Low-Rank and Sparse Approximation. ICML 2023: 20336-20350
- [c37] Chen Liang, Simiao Zuo, Qingru Zhang, Pengcheng He, Weizhu Chen, Tuo Zhao: Less is More: Task-aware Layer-wise Distillation for Language Model Compression. ICML 2023: 20852-20867
- [c36] Jason Phang, Yi Mao, Pengcheng He, Weizhu Chen: HyperTuning: Toward Adapting Large Language Models without Back-propagation. ICML 2023: 27854-27875
- [c35] Korawat Tanwisuth, Shujian Zhang, Huangjie Zheng, Pengcheng He, Mingyuan Zhou: POUF: Prompt-Oriented Unsupervised Fine-tuning for Large Pre-trained Models. ICML 2023: 33816-33832
- [c34] Zekun Li, Baolin Peng, Pengcheng He, Michel Galley, Jianfeng Gao, Xifeng Yan: Guiding Large Language Models via Directional Stimulus Prompting. NeurIPS 2023
- [c33] Zhendong Wang, Yifan Jiang, Yadong Lu, Yelong Shen, Pengcheng He, Weizhu Chen, Zhangyang (Atlas) Wang, Mingyuan Zhou: In-Context Learning Unlocked for Diffusion Models. NeurIPS 2023
- [c32] Zhendong Wang, Yifan Jiang, Huangjie Zheng, Peihao Wang, Pengcheng He, Zhangyang Wang, Weizhu Chen, Mingyuan Zhou: Patch Diffusion: Faster and More Data-Efficient Training of Diffusion Models. NeurIPS 2023
- [c31] Pengcheng He, Yijia Tang, Fan Xu, Qingjiang Shi: Cellular Network Optimization Using Unfolding-Based Graph Neural Networks. SPAWC 2023: 551-555
- [i58] Korawat Tanwisuth, Shujian Zhang, Pengcheng He, Mingyuan Zhou: A Prototype-Oriented Clustering for Domain Shift with Source Privacy. CoRR abs/2302.03807 (2023)
- [i57] Zekun Li, Baolin Peng, Pengcheng He, Michel Galley, Jianfeng Gao, Xifeng Yan: Guiding Large Language Models via Directional Stimulus Prompting. CoRR abs/2302.11520 (2023)
- [i56] Baolin Peng, Michel Galley, Pengcheng He, Hao Cheng, Yujia Xie, Yu Hu, Qiuyuan Huang, Lars Liden, Zhou Yu, Weizhu Chen, Jianfeng Gao: Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback. CoRR abs/2302.12813 (2023)
- [i55] Qingru Zhang, Minshuo Chen, Alexander Bukharin, Pengcheng He, Yu Cheng, Weizhu Chen, Tuo Zhao: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning. CoRR abs/2303.10512 (2023)
- [i54] Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, Jianfeng Gao: Instruction Tuning with GPT-4. CoRR abs/2304.03277 (2023)
- [i53] Zhendong Wang, Yifan Jiang, Huangjie Zheng, Peihao Wang, Pengcheng He, Zhangyang Wang, Weizhu Chen, Mingyuan Zhou: Patch Diffusion: Faster and More Data-Efficient Training of Diffusion Models. CoRR abs/2304.12526 (2023)
- [i52] Korawat Tanwisuth, Shujian Zhang, Huangjie Zheng, Pengcheng He, Mingyuan Zhou: POUF: Prompt-oriented unsupervised fine-tuning for large pre-trained models. CoRR abs/2305.00350 (2023)
- [i51] Zhendong Wang, Yifan Jiang, Yadong Lu, Yelong Shen, Pengcheng He, Weizhu Chen, Zhangyang Wang, Mingyuan Zhou: In-Context Learning Unlocked for Diffusion Models. CoRR abs/2305.01115 (2023)
- [i50] Wen Xiao, Yujia Xie, Giuseppe Carenini, Pengcheng He: ChatGPT-steered Editing Instructor for Customization of Abstractive Summarization. CoRR abs/2305.02483 (2023)
- [i49] Lesly Miculicich, Yujia Xie, Song Wang, Pengcheng He: Summarization with Precise Length Control. CoRR abs/2305.05171 (2023)
- [i48] Xinbei Ma, Yeyun Gong, Pengcheng He, Hai Zhao, Nan Duan: PROM: A Phrase-level Copying Mechanism with Pre-training for Abstractive Summarization. CoRR abs/2305.06647 (2023)
- [i47] Xinbei Ma, Yeyun Gong, Pengcheng He, Hai Zhao, Nan Duan: Query Rewriting for Retrieval-Augmented Large Language Models. CoRR abs/2305.14283 (2023)
- [i46] Yujia Xie, Xun Wang, Si-Qing Chen, Wayne Xiong, Pengcheng He: Interactive Editing for Text Summarization. CoRR abs/2306.03067 (2023)
- [i45] Yixiao Li, Yifan Yu, Qingru Zhang, Chen Liang, Pengcheng He, Weizhu Chen, Tuo Zhao: LoSparse: Structured Compression of Large Language Models based on Low-Rank and Sparse Approximation. CoRR abs/2306.11222 (2023)
- [i44] Sumit Asthana, Sagih Hilleli, Pengcheng He, Aaron Halfaker: Summaries, Highlights, and Action items: Design, implementation and evaluation of an LLM-powered meeting recap system. CoRR abs/2307.15793 (2023)
- [i43] Zekun Li, Baolin Peng, Pengcheng He, Xifeng Yan: Do you really follow me? Adversarial Instructions for Evaluating the Robustness of Large Language Models. CoRR abs/2308.10819 (2023)
- [i42] Alexander Bukharin, Yixiao Li, Pengcheng He, Weizhu Chen, Tuo Zhao: Deep Reinforcement Learning from Hierarchical Weak Preference Feedback. CoRR abs/2309.02632 (2023)
- [i41] Yung-Sung Chuang, Yujia Xie, Hongyin Luo, Yoon Kim, James R. Glass, Pengcheng He: DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models. CoRR abs/2309.03883 (2023)
- [i40] Huangjie Zheng, Zhendong Wang, Jianbo Yuan, Guanghan Ning, Pengcheng He, Quanzeng You, Hongxia Yang, Mingyuan Zhou: Learning Stackable and Skippable LEGO Bricks for Efficient, Reconfigurable, and Variable-Resolution Diffusion Modeling. CoRR abs/2310.06389 (2023)
- [i39] Yixiao Li, Yifan Yu, Chen Liang, Pengcheng He, Nikos Karampatziakis, Weizhu Chen, Tuo Zhao: LoftQ: LoRA-Fine-Tuning-Aware Quantization for Large Language Models. CoRR abs/2310.08659 (2023)
- [i38] Ming Zhong, Chenxin An, Weizhu Chen, Jiawei Han, Pengcheng He: Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective. CoRR abs/2310.11451 (2023)
- 2022
- [j4] Jin Sun, Jing Liu, Hao Chen, Pengcheng He, Huihong Yuan, Ze Yan: Experiences and Lessons Learned From DR Resources Participating in the US and UK Capacity Markets: Mechanisms, Status, Dilemmas and Recommendations. IEEE Access 10: 83851-83868 (2022)
- [c30] Chen Liang, Pengcheng He, Yelong Shen, Weizhu Chen, Tuo Zhao: CAMERO: Consistency Regularized Ensemble of Perturbed Language Models with Weight Sharing. ACL (1) 2022: 7162-7175
- [c29] Chen Liang, Haoming Jiang, Simiao Zuo, Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen, Tuo Zhao: No Parameters Left Behind: Sensitivity Guided Adaptive Learning Rate for Training Large Transformer Models. ICLR 2022
- [c28] Qingru Zhang, Simiao Zuo, Chen Liang, Alexander Bukharin, Pengcheng He, Weizhu Chen, Tuo Zhao: PLATON: Pruning Large Transformer Models with Upper Confidence Bound of Weight Importance. ICML 2022: 26809-26823
- [c27] Yichong Xu, Chenguang Zhu, Shuohang Wang, Siqi Sun, Hao Cheng, Xiaodong Liu, Jianfeng Gao, Pengcheng He, Michael Zeng, Xuedong Huang: Human Parity on CommonsenseQA: Augmenting Self-Attention with External Attention. IJCAI 2022: 2762-2768
- [c26] Pengcheng He, Siyuan Lu, Xin Guan, Yibin Kang, Qingjiang Shi: A Zeroth-Order Block Coordinate Gradient Descent Method For Cellular Network Optimization. ISWCS 2022: 1-6
- [c25] Tianwei Zhou, Wenwen Zhang, Pengcheng He, Guanghui Yue: A Learned Multi-objective Bacterial Foraging Optimization Algorithm with Continuous Deep Q-Learning. ML4CS (3) 2022: 44-53
- [c24] Zhengbao Jiang, Yi Mao, Pengcheng He, Graham Neubig, Weizhu Chen: OmniTab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering. NAACL-HLT 2022: 932-942
- [c23] Shujian Zhang, Chengyue Gong, Xingchao Liu, Pengcheng He, Weizhu Chen, Mingyuan Zhou: ALLSH: Active Learning Guided by Local Sensitivity and Hardness. NAACL-HLT (Findings) 2022: 1328-1342
- [c22] Simiao Zuo, Qingru Zhang, Chen Liang, Pengcheng He, Tuo Zhao, Weizhu Chen: MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation. NAACL-HLT 2022: 1610-1623
- [c21] Tianwei Zhou, Pengcheng He, Churong Zhang, Yichen Lai, Huifen Zhong, Xusheng Wu: An Improved Particle Swarm Optimization Algorithm for Irregular Flight Recovery Problem. ICSI (1) 2022: 190-200
- [c20] Tianwei Zhou, Wenwen Zhang, Junrui Lu, Pengcheng He, Keqin Yao: Reinforced Event-Driven Evolutionary Algorithm Based on Double Deep Q-network. ICSI (1) 2022: 294-304
- [i37] Chen Liang, Haoming Jiang, Simiao Zuo, Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen, Tuo Zhao: No Parameters Left Behind: Sensitivity Guided Adaptive Learning Rate for Training Large Transformer Models. CoRR abs/2202.02664 (2022)
- [i36] Huangjie Zheng, Pengcheng He, Weizhu Chen, Mingyuan Zhou: Mixing and Shifting: Exploiting Global and Local Dependencies in Vision MLPs. CoRR abs/2202.06510 (2022)
- [i35] Huangjie Zheng, Pengcheng He, Weizhu Chen, Mingyuan Zhou: Truncated Diffusion Probabilistic Models. CoRR abs/2202.09671 (2022)
- [i34] Chen Liang, Pengcheng He, Yelong Shen, Weizhu Chen, Tuo Zhao: CAMERO: Consistency Regularized Ensemble of Perturbed Language Models with Weight Sharing. CoRR abs/2204.06625 (2022)
- [i33] Simiao Zuo, Qingru Zhang, Chen Liang, Pengcheng He, Tuo Zhao, Weizhu Chen: MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation. CoRR abs/2204.07675 (2022)
- [i32] Shujian Zhang, Chengyue Gong, Xingchao Liu, Pengcheng He, Weizhu Chen, Mingyuan Zhou: ALLSH: Active Learning Guided by Local Sensitivity and Hardness. CoRR abs/2205.04980 (2022)
- [i31] Zhendong Wang, Huangjie Zheng, Pengcheng He, Weizhu Chen, Mingyuan Zhou: Diffusion-GAN: Training GANs with Diffusion. CoRR abs/2206.02262 (2022)
- [i30] Baolin Peng, Michel Galley, Pengcheng He, Chris Brockett, Lars Liden, Elnaz Nouri, Zhou Yu, Bill Dolan, Jianfeng Gao: GODEL: Large-Scale Pre-Training for Goal-Directed Dialog. CoRR abs/2206.11309 (2022)
- [i29] Qingru Zhang, Simiao Zuo, Chen Liang, Alexander Bukharin, Pengcheng He, Weizhu Chen, Tuo Zhao: PLATON: Pruning Large Transformer Models with Upper Confidence Bound of Weight Importance. CoRR abs/2206.12562 (2022)
- [i28] Zhengbao Jiang, Yi Mao, Pengcheng He, Graham Neubig, Weizhu Chen: OmniTab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering. CoRR abs/2207.03637 (2022)
- [i27] Pengcheng He, Baolin Peng, Liyang Lu, Song Wang, Jie Mei, Yang Liu, Ruochen Xu, Hany Hassan Awadalla, Yu Shi, Chenguang Zhu, Wayne Xiong, Michael Zeng, Jianfeng Gao, Xuedong Huang: Z-Code++: A Pre-trained Language Model Optimized for Abstractive Summarization. CoRR abs/2208.09770 (2022)
- [i26] Chen Liang, Simiao Zuo, Qingru Zhang, Pengcheng He, Weizhu Chen, Tuo Zhao: Less is More: Task-aware Layer-wise Distillation for Language Model Compression. CoRR abs/2210.01351 (2022)
- [i25] Jason Phang, Yi Mao, Pengcheng He, Weizhu Chen: HyperTuning: Toward Adapting Large Language Models without Back-propagation. CoRR abs/2211.12485 (2022)
- [i24] Xingxing Zhang, Yiran Liu, Xun Wang, Pengcheng He, Yang Yu, Si-Qing Chen, Wayne Xiong, Furu Wei: Momentum Calibration for Text Generation. CoRR abs/2212.04257 (2022)
- [i23] Yu Li, Baolin Peng, Pengcheng He, Michel Galley, Zhou Yu, Jianfeng Gao: DIONYSUS: A Pre-trained Model for Low-Resource Dialogue Summarization. CoRR abs/2212.10018 (2022)
- [i22] Wen Xiao, Lesly Miculicich, Yang Liu, Pengcheng He, Giuseppe Carenini: Attend to the Right Context: A Plug-and-Play Module for Content-Controllable Summarization. CoRR abs/2212.10819 (2022)
- 2021
- [j3] Pengcheng He, Kehui Sun, Congxu Zhu: A Novel Image Encryption Algorithm Based on the Delayed Maps and Permutation-Confusion-Diffusion Architecture. Secur. Commun. Networks 2021: 6679288:1-6679288:16 (2021)
- [c19] Yuning Mao, Pengcheng He, Xiaodong Liu, Yelong Shen, Jianfeng Gao, Jiawei Han, Weizhu Chen: Reader-Guided Passage Reranking for Open-Domain Question Answering. ACL/IJCNLP (Findings) 2021: 344-350
- [c18] Hao Cheng, Yelong Shen, Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao: UnitedQA: A Hybrid Approach for Open Domain Question Answering. ACL/IJCNLP (1) 2021: 3080-3090
- [c17] Yuning Mao, Pengcheng He, Xiaodong Liu, Yelong Shen, Jianfeng Gao, Jiawei Han, Weizhu Chen: Generation-Augmented Retrieval for Open-Domain Question Answering. ACL/IJCNLP (1) 2021: 4089-4100
- [c16] Chen Liang, Simiao Zuo, Minshuo Chen, Haoming Jiang, Xiaodong Liu, Pengcheng He, Tuo Zhao, Weizhu Chen: Super Tickets in Pre-Trained Language Models: From Model Compression to Improving Generalization. ACL/IJCNLP (1) 2021: 6524-6538
- [c15] Tianwei Zhou, Junrui Lu, Wenwen Zhang, Pengcheng He, Ben Niu: Irregular Flight Timetable Recovery Under COVID-19: An Approach Based on Genetic Algorithm. DMBD (1) 2021: 240-249
- [c14] Chen Liang, Haoming Jiang, Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao, Tuo Zhao: Token-wise Curriculum Learning for Neural Machine Translation. EMNLP (Findings) 2021: 3658-3670
- [c13] Simiao Zuo, Chen Liang, Haoming Jiang, Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen, Tuo Zhao: ARCH: Efficient Adversarial Regularized Training with Caching. EMNLP (Findings) 2021: 4118-4131
- [c12] Simiao Zuo, Chen Liang, Haoming Jiang, Xiaodong Liu, Pengcheng He, Jianfeng Gao, Weizhu Chen, Tuo Zhao: Adversarial Regularization as Stackelberg Game: An Unrolled Optimization Approach. EMNLP (1) 2021: 6562-6577
- [c11] Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen: DeBERTa: Decoding-enhanced BERT with Disentangled Attention. ICLR 2021
- [i21] Sewon Min, Jordan L. Boyd-Graber, Chris Alberti, Danqi Chen, Eunsol Choi, Michael Collins, Kelvin Guu, Hannaneh Hajishirzi, Kenton Lee, Jennimaria Palomaki, Colin Raffel, Adam Roberts, Tom Kwiatkowski, Patrick S. H. Lewis, Yuxiang Wu, Heinrich Küttler, Linqing Liu, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel, Sohee Yang, Minjoon Seo, Gautier Izacard, Fabio Petroni, Lucas Hosseini, Nicola De Cao, Edouard Grave, Ikuya Yamada, Sonse Shimaoka, Masatoshi Suzuki, Shumpei Miyawaki, Shun Sato, Ryo Takahashi, Jun Suzuki, Martin Fajcik, Martin Docekal, Karel Ondrej, Pavel Smrz, Hao Cheng, Yelong Shen, Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao, Barlas Oguz, Xilun Chen, Vladimir Karpukhin, Stan Peshterliev, Dmytro Okhonko, Michael Sejr Schlichtkrull, Sonal Gupta, Yashar Mehdad, Wen-tau Yih: NeurIPS 2020 EfficientQA Competition: Systems, Analyses and Lessons Learned. CoRR abs/2101.00133 (2021)
- [i20] Hao Cheng, Yelong Shen, Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao: UnitedQA: A Hybrid Approach for Open Domain Question Answering. CoRR abs/2101.00178 (2021)
- [i19] Yuning Mao, Pengcheng He, Xiaodong Liu, Yelong Shen, Jianfeng Gao, Jiawei Han, Weizhu Chen: Reader-Guided Passage Reranking for Open-Domain Question Answering. CoRR abs/2101.00294 (2021)
- [i18] Yuhui Wang, Pengcheng He, Xiaoyang Tan: Greedy Multi-step Off-Policy Reinforcement Learning. CoRR abs/2102.11717 (2021)
- [i17] Chen Liang, Haoming Jiang, Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao, Tuo Zhao: Token-wise Curriculum Learning for Neural Machine Translation. CoRR abs/2103.11088 (2021)
- [i16] Simiao Zuo, Chen Liang, Haoming Jiang, Xiaodong Liu, Pengcheng He, Jianfeng Gao, Weizhu Chen, Tuo Zhao: Adversarial Training as Stackelberg Game: An Unrolled Optimization Approach. CoRR abs/2104.04886 (2021)
- [i15] Chen Liang, Simiao Zuo, Minshuo Chen, Haoming Jiang, Xiaodong Liu, Pengcheng He, Tuo Zhao, Weizhu Chen: Super Tickets in Pre-Trained Language Models: From Model Compression to Improving Generalization. CoRR abs/2105.12002 (2021)
- [i14] Simiao Zuo, Chen Liang, Haoming Jiang, Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen, Tuo Zhao: ARCH: Efficient Adversarial Regularized Training with Caching. CoRR abs/2109.07048 (2021)
- [i13] Pengcheng He, Jianfeng Gao, Weizhu Chen: DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing. CoRR abs/2111.09543 (2021)
- [i12] Yichong Xu, Chenguang Zhu, Shuohang Wang, Siqi Sun, Hao Cheng, Xiaodong Liu, Jianfeng Gao, Pengcheng He, Michael Zeng, Xuedong Huang: Human Parity on CommonsenseQA: Augmenting Self-Attention with External Attention. CoRR abs/2112.03254 (2021)
- 2020
- [c10] Xiaodong Liu, Yu Wang, Jianshu Ji, Hao Cheng, Xueyun Zhu, Emmanuel Awa, Pengcheng He, Weizhu Chen, Hoifung Poon, Guihong Cao, Jianfeng Gao: The Microsoft Toolkit of Multi-Task Deep Neural Networks for Natural Language Understanding. ACL (demo) 2020: 118-126
- [c9] Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, Tuo Zhao: SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization. ACL 2020: 2177-2190
- [c8] Tao Shen, Yi Mao, Pengcheng He, Guodong Long, Adam Trischler, Weizhu Chen: Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning. EMNLP (1) 2020: 8980-8994
- [c7] Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, Jiawei Han: On the Variance of the Adaptive Learning Rate and Beyond. ICLR 2020
- [c6] Sewon Min, Jordan L. Boyd-Graber, Chris Alberti, Danqi Chen, Eunsol Choi, Michael Collins, Kelvin Guu, Hannaneh Hajishirzi, Kenton Lee, Jennimaria Palomaki, Colin Raffel, Adam Roberts, Tom Kwiatkowski, Patrick S. H. Lewis, Yuxiang Wu, Heinrich Küttler, Linqing Liu, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel, Sohee Yang, Minjoon Seo, Gautier Izacard, Fabio Petroni, Lucas Hosseini, Nicola De Cao, Edouard Grave, Ikuya Yamada, Sonse Shimaoka, Masatoshi Suzuki, Shumpei Miyawaki, Shun Sato, Ryo Takahashi, Jun Suzuki, Martin Fajcik, Martin Docekal, Karel Ondrej, Pavel Smrz, Hao Cheng, Yelong Shen, Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao, Barlas Oguz, Xilun Chen, Vladimir Karpukhin, Stan Peshterliev, Dmytro Okhonko, Michael Sejr Schlichtkrull, Sonal Gupta, Yashar Mehdad, Wen-tau Yih: NeurIPS 2020 EfficientQA Competition: Systems, Analyses and Lessons Learned. NeurIPS (Competition and Demos) 2020: 86-111
- [c5] Pengcheng He, Xiaohu Jiang, Qingjiang Shi: Robust TOA-Based Source Self-Positioning With Clock Imperfection. WCNC 2020: 1-6
- [i11] Xiaodong Liu, Yu Wang, Jianshu Ji, Hao Cheng, Xueyun Zhu, Emmanuel Awa, Pengcheng He, Weizhu Chen, Hoifung Poon, Guihong Cao, Jianfeng Gao: The Microsoft Toolkit of Multi-Task Deep Neural Networks for Natural Language Understanding. CoRR abs/2002.07972 (2020)
- [i10] Xiaodong Liu, Hao Cheng, Pengcheng He, Weizhu Chen, Yu Wang, Hoifung Poon, Jianfeng Gao: Adversarial Training for Large Neural Language Models. CoRR abs/2004.08994 (2020)
- [i9] Tao Shen, Yi Mao, Pengcheng He, Guodong Long, Adam Trischler, Weizhu Chen: Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning. CoRR abs/2004.14224 (2020)
- [i8] Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen: DeBERTa: Decoding-enhanced BERT with Disentangled Attention. CoRR abs/2006.03654 (2020)
- [i7] Yuning Mao, Pengcheng He, Xiaodong Liu, Yelong Shen, Jianfeng Gao, Jiawei Han, Weizhu Chen: Generation-Augmented Retrieval for Open-domain Question Answering. CoRR abs/2009.08553 (2020)
2010 – 2019
- 2019
- [j2] Guokai Zhang, Weigang Wang, Dinghao Yang, Jihao Luo, Pengcheng He, Yongtong Wang, Ye Luo, Binghui Zhao, Jianwei Lu: A Bi-Attention Adversarial Network for Prostate Cancer Segmentation. IEEE Access 7: 131448-131458 (2019)
- [c4] Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao: Multi-Task Deep Neural Networks for Natural Language Understanding. ACL (1) 2019: 4487-4496
- [i6] Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao: Multi-Task Deep Neural Networks for Natural Language Understanding. CoRR abs/1901.11504 (2019)
- [i5] Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao: Improving Multi-Task Deep Neural Networks via Knowledge Distillation for Natural Language Understanding. CoRR abs/1904.09482 (2019)
- [i4]