


Furu Wei
Publications, 2020 – today
2022
- [c163] Shengqiang Zhang, Xingxing Zhang, Hangbo Bao, Furu Wei: Attention Temperature Matters in Abstractive Summarization Distillation. ACL (1) 2022: 127-141
- [c162] Guanhua Chen, Shuming Ma, Yun Chen, Dongdong Zhang, Jia Pan, Wenping Wang, Furu Wei: Towards Making the Most of Cross-Lingual Transfer for Zero-Shot Neural Machine Translation. ACL (1) 2022: 142-157
- [c161] Ruipeng Jia, Xingxing Zhang, Yanan Cao, Zheng Lin, Shi Wang, Furu Wei: Neural Label Search for Zero-Shot Multi-Lingual Extractive Summarization. ACL (1) 2022: 561-570
- [c160] Jing Qian, Li Dong, Yelong Shen, Furu Wei, Weizhu Chen: Controllable Natural Language Generation with Contrastive Prefixes. ACL (Findings) 2022: 2912-2924
- [c159] Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei A. F. Florêncio, Cha Zhang, Furu Wei: XFUND: A Benchmark Dataset for Multilingual Visually Rich Form Understanding. ACL (Findings) 2022: 3214-3224
- [c158] Tianyu Chen, Hangbo Bao, Shaohan Huang, Li Dong, Binxing Jiao, Daxin Jiang, Haoyi Zhou, Jianxin Li, Furu Wei: THE-X: Privacy-Preserving Transformer Inference with Homomorphic Encryption. ACL (Findings) 2022: 3510-3520
- [c157] Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei: SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing. ACL (1) 2022: 5723-5738
- [c156] Junlong Li, Yiheng Xu, Lei Cui, Furu Wei: MarkupLM: Pre-training of Text and Markup Language for Visually Rich Document Understanding. ACL (1) 2022: 6078-6087
- [c155] Haoyu Song, Li Dong, Weinan Zhang, Ting Liu, Furu Wei: CLIP Models are Few-Shot Learners: Empirical Studies on VQA and Visual Entailment. ACL (1) 2022: 6088-6100
- [c154] Zewen Chi, Shaohan Huang, Li Dong, Shuming Ma, Bo Zheng, Saksham Singhal, Payal Bajaj, Xia Song, Xian-Ling Mao, Heyan Huang, Furu Wei: XLM-E: Cross-lingual Language Model Pre-training via ELECTRA. ACL (1) 2022: 6170-6182
- [c153] Damai Dai, Li Dong, Shuming Ma, Bo Zheng, Zhifang Sui, Baobao Chang, Furu Wei: StableMoE: Stable Routing Strategy for Mixture of Experts. ACL (1) 2022: 7085-7095
- [c152] Damai Dai, Li Dong, Yaru Hao, Zhifang Sui, Baobao Chang, Furu Wei: Knowledge Neurons in Pretrained Transformers. ACL (1) 2022: 8493-8502
- [c151] Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu: Unispeech-Sat: Universal Speech Representation Learning With Speaker Aware Pre-Training. ICASSP 2022: 6152-6156
- [i132] Xu Zhang, Jian Yang, Haoyang Huang, Shuming Ma, Dongdong Zhang, Jinlong Li, Furu Wei: SMDT: Selective Memory-Augmented Neural Document Translation. CoRR abs/2201.01631 (2022)
- [i131] Juncheng Wan, Jian Yang, Shuming Ma, Dongdong Zhang, Weinan Zhang, Yong Yu, Furu Wei: Phrase-level Adversarial Example Generation for Neural Machine Translation. CoRR abs/2201.02009 (2022)
- [i130] Ting Jiang, Shaohan Huang, Zihan Zhang, Deqing Wang, Fuzhen Zhuang, Furu Wei, Haizhen Huang, Liangjie Zhang, Qi Zhang: PromptBERT: Improving BERT Sentence Embeddings with Prompts. CoRR abs/2201.04337 (2022)
- [i129] Yunzhi Yao, Shaohan Huang, Ningyu Zhang, Li Dong, Furu Wei, Huajun Chen: Kformer: Knowledge Injection in Transformer Feed-Forward Layers. CoRR abs/2201.05742 (2022)
- [i128] Xin Sun, Tao Ge, Shuming Ma, Jingjing Li, Furu Wei, Houfeng Wang: A Unified Strategy for Multilingual Grammatical Error Correction with Pre-trained Cross-Lingual Language Model. CoRR abs/2201.10707 (2022)
- [i127] Yuxin Fang, Li Dong, Hangbo Bao, Xinggang Wang, Furu Wei: Corrupted Image Modeling for Self-Supervised Visual Pre-Training. CoRR abs/2202.03382 (2022)
- [i126] Tao Ge, Furu Wei: EdgeFormer: A Parameter-Efficient Transformer for On-Device Seq2seq Generation. CoRR abs/2202.07959 (2022)
- [i125] Da Yin, Li Dong, Hao Cheng, Xiaodong Liu, Kai-Wei Chang, Furu Wei, Jianfeng Gao: A Survey of Knowledge-Intensive NLP with Pre-Trained Language Models. CoRR abs/2202.08772 (2022)
- [i124] Lianzhe Huang, Shuming Ma, Dongdong Zhang, Furu Wei, Houfeng Wang: Zero-shot Cross-lingual Transfer of Prompt-based Tuning with a Unified Multilingual Prompt. CoRR abs/2202.11451 (2022)
- [i123] Jing Qian, Li Dong, Yelong Shen, Furu Wei, Weizhu Chen: Controllable Natural Language Generation with Contrastive Prefixes. CoRR abs/2202.13257 (2022)
- [i122] Hongyu Wang, Shuming Ma, Li Dong, Shaohan Huang, Dongdong Zhang, Furu Wei: DeepNet: Scaling Transformers to 1,000 Layers. CoRR abs/2203.00555 (2022)
- [i121] Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei: DiT: Self-supervised Pre-training for Document Image Transformer. CoRR abs/2203.02378 (2022)
- [i120] Haoyu Song, Li Dong, Wei-Nan Zhang, Ting Liu, Furu Wei: CLIP Models are Few-shot Learners: Empirical Studies on VQA and Visual Entailment. CoRR abs/2203.07190 (2022)
- [i119] Heming Xia, Tao Ge, Furu Wei, Zhifang Sui: Lossless Speedup of Autoregressive Translation with Generalized Aggressive Decoding. CoRR abs/2203.16487 (2022)
- [i118] Junyi Ao, Ziqiang Zhang, Long Zhou, Shujie Liu, Haizhou Li, Tom Ko, Lirong Dai, Jinyu Li, Yao Qian, Furu Wei: Pre-Training Transformer Decoder for End-to-End ASR Model with Unpaired Speech Data. CoRR abs/2203.17113 (2022)
- [i117] Shuo Ren, Shujie Liu, Yu Wu, Long Zhou, Furu Wei: Speech Pre-training with Acoustic Piece. CoRR abs/2204.03240 (2022)
- [i116] Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei: LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking. CoRR abs/2204.08387 (2022)
- [i115] Damai Dai, Li Dong, Shuming Ma, Bo Zheng, Zhifang Sui, Baobao Chang, Furu Wei: StableMoE: Stable Routing Strategy for Mixture of Experts. CoRR abs/2204.08396 (2022)
- [i114] Zewen Chi, Li Dong, Shaohan Huang, Damai Dai, Shuming Ma, Barun Patra, Saksham Singhal, Payal Bajaj, Xia Song, Furu Wei: On the Representation Collapse of Sparse Mixture of Experts. CoRR abs/2204.09179 (2022)
- [i113] Sanyuan Chen, Yu Wu, Chengyi Wang, Shujie Liu, Zhuo Chen, Peidong Wang, Gang Liu, Jinyu Li, Jian Wu, Xiangzhan Yu, Furu Wei: Why does Self-Supervised Learning for Speech Recognition Benefit Speaker Recognition? CoRR abs/2204.12765 (2022)
- [i112] Ruipeng Jia, Xingxing Zhang, Yanan Cao, Shi Wang, Zheng Lin, Furu Wei: Neural Label Search for Zero-Shot Multi-Lingual Extractive Summarization. CoRR abs/2204.13512 (2022)
- [i111] Weizhi Wang, Li Dong, Hao Cheng, Haoyu Song, Xiaodong Liu, Xifeng Yan, Jianfeng Gao, Furu Wei: Visually-Augmented Language Modeling. CoRR abs/2205.10178 (2022)
- [i110] Zhixiong Han, Yaru Hao, Li Dong, Furu Wei: Prototypical Calibration for Few-shot Learning of Language Models. CoRR abs/2205.10183 (2022)
- [i109] Tao Ge, Heming Xia, Xin Sun, Si-Qing Chen, Furu Wei: Lossless Acceleration for Seq2seq Generation with Aggressive Decoding. CoRR abs/2205.10350 (2022)
- [i108] Tianyu Chen, Hangbo Bao, Shaohan Huang, Li Dong, Binxing Jiao, Daxin Jiang, Haoyi Zhou, Jianxin Li, Furu Wei: THE-X: Privacy-Preserving Transformer Inference with Homomorphic Encryption. CoRR abs/2206.00216 (2022)
- [i107] Tianyu Chen, Shaohan Huang, Yuan Xie, Binxing Jiao, Daxin Jiang, Haoyi Zhou, Jianxin Li, Furu Wei: Task-Specific Expert Pruning for Sparse Mixture-of-Experts. CoRR abs/2206.00277 (2022)
- [i106] Hangbo Bao, Wenhui Wang, Li Dong, Furu Wei: VL-BEiT: Generative Vision-Language Pretraining. CoRR abs/2206.01127 (2022)
- [i105] Ziqiang Zhang, Junyi Ao, Long Zhou, Shujie Liu, Furu Wei, Jinyu Li: The YiTrans End-to-End Speech Translation System for IWSLT 2022 Offline Shared Task. CoRR abs/2206.05777 (2022)
- [i104] Yaru Hao, Haoyu Song, Li Dong, Shaohan Huang, Zewen Chi, Wenhui Wang, Shuming Ma, Furu Wei: Language Models are General-Purpose Interfaces. CoRR abs/2206.06336 (2022)
- [i103] Chengyi Wang, Yiming Wang, Yu Wu, Sanyuan Chen, Jinyu Li, Shujie Liu, Furu Wei: Supervision-Guided Codebooks for Masked Prediction in Speech Pre-training. CoRR abs/2206.10125 (2022)

2021
- [c150] Yaru Hao, Li Dong, Furu Wei, Ke Xu: Self-Attention Attribution: Interpreting Information Interactions Inside Transformer. AAAI 2021: 12963-12971
- [c149] Jian Yang, Yuwei Yin, Shuming Ma, Haoyang Huang, Dongdong Zhang, Zhoujun Li, Furu Wei: Multilingual Agreement for Multilingual Neural Machine Translation. ACL/IJCNLP (2) 2021: 233-239
- [c148] Yunzhi Yao, Shaohan Huang, Wenhui Wang, Li Dong, Furu Wei: Adapt-and-Distill: Developing Small, Fast and Effective Pretrained Language Models for Domains. ACL/IJCNLP (Findings) 2021: 460-470
- [c147] Yu Tang, Long Zhou, Ambrosio Blanco, Shujie Liu, Furu Wei, Ming Zhou, Muyun Yang: Grammar-Based Patches Generation for Automated Program Repair. ACL/IJCNLP (Findings) 2021: 1300-1305
- [c146] Wenhui Wang, Hangbo Bao, Shaohan Huang, Li Dong, Furu Wei: MiniLMv2: Multi-Head Self-Attention Relation Distillation for Compressing Pretrained Transformers. ACL/IJCNLP (Findings) 2021: 2140-2151
- [c145] Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei A. F. Florêncio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou: LayoutLMv2: Multi-modal Pre-training for Visually-rich Document Understanding. ACL/IJCNLP (1) 2021: 2579-2591
- [c144] Bo Zheng, Li Dong, Shaohan Huang, Wenhui Wang, Zewen Chi, Saksham Singhal, Wanxiang Che, Ting Liu, Xia Song, Furu Wei: Consistency Regularization for Cross-Lingual Fine-Tuning. ACL/IJCNLP (1) 2021: 3403-3417
- [c143] Zewen Chi, Li Dong, Bo Zheng, Shaohan Huang, Xian-Ling Mao, Heyan Huang, Furu Wei: Improving Pretrained Cross-Lingual Language Models via Self-Labeled Word Alignment. ACL/IJCNLP (1) 2021: 3418-3430
- [c142] Yuekai Zhao, Li Dong, Yelong Shen, Zhihua Zhang, Furu Wei, Weizhu Chen: Memory-Efficient Differentiable Transformer Architecture Search. ACL/IJCNLP (Findings) 2021: 4254-4264
- [c141] Yaru Hao, Li Dong, Hangbo Bao, Ke Xu, Furu Wei: Learning to Sample Replacements for ELECTRA Pre-Training. ACL/IJCNLP (Findings) 2021: 4495-4506
- [c140] Shuo Ren, Long Zhou, Shujie Liu, Furu Wei, Ming Zhou, Shuai Ma: SemFace: Pre-training Encoder and Decoder with a Semantic Interface for Neural Machine Translation. ACL/IJCNLP (1) 2021: 4518-4527
- [c139] Xin Sun, Tao Ge, Furu Wei, Houfeng Wang: Instantaneous Grammatical Error Correction with Shallow Aggressive Decoding. ACL/IJCNLP (1) 2021: 5937-5947
- [c138] Nan Yang, Furu Wei, Binxing Jiao, Daxing Jiang, Linjun Yang: xMoCo: Cross Momentum Contrastive Learning for Open-Domain Question Answering. ACL/IJCNLP (1) 2021: 6120-6129
- [c137] Guanhua Chen, Shuming Ma, Yun Chen, Li Dong, Dongdong Zhang, Jia Pan, Wenping Wang, Furu Wei: Zero-Shot Cross-Lingual Transfer of Neural Machine Translation with Multilingual Pretrained Encoders. EMNLP (1) 2021: 15-26
- [c136] Wangchunshu Zhou, Tao Ge, Canwen Xu, Ke Xu, Furu Wei: Improving Sequence-to-Sequence Pre-training via Sequence Span Rewriting. EMNLP (1) 2021: 571-582
- [c135] Zewen Chi, Li Dong, Shuming Ma, Shaohan Huang, Saksham Singhal, Xian-Ling Mao, Heyan Huang, Xia Song, Furu Wei: mT6: Multilingual Pretrained Text-to-Text Transformer with Translation Pairs. EMNLP (1) 2021: 1671-1683
- [c134] Bo Zheng, Li Dong, Shaohan Huang, Saksham Singhal, Wanxiang Che, Ting Liu, Xia Song, Furu Wei: Allocating Large Vocabulary Capacity for Cross-Lingual Language Model Pre-Training. EMNLP (1) 2021: 3203-3215
- [c133] Zilong Wang, Yiheng Xu, Lei Cui, Jingbo Shang, Furu Wei: LayoutReader: Pre-training of Text and Layout for Reading Order Detection. EMNLP (1) 2021: 4735-4744
- [c132] Jiaqi Bai, Long Zhou, Ambrosio Blanco, Shujie Liu, Furu Wei, Ming Zhou, Zhoujun Li: Jointly Learning to Repair Code and Generate Commit Message. EMNLP (1) 2021: 9784-9795
- [c131] Canwen Xu, Wangchunshu Zhou, Tao Ge, Ke Xu, Julian J. McAuley, Furu Wei: Beyond Preserved Accuracy: Evaluating Loyalty and Robustness of BERT Compression. EMNLP (1) 2021: 10653-10659
- [c130] Chengyi Wang, Yu Wu, Yao Qian, Ken'ichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang: UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data. ICML 2021: 10937-10947
- [c129] Canwen Xu, Wangchunshu Zhou, Tao Ge, Ke Xu, Julian J. McAuley, Furu Wei: Blow the Dog Whistle: A Chinese Dataset for Cant Understanding with Common Sense and World Knowledge. NAACL-HLT 2021: 2139-2145
- [c128] Zewen Chi, Li Dong, Furu Wei, Nan Yang, Saksham Singhal, Wenhui Wang, Xia Song, Xian-Ling Mao, Heyan Huang, Ming Zhou: InfoXLM: An Information-Theoretic Framework for Cross-Lingual Language Model Pre-Training. NAACL-HLT 2021: 3576-3588
- [c127] Jian Yang, Juncheng Wan, Shuming Ma, Haoyang Huang, Dongdong Zhang, Yong Yu, Zhoujun Li, Furu Wei: Learning to Select Relevant Knowledge for Neural Machine Translation. NLPCC (1) 2021: 79-91
- [c126] Jian Yang, Shuming Ma, Haoyang Huang, Dongdong Zhang, Li Dong, Shaohan Huang, Alexandre Muzio, Saksham Singhal, Hany Hassan, Xia Song, Furu Wei: Multilingual Machine Translation Systems from Microsoft for WMT21 Shared Task. WMT@EMNLP 2021: 446-455
- [i102] Wangchunshu Zhou, Tao Ge, Ke Xu, Furu Wei: Improving Sequence-to-Sequence Pre-training via Sequence Span Rewriting. CoRR abs/2101.00416 (2021)
- [i101] Chengyi Wang, Yu Wu, Yao Qian, Ken'ichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang: UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data. CoRR abs/2101.07597 (2021)
- [i100] Canwen Xu, Wangchunshu Zhou, Tao Ge, Ke Xu, Julian J. McAuley, Furu Wei: Blow the Dog Whistle: A Chinese Dataset for Cant Understanding with Common Sense and World Knowledge. CoRR abs/2104.02704 (2021)
- [i99] Zewen Chi, Li Dong, Shuming Ma, Shaohan Huang, Xian-Ling Mao, Heyan Huang, Furu Wei: mT6: Multilingual Pretrained Text-to-Text Transformer with Translation Pairs. CoRR abs/2104.08692 (2021)
- [i98] Damai Dai, Li Dong, Yaru Hao, Zhifang Sui, Furu Wei: Knowledge Neurons in Pretrained Transformers. CoRR abs/2104.08696 (2021)
- [i97] Guanhua Chen, Shuming Ma, Yun Chen, Li Dong, Dongdong Zhang, Jia Pan, Wenping Wang, Furu Wei: Zero-shot Cross-lingual Transfer of Neural Machine Translation with Multilingual Pretrained Encoders. CoRR abs/2104.08757 (2021)
- [i96] Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florêncio, Cha Zhang, Furu Wei: LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding. CoRR abs/2104.08836 (2021)
- [i95] Yuekai Zhao, Li Dong, Yelong Shen, Zhihua Zhang, Furu Wei, Weizhu Chen: Memory-Efficient Differentiable Transformer Architecture Search. CoRR abs/2105.14669 (2021)
- [i94] Shengqiang Zhang, Xingxing Zhang, Hangbo Bao, Furu Wei: Attention Temperature Matters in Abstractive Summarization Distillation. CoRR abs/2106.03441 (2021)
- [i93] Xin Sun, Tao Ge, Furu Wei, Houfeng Wang: Instantaneous Grammatical Error Correction with Shallow Aggressive Decoding. CoRR abs/2106.04970 (2021)
- [i92] Tengchao Lv, Lei Cui, Momcilo Vasilijevic, Furu Wei: VT-SSum: A Benchmark Dataset for Video Transcript Segmentation and Summarization. CoRR abs/2106.05606 (2021)
- [i91] Zewen Chi, Li Dong, Bo Zheng, Shaohan Huang, Xian-Ling Mao, Heyan Huang, Furu Wei: Improving Pretrained Cross-Lingual Language Models via Self-Labeled Word Alignment. CoRR abs/2106.06381 (2021)
- [i90] Bo Zheng, Li Dong, Shaohan Huang, Wenhui Wang, Zewen Chi, Saksham Singhal, Wanxiang Che, Ting Liu, Xia Song, Furu Wei: Consistency Regularization for Cross-Lingual Fine-Tuning. CoRR abs/2106.08226 (2021)
- [i89] Hangbo Bao, Li Dong, Furu Wei: BEiT: BERT Pre-Training of Image Transformers. CoRR abs/2106.08254 (2021)
- [i88] Yunzhi Yao, Shaohan Huang, Wenhui Wang, Li Dong, Furu Wei: Adapt-and-Distill: Developing Small, Fast and Effective Pretrained Language Models for Domains. CoRR abs/2106.13474 (2021)
- [i87] Yaru Hao, Li Dong, Hangbo Bao, Ke Xu, Furu Wei: Learning to Sample Replacements for ELECTRA Pre-Training. CoRR abs/2106.13715 (2021)
- [i86] Shuming Ma, Li Dong, Shaohan Huang, Dongdong Zhang, Alexandre Muzio, Saksham Singhal, Hany Hassan Awadalla, Xia Song, Furu Wei: DeltaLM: Encoder-Decoder Pre-training for Language Generation and Translation by Augmenting Pretrained Multilingual Encoders. CoRR abs/2106.13736 (2021)
- [i85] Zewen Chi, Shaohan Huang, Li Dong, Shuming Ma, Saksham Singhal, Payal Bajaj, Xia Song, Furu Wei: XLM-E: Cross-lingual Language Model Pre-training via ELECTRA. CoRR abs/2106.16138 (2021)
- [i84] Zilong Wang, Yiheng Xu, Lei Cui, Jingbo Shang, Furu Wei: LayoutReader: Pre-training of Text and Layout for Reading Order Detection. CoRR abs/2108.11591 (2021)
- [i83] Canwen Xu, Wangchunshu Zhou, Tao Ge, Ke Xu, Julian J. McAuley, Furu Wei: Beyond Preserved Accuracy: Evaluating Loyalty and Robustness of BERT Compression. CoRR abs/2109.03228 (2021)
- [i82] Shusheng Xu, Xingxing Zhang, Yi Wu, Furu Wei: Sequence Level Contrastive Learning for Text Summarization. CoRR abs/2109.03481 (2021)
- [i81] Bo Zheng, Li Dong, Shaohan Huang, Saksham Singhal, Wanxiang Che, Ting Liu, Xia Song, Furu Wei: Allocating Large Vocabulary Capacity for Cross-lingual Language Model Pre-training. CoRR abs/2109.07306 (2021)
- [i80] Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei A. F. Florêncio, Cha Zhang, Zhoujun Li, Furu Wei: TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models. CoRR abs/2109.10282 (2021)
- [i79] Jiaqi Bai, Long Zhou, Ambrosio Blanco, Shujie Liu, Furu Wei, Ming Zhou, Zhoujun Li: Jointly Learning to Repair Code and Generate Commit Message. CoRR abs/2109.12296 (2021)
- [i78] Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu: UniSpeech-SAT: Universal Speech Representation Learning with Speaker Aware Pre-Training. CoRR abs/2110.05752 (2021)
- [i77] Junyi Ao, Rui Wang, Long Zhou, Shujie Liu, Shuo Ren, Yu Wu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei: SpeechT5: Unified-Modal Encoder-Decoder Pre-training for Spoken Language Processing. CoRR abs/2110.07205 (2021)
- [i76] Junlong Li, Yiheng Xu, Lei Cui, Furu Wei: MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding. CoRR abs/2110.08518 (2021)
- [i75] Guanhua Chen, Shuming Ma, Yun Chen, Dongdong Zhang, Jia Pan, Wenping Wang, Furu Wei: Towards Making the Most of Multilingual Pretraining for Zero-Shot Neural Machine Translation. CoRR abs/2110.08547 (2021)
- [i74] Ting Jiang, Shaohan Huang, Zihan Zhang, Deqing Wang, Fuzhen Zhuang, Furu Wei, Haizhen Huang, Liangjie Zhang, Qi Zhang: Improving Non-autoregressive Generation with Mixup Training. CoRR abs/2110.11115 (2021)
- [i73] Hangbo Bao, Li Dong, Wenhui Wang, Nan Yang, Furu Wei: s2s-ft: Fine-Tuning Pretrained Transformer Encoders for Sequence-to-Sequence Learning. CoRR abs/2110.13640 (2021)
- [i72] Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei: WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing. CoRR abs/2110.13900 (2021)
- [i71] Wangyou Zhang, Zhuo Chen, Naoyuki Kanda, Shujie Liu, Jinyu Li, Sefik Emre Eskimez, Takuya Yoshioka, Xiong Xiao, Zhong Meng, Yanmin Qian, Furu Wei: Separating Long-Form Speech with Group-Wise Permutation Invariant Training. CoRR abs/2110.14142 (2021)
- [i70] Jian Yang, Shuming Ma, Haoyang Huang, Dongdong Zhang, Li Dong, Shaohan Huang, Alexandre Muzio, Saksham Singhal, Hany Hassan Awadalla, Xia Song, Furu Wei: Multilingual Machine Translation Systems from Microsoft for WMT21 Shared Task. CoRR abs/2111.02086 (2021)
- [i69] Wenhui Wang, Hangbo Bao, Li Dong, Furu Wei: VLMo: Unified Vision-Language Pre-Training with Mixture-of-Modality-Experts. CoRR abs/2111.02358 (2021)
- [i68] Lei Cui, Yiheng Xu, Tengchao Lv, Furu Wei: Document AI: Benchmarks, Models and Applications. CoRR abs/2111.08609 (2021)
- [i67] Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo: Swin Transformer V2: Scaling Up Capacity and Resolution. CoRR abs/2111.09883 (2021)
- [i66] Zekun Wang, Wenhui Wang, Haichao Zhu, Ming Liu, Bing Qin, Furu Wei: Distilled Dual-Encoder Model for Vision-Language Understanding. CoRR abs/2112.08723 (2021)

2020
- [j19] Qingyu Zhou, Nan Yang, Furu Wei, Shaohan Huang, Ming Zhou, Tiejun Zhao: A Joint Sentence Scoring and Selection Framework for Neural Extractive Document Summarization. IEEE ACM Trans. Audio Speech Lang. Process. 28: 671-681 (2020)
- [c125] Zewen Chi, Li Dong, Furu Wei, Wenhui Wang, Xian-Ling Mao, Heyan Huang: Cross-Lingual Natural Language Generation via Pre-Training. AAAI 2020: 7570-7577
- [c124] Yinuo Guo, Tao Ge, Furu Wei: Fact-Aware Sentence Split and Rephrase with Permutation Invariant Training. AAAI 2020: 7855-7862
- [c123] Zhongli Li, Wenhui Wang, Li Dong, Furu Wei, Ke Xu: Harvesting and Refining Question-Answer Pairs for Unsupervised QA. ACL 2020: 6719-6728
- [c122] Minghao Li, Yiheng Xu, Lei Cui, Shaohan Huang, Furu Wei, Zhoujun Li, Ming Zhou: DocBank: A Benchmark Dataset for Document Layout Analysis. COLING 2020: 949-960
- [c121] Shaohan Huang, Furu Wei, Lei Cui, Xingxing Zhang, Ming Zhou: Unsupervised Fine-tuning for Text Clustering. COLING 2020: 5530-5534
- [c120] Qingyu Zhou, Furu Wei, Ming Zhou: At Which Level Should We Extract? An Empirical Analysis on Extractive Document Summarization. COLING 2020: 5617-5628
- [c119] Chaoqun Duan, Lei Cui, Shuming Ma, Furu Wei, Conghui Zhu, Tiejun Zhao: Multimodal Matching Transformer for Live Commenting. ECAI 2020: 1998-2005
- [c118] Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, Yejin Choi, Jianfeng Gao: Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks. ECCV (30) 2020: 121-137
- [c117] Wangchunshu Zhou, Tao Ge, Chang Mu, Ke Xu, Furu Wei, Ming Zhou: Improving Grammatical Error Correction with Machine Translation Pairs. EMNLP (Findings) 2020: 318-328
- [c116]